For OpenAI, o1 represents a step toward its broader goal of human-like artificial intelligence. More practically, it does a better job at writing code and solving multistep problems than previous models. But it’s also more expensive and slower to use than GPT-4o. OpenAI is calling this release of o1 a “preview” to emphasize how nascent it is.

The training behind o1 is fundamentally different from its predecessors, OpenAI’s research lead, Jerry Tworek, tells me, though the company is being vague about the exact details. He says o1 “has been trained using a completely new optimization algorithm and a new training dataset specifically tailored for it.”

OpenAI taught previous GPT models to mimic patterns from its training data. With o1, it trained the model to solve problems on its own using a technique known as reinforcement learning, which teaches the system through rewards and penalties. It then uses a “chain of thought” to process queries, similarly to how humans process problems by going through them step-by-step.
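
To make the distinction concrete, here is a toy sketch of training-by-reward versus training-by-imitation. It is purely illustrative: OpenAI has not published o1’s optimization details, and every name and number below is made up for the example.

```python
import random

# Toy reinforcement-learning loop: a "policy" keeps a preference score for each
# candidate answer and nudges that score up or down based on a reward signal,
# instead of simply imitating whatever answers appear in a training corpus.
CANDIDATES = ["The answer is 4", "The answer is 5"]
CORRECT = "The answer is 4"

preferences = {c: 0.0 for c in CANDIDATES}

def sample_answer():
    # Answers with higher preference scores get sampled more often.
    weights = [2.0 ** preferences[c] for c in CANDIDATES]
    return random.choices(CANDIDATES, weights=weights)[0]

for step in range(1000):
    answer = sample_answer()
    reward = 1.0 if answer == CORRECT else -1.0  # reward correct answers, penalize wrong ones
    preferences[answer] += 0.1 * reward          # nudge the policy toward rewarded behavior

print(preferences)  # the correct answer ends up strongly preferred
```

The “chain of thought” part is separate from training: at inference time the model writes out intermediate steps before committing to a final answer, rather than answering in one shot.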

At the same time, o1 is not as capable as GPT-4o in a lot of areas. It doesn’t do as well on factual knowledge about the world. It also doesn’t have the ability to browse the web or process files and images. Still, the company believes it represents a brand-new class of capabilities. It was named o1 to indicate “resetting the counter back to 1.”

I think this is the most important part (emphasis mine):

As a result of this new training methodology, OpenAI says the model should be more accurate. “We have noticed that this model hallucinates less,” Tworek says. But the problem still persists. “We can’t say we solved hallucinations.”

  • Chozo@fedia.io (7 days ago)

    Technophobes are trying to downplay this because “AI bad”, but this is actually a pretty significant leap from GPT, and we should all be keeping an eye on it, especially those who are acting like this is just more auto-predict. This is a completely different generation process from GPT, which is just glorified auto-predict. It’s the difference between learning a language by just reading a lot of books in that language, and learning a language by speaking with people in that language and adjusting based on their feedback until you’re fluent.

    If you thought AI comments flooding social media was already bad, it’s soon going to get a lot harder to discern who is real, especially once people get access to a web-connected version of this model.

    • LANIK2000@lemmy.world (5 days ago)

      Big leap for OpenAI, as in a kind of ML model they haven’t explored yet. Not that big for AI in general, as others have done the same with similar results. Until they can produce graphs where they look exceptionally better compared to models other than their own, it’s not that much of a breakthrough.

    • Voroxpete@sh.itjust.works (6 days ago)

      It’s weird how so many of these “technophobes” are IT professionals. Crazy that people would line up to go into a profession they so obviously hate and fear.

      • Chozo@fedia.io (6 days ago)

        I’ve worked in tech for 20 years. Luddites are quite common in this field.

        • Voroxpete@sh.itjust.works (6 days ago)

          Read some history, mate. The Luddites weren’t technophobes either. They hated the way that capitalism was reaping all the rewards of industrialization. They were all for technological advancement; they just wanted it to benefit everyone.

          • Chozo@fedia.io (6 days ago)

            I’m using the current-day usage of the term, but I think you knew that.

    • BetaDoggo_@lemmy.world (7 days ago)

      All signs point to this being a finetune of GPT-4o with additional chain-of-thought steps before the final answer. It has exactly the same pitfalls as the existing model (the 9.11 > 9.8 tokenization error, failing simple riddles, being unable to assert that the user is wrong, etc.). It’s still a transformer and it’s still next-token prediction. They hide the thought steps to mask this fact and to prevent others from benefiting from all of the finetuning data they paid for.
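
      A quick illustration of the 9.11 > 9.8 pitfall mentioned above. Numerically 9.8 is the larger number; one common guess at the failure mode is that the digits after the decimal point get read as if they were version-number components. The sketch below only illustrates that guess, it isn’t a claim about o1’s internals:

      ```python
      # Correct numeric comparison: 9.11 is less than 9.80.
      print(9.11 > 9.8)  # False

      # "Version number" style reading that matches the wrong answer models often give
      # (a speculative illustration of the failure mode, not how any model actually works).
      major_a, minor_a = "9.11".split(".")
      major_b, minor_b = "9.8".split(".")
      print((int(major_a), int(minor_a)) > (int(major_b), int(minor_b)))  # True, since 11 > 8
      ```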

      • Echo Dot@feddit.uk (6 days ago)

        They hide the thought steps to mask this fact and to prevent others from benefiting from all of the finetuning data they paid for.

        Well, possibly, but they also hide the chain-of-thought steps because, as they point out in their article, it needs to be able to think about things outside of what it’s normally allowed to say, which obviously means you can’t show that content. If you’re trying to come up with worst-case scenarios for a situation, you actually have to be able to think about those worst-case scenarios.

  • Lucidlethargy@sh.itjust.works (5 days ago)

    I think I’ve used it if this is the latest available, and it’s terrible. It keeps feeding me wrong information, and when you correct it, it says you’re right… But if you ask it again, it again feeds you the wrong information.

  • ulkesh@lemmy.world (5 days ago)

    I just love how people seem to want to avoid using the word lie.

    It’s either misinformation, or alternative facts, or hallucinations.

    Granted, a lie does tend to have intent behind it, so with ChatGPT, it’s probably better to say falsehood, instead. But either way, it’s not fact, it’s not truth, and people, especially schools, should stop using it as a credible source.

    • JustTesting@lemmy.hogru.ch (5 days ago)

      There was a recent paper that argues “bullshitting” is the most apt analogy, i.e., saying something to satisfy the other person without caring about the truth of what you say.

    • IndustryStandard@lemmy.world (5 days ago)

      Being wrong is not the same as lying. When LLMs start giving wrong answers on purpose to mislead people we would have a big problem.

      • irreticent@lemmy.world (5 days ago)

        The thought of a maliciously deceptive AGI is terrifying to me. Many, many people will trust it until it’s too late.

  • Etterra@lemmy.world (5 days ago)

    That’s not what reasoning is. Reasoning is understanding what they’re talking about and being able to draw logical conclusions based on what they’ve learned. It’s being able to say, “I didn’t know, but wait a second and I’ll look it up,” and then summing that info up in original language.

    All OpenAI did was make it less stupid and slap a new coat of paint on it, hoping nobody asks too many questions.

    • ours@lemmy.world (5 days ago)

      And this is something data scientists have already been doing with existing LLMs.

  • LANIK2000@lemmy.world (5 days ago)

    Dang, OpenAI just pulled an Apple. Do something other people have already done with the same results (but importantly before they made a big fuss about it), claim it’s their innovation, give it a bloated name so people imagine it’s more than it is and produce a graph comparing themselves to themselves, hoping nobody will look at the competition.

    Just like Apple, they have their own selling point, but instead they seem to prefer making stuff up while forgetting why people use ’em.

    On a side note they also pulled an Elon. Where’s my AI companion that can comment on video in realtime and sing to me??? Ya had it “working” “live” a couple months ago, WHERE IS IT?!?

    • Semperverus@lemmy.world (3 days ago)

      Meanwhile, a bald turtle and his AI anime daughter on Twitch can do exactly this, and he’s building her at home on Nvidia GPUs.

      (Vedal987 and Neuro-sama, if you’re curious)

    • riodoro1@lemmy.world (5 days ago)

      Pulled an Apple?

      I know you hate Apple because Android is way better, but people loved their iPods, iPhones, AirPods, and Apple Watches. Sure, those things were made before, but Apple did make them better. So I don’t know what your point is.

      • LANIK2000@lemmy.world (5 days ago)

        Assuming I’m an Android fan for pointing out that Apple does shady PR. I literally mention that Apple devices have their selling point. And it isn’t UNMATCHED PERFORMANCE or CUTTING EDGE TECHNOLOGY as their ads seem to suggest. It’s a polished experience and beautiful presentation; that is unmatched. Unlike the hot mess that is Android. Android also has its selling points, but this reply is already getting long. Just wanted to point out your pettiness and unwillingness to read more than a sentence.

  • khepri@lemmy.world (5 days ago)

    So they slapped some reinforcement learning on top of their LLM and are claiming that gives it “reasoning capabilities”? Or am I missing something?

    • Evotech@lemmy.world (5 days ago)

      It’s like 3 LLMs on top of each other in a trenchcoat, plus a calculator so it gets math right.

  • sinceasdf@lemmy.world (6 days ago)

    Lol, Lemmy has the funniest AI haters. They drown out any real criticism with stupid strawman nonsense.

    • Voroxpete@sh.itjust.works (6 days ago)

      This example doesn’t prove what you think it does. It shows pattern detection - something computers are inherently very well suited for - but it doesn’t demonstrate “reasoning” in any meaningful way.

      • FatCrab@lemmy.one (5 days ago)

        I think if you can actually define reasoning, your comments (and those like yours) would be much more convincing. I’m just calling yours out because I’ve seen you up and down in this thread repeating it, but it’s a general pattern among the vocal critics of the technology overall. Neither intelligence nor reasoning (likewise understanding and knowing, for that matter) is easily defined in a way that is more useful than invoking spirits and ghosts. In this case, detecting patterns certainly seems a critical component of what we would consider to be reasoning. I don’t think it’s sufficient, but it is absolutely necessary.

        • Voroxpete@sh.itjust.works (5 days ago)

          While truly defining pretty much any aspect of human intelligence is functionally impossible with our current understanding of the mind, we can create some very usable “good enough” working definitions for these purposes.

          At a basic level, “reasoning” would be the act of drawing logical conclusions from available data. And that’s not what these models do. They mimic reasoning, by mimicking human communication. Humans communicate (and developed a lot of specialized language with which to communicate) the process by which we reason, and so LLMs can basically replicate the appearance of reasoning by replicating the language around it.

          The way you can tell that they’re not actually reasoning is simple; their conclusions often bear no actual connection to the facts. There’s an example I linked elsewhere where the new model is asked to list states with W in their name. It does a bunch of preamble where it spells out very clearly what the requirements and process are; assemble a list of all states, then check each name for the presence of the letter W.

          And then it includes North Dakota, South Dakota, North Carolina and South Carolina in the list.
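
          For contrast, the mechanical version of that procedure never slips. A minimal sketch, using only a handful of state names rather than all fifty, purely for illustration:

          ```python
          # Deliberately mechanical version of "list the states with a W in their name".
          # Only a sample of state names is included, to keep the sketch short.
          states = [
              "Washington", "West Virginia", "Wisconsin", "Wyoming",
              "New York", "Delaware", "Hawaii", "Iowa",
              "North Dakota", "South Dakota", "North Carolina", "South Carolina",
          ]

          with_w = [s for s in states if "w" in s.lower()]
          print(with_w)
          # The Dakotas and Carolinas never show up, because the check is actually applied
          # to every name instead of being narrated and then skipped.
          ```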

          Any human being capable of reasoning would absolutely understand that that was wrong, if they were taking the time to carefully and systematically work through the problem in that way. The AI does not, because all this apparent “thinking” is a smoke show. They’re machines built to give the appearance of intelligence, nothing more.

          When real AGI, or even something approaching it, actually becomes a thing, I will be extremely excited. But this is just snake oil being sold as medicine. You’re not required to buy into their bullshit just to prove you’re not a technophobe.

      • kromem@lemmy.world (6 days ago)

        You should really look at the full CoT traces on the demos.

        I think you think you know more than you actually know.

          • kromem@lemmy.world (5 days ago)

            Actually, they are hiding the full CoT sequence outside of the demos.

            What you are seeing there is a summary, but because the actual process is hidden it’s not possible to see what actually transpired.

            People are very not happy about this aspect of the situation.

            It also means that model context (which in research has been shown to be much more influential than previously thought) is now in part hidden with exclusive access and control by OAI.

            There’s a lot of things to be focused on in that image, and “hur dur the stochastic model can’t count letters in this cherry picked example” is the least among them.

          • kromem@lemmy.world (6 days ago)

            Yep:

            https://openai.com/index/learning-to-reason-with-llms/

            First interactive section. Make sure to click “show chain of thought.”

            The cipher one is particularly interesting, as it’s intentionally difficult for the model.

            The tokenizer is famously bad at two letter counts, which is why previous models can’t count the number of rs in strawberry.
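
            You can see the chunks the model actually works with using tiktoken, OpenAI’s open-source tokenizer library. A quick sketch (assumes tiktoken is installed; the exact split depends on which encoding you pick):

            ```python
            import tiktoken

            text = "strawberry"

            # Character-level view: counting letters is trivial here.
            print(text.count("r"))  # 3

            # Token-level view: the model operates on multi-character chunks, not letters,
            # which is why letter-counting questions cut across its native units.
            enc = tiktoken.get_encoding("cl100k_base")
            tokens = enc.encode(text)
            print([enc.decode([t]) for t in tokens])
            ```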

            So the cipher depends on two letter pairs, and you can see how it screws up the tokenization around the xx at the end of the last word, and gradually corrects course.

            Will help clarify how it’s going about solving something like the example I posted earlier behind the scenes.

  • Buffalox@lemmy.world (6 days ago)

    trained to answer more complex questions, faster than a human can.

    I can answer math questions really really fast. Not correct though, but like REALLY fast!

    • hedgehog@ttrpg.network (6 days ago)

      I’m more concerned about them using the word “sapient.” My dog is sentient; it’s not a high bar to clear.

    • Echo Dot@feddit.uk (6 days ago)

      Is that even the goal? Do we want an AI that’s self-aware? Because I thought the whole point was basically to have an intelligence without a mind.

      We don’t really want sapient AI, because if we do that, then we have to feel bad about putting it in robots and making them do boring jobs. Don’t we basically want guiltless servants? Isn’t that the point?

      • Daemon Silverstein@thelemmy.club (6 days ago)

        It sounds utopian/dystopian, but some things get discovered/invented by accident. The more companies and organizations (and even individuals) fiddle with AI improvement, the more the “odds” of a sentient AI (AGI) being accidentally created increase. Let’s not forget that there are lots of companies, organizations, and individuals (yeah, individuals: people outside organizations but with lots of computing power and knowledge) simultaneously developing and training AIs. Well, maybe I’m wrong and just very optimistic for such a thing to appear out of nowhere.

      • kent_eh@lemmy.ca (6 days ago)

        What we want doesn’t have any impact on what our corporate overlords decide to inflict on us.

        • Echo Dot@feddit.uk (6 days ago)

          They don’t want sapient AI either, why would they?

          No one is trying for a self-aware artificial intelligence.

      • SynopsisTantilize@lemm.ee (6 days ago)

        For the servant bots, yes: no sentience. For my in-house AI assistant robot buddy/butler/nanny/driver: also yes, no sentience.

    • nave@lemmy.ca (OP, 7 days ago)

      At the same time, o1 is not as capable as GPT-4o in a lot of areas. It doesn’t do as well on factual knowledge about the world. It also doesn’t have the ability to browse the web or process files and images. Still, the company believes it represents a brand-new class of capabilities. It was named o1 to indicate “resetting the counter back to 1.”

      I think it’s more of a proof of concept than a fully functioning model at this point.

        • andyburke@fedia.io (7 days ago)

          Facts. A “reasoning AI” has problems with … lemme check this again … facts?

          Find the comment about psychics, it’s exactly the situation we are currently in.

  • Nurse_Robot@lemmy.world (7 days ago)

    I’m getting so tired of the pessimists who are against AI. Granted, I can reflect and see my own similar attitude towards Trump: no matter what, I would never vote for him considering his history and who he is as a person. But treating the next generation of technology that way feels different to me; AI is the future, it’s the next revolution. Sure, there are several real issues to criticize and question (copyright, compensation, and hallucination come to mind), but instead, shit here on Lemmy just gets downvoted to hell with no explanation. I know this comment will get downvoted, but I just wish we could have a discussion about the future without shutting down every practical comment wanting to talk about it.

    • Voroxpete@sh.itjust.works (6 days ago)

      More and more advanced tools for automation are an important part of creating a post-scarcity future. If we can combine that with tearing down our current economic system - which inherently requires and thus has to manufacture scarcity - we can uplift our species in ways we can currently only imagine.

      But this ain’t it bud. If I ask you for water and you hand me a glass of warm piss, I’m not “against drinking water” for refusing to gulp it down.

      This isn’t AI. It isn’t - meaningfully and usefully - any form of automation at all. A bunch of conmen slapped the letters “AI” on the side of their bottle of piss and you’re drinking it down like it’s grandma’s peach tea.

      The people calling out the fundamental flaws with these products aren’t doing so because we hate the entire concept of automation, any more than someone exposing a snake-oil salesman hates medicine. What we hate is being lied to. The current state of this technology is bullshit and hype. It is not fit for human consumption (other than recreationally) and the money being pumped into it could be put to far better uses. OpenAI may have lofty goals, but they have utterly failed at achieving them, and right now any true desire to create AGI has been totally subsumed by the need to keep pumping out slightly better looking versions of the same polished turd in order to convince investors to keep paying for their staggeringly high hosting costs.

    • TommySoda@lemmy.world (7 days ago)

      I’m kinda in the same boat but on the other side. I always try to argue with people about this. It gets me a lot of flak on pro AI posts but that won’t stop me. I usually get very aggressive replies and sometimes some fucked up dm’s too.

      I’m against it because we are already seeing the consequences of this technology and it’s only getting worse. By the time laws catch up it’s gonna be too late and the damage will be done. For some technologies that’s not always the worst. But we already saw how long it took for anyone to do anything about the Internet when it came out, and we are still trying to this day. This shit is growing so fast we will all feel the whiplash. Sites like Facebook are getting absolutely flooded with so much AI that they are becoming almost unusable. And that’s before we even get into the shady shit people use AI for like making porn of people they know with the click of a button. I recently read an article about how bad deepfake porn is in South Korea (found the article. https://www.nytimes.com/2024/09/12/world/asia/south-korea-deepfake-videos.html). And in places like the US, where a lot of these companies are based, they are so slow to do anything about a problem it’s going to be too late by the time they get to it.

      But besides all the awful things happening because of AI, I do have one personal gripe with the whole ordeal. Why are we so quick to replace the things we enjoy with AI? When I get home from work I like to make music and practice pixel art (I’m not very good at either yet). I’d much rather have AI replace my job than my hobbies. I’m down for things that are useful, but too much of this just gives me a bad gut feeling. Like they’re trying to replace people and not their jobs.

      This may be the future. But it sounds like a pretty dystopian future to me. You already can’t believe everything you see on the Internet and this will only make it worse.