Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • Zexks@lemmy.world · 12 hours ago

    Lemmy is full of AI luddites. You’ll not get a decent answer here. As for the other claims: they are not just next-token generators, any more than you are when speaking.

    https://eight2late.wordpress.com/2023/08/30/more-than-stochastic-parrots-understanding-and-reasoning-in-llms/

    There are literally dozens of these white papers that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure by which to distinguish themselves from any existing LLM. They’ll never be able to give you points of measure that would separate them from parrots or ants while also excluding LLMs, other than “it’s not human or biological,” which is just fearful, weak thinking.

    • gravitas_deficiency@sh.itjust.works · edited 2 hours ago

      Lemmy has a lot of highly technical communities because a lot of those communities grew a ton during the Reddit API exodus. I’m one of those users.

      We tend to be somewhat negative and skeptical of LLMs because many of us have a very solid understanding of NN tech, LLMs, and the theory behind them; we can see right through the marketing bullshit that pervades the domain, and we’re growing increasingly sick of it for various very real and specific reasons.

      We’re not just blowing smoke out of our asses. We have real, specific, and concrete issues with the tech: the jaw-dropping energy inefficiencies it requires, what it’s being billed as, and how it’s being deployed.

    • chobeat@lemmy.ml · 5 hours ago

      You use “luddite” as if it’s an insult. History proved the Luddites were right in their demands, and they were fighting the good fight.

    • vrighter@discuss.tchncs.de · 8 hours ago

      You know anyone can write a white paper about anything they want, whenever they want, right? A white paper is not authoritative in the slightest.

    • jacksilver@lemmy.world · 9 hours ago

      Here’s an easy way we’re different: we can learn new things. LLMs are static models; that’s why OpenAI mentions knowledge cutoff dates for its models.

      Another is that LLMs can’t reliably do math. Deep learning models are limited to their input domain, so when you ask an LLM to do math outside its training data, it’s almost guaranteed to fail.

      Yes, they are very impressive models, but they’re a long way from AGI.
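      The “static model” point can be illustrated with a toy next-word predictor. This is a bigram table, not a transformer, and it’s purely an analogy: once training ends, the table is frozen, and anything outside the training corpus simply gets no answer, much like a model’s knowledge cutoff.

      ```python
      import random
      from collections import defaultdict

      def train_bigram(corpus):
          """Build a next-word lookup table from a fixed training corpus."""
          table = defaultdict(list)
          words = corpus.split()
          for a, b in zip(words, words[1:]):
              table[a].append(b)
          # After this point the "model" is frozen: nothing updates it.
          return dict(table)

      def predict_next(model, word):
          # Words outside the training data draw a blank; the model
          # cannot learn them at inference time.
          choices = model.get(word)
          return random.choice(choices) if choices else None

      model = train_bigram("the cat sat on the mat")
      predict_next(model, "the")   # "cat" or "mat"
      predict_next(model, "dog")   # None: "dog" is past this model's "cutoff"
      ```

      Real LLMs generalize far better than a lookup table, but the frozen-weights property is the same: new facts only enter via retraining or fine-tuning, not at inference time.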

      • DavidDoesLemmy · 6 hours ago

        I know lots of humans who can’t do maths. At least I think they’re human. Maybe they’re LLMs, by your definition.