• Prunebutt@slrpnk.net · 5 months ago

    > Whenever any advance is made in AI, AI critics redefine AI so it’s not achieved yet according to their definition.

    That stems from the fact that “AI” is an ill-defined term with no precise meaning. Before Google Maps became popular, any route-finding algorithm using A* was considered “AI”.
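    To that point: A* is just best-first graph search over path cost plus a heuristic estimate, nothing more mysterious. A minimal sketch (the grid, names, and heuristic below are illustrative, not taken from any particular router):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Best-first search ordered by cost-so-far + heuristic estimate."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_cost = {}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_cost and best_cost[node] <= cost:
            continue  # already reached this node more cheaply
        best_cost[node] = cost
        for nxt, step in neighbors(node):
            heapq.heappush(frontier, (cost + step + heuristic(nxt, goal),
                                      cost + step, nxt, path + [nxt]))
    return None  # goal unreachable

# Toy 5x5 grid, 4-connected, with a Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

path = a_star((0, 0), (4, 4), grid_neighbors, manhattan)
```

    With an admissible heuristic like Manhattan distance on a unit-cost grid, the first time the goal is popped the path is optimal — a plain textbook search, no learning involved.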

    > And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work.

    Bullshit. These people know exactly how LLMs work.

    LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.
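    To make the “form without meaning” point concrete: even a toy bigram Markov chain emits fluent-looking word sequences from surface statistics alone. This is only an analogy for the parroting claim, not a description of transformer internals:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which -- surface statistics only."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def parrot(follows, start, n=8, seed=0):
    """Emit a plausible-looking sequence with no model of meaning at all."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = train_bigrams(corpus)
sample = parrot(model, "the")
```

    Every adjacent word pair in the output occurs in the training text, so it reads as grammatical English, yet the generator has no representation of cats, mats, or anything else.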

      • Prunebutt@slrpnk.net · 5 months ago

        AI is a marketing buzzword. When someone claims that so-called “AGI” is close, they’re either doing marketing or falling for marketing.

        Since you didn’t address the “parroting” part, I’m assuming that you retract your point.

    • lunarul@lemmy.world · 5 months ago

      > LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.

      Even if (and that’s a big if) AGI is ever achieved, there will be people calling it parroting by that definition. That’s the Chinese room argument.
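      Searle’s setup fits in a few lines: a rule follower returning fluent replies from a lookup table, with nothing resembling comprehension involved. The rulebook entries below are invented for illustration:

```python
# Caricature of Searle's Chinese room: fluent output via pure symbol
# matching; "understanding" never enters the process. Rules are invented.
RULEBOOK = {
    "你好": "你好！",                      # "Hello" -> "Hello!"
    "你会说中文吗？": "会，我说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(symbols: str) -> str:
    """Look up the input symbols and return the prescribed output symbols."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

reply = chinese_room("你好")
```

      The room’s replies are indistinguishable from a speaker’s for covered inputs, which is exactly why the thought experiment keeps resurfacing whenever behavioral tests are proposed as evidence of understanding.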

        • lunarul@lemmy.world · 5 months ago

          Me? How can I move goalposts in a single sentence? We’ve had no previous conversation… And I’m not agreeing with the previous poster either…

          • Prunebutt@slrpnk.net · 5 months ago

            By entering the discussion, you also engaged with the previous context. The discussion was about LLMs being parrots.

            • lunarul@lemmy.world · 5 months ago

              And the argument was whether there’s meaning behind what they generate. That argument applies to AGIs too. It’s a deeply debated philosophical question: what is meaning? Is our own thought pattern deterministic, and if it is, how do we know there’s any meaning behind our own actions?

              • Prunebutt@slrpnk.net · 5 months ago

                The burden of proof lies on the people making the claims about intelligence. “AI” pundits have supplied nothing but marketing hype.