I, for one…

  • rottingleaf@lemmy.zip · 3 months ago

    I don’t see a fundamental reason

    Spend a few evenings learning ML, then read about the internals of some of the bigger models, even ChatGPT. It’s not too hard.

    That’ll give you some reasons.

      • rottingleaf@lemmy.zip · 3 months ago (edited)

        OK.

        It’s just a family of extrapolators, enhanced with various tricks, similar to those used in variable-length codes.

        It can’t reason or build logical structures; it has no abstract thinking, or any thinking at all.

        It may be used as one of the building blocks for real AGI like 100 years from now, but the existing thing is not that, and there’s no way it’ll become that via small incremental changes.

        • LesserAbe@lemmy.world · 3 months ago

          Thanks! I don’t know what you mean by your first paragraph.

          You’re right that it’s nowhere near an AGI. But it doesn’t need to be an AGI to be put in a robot body and perform some useful tasks.

          Right now I can ask ChatGPT to take a block of Ruby code and output the equivalent in Python, and it will do it, and for the most part it’s correct.
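
          To make that concrete, here’s a hypothetical example of the kind of translation I mean (the Ruby method and its Python counterpart are mine, not actual model output):

          ```python
          # Hypothetical Ruby input:
          #   def squares(xs)
          #     xs.map { |x| x * x }
          #   end
          #
          # A typical Python translation a model might produce:

          def squares(xs):
              return [x * x for x in xs]

          print(squares([1, 2, 3]))  # → [1, 4, 9]
          ```

          It’s a mechanical-looking transformation, but getting the idioms right (`map` with a block → list comprehension) is exactly the part that used to require a human.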

          I could envision telling an AI robot to sort this pile of parts by type, or pick up all the sticks in the yard, etc. I think we could make something like that now without any significant technological breakthroughs. It might get stuff wrong sometimes, but I envision it as having an intern, not creating a new god. Of course these companies may promise much more in their marketing.