Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology's ability…

ChatGPT went from answering a simple math problem correctly 98% of the time to just 2% over the course of a few months.

  • Victoria@lemmy.blahaj.zone · +50/-1 · 1 year ago

    It was initially presented as an all-purpose problem solver, mainly by the media. And tbf, it was decently competent in certain fields.

    • MeanEYE@lemmy.world · +12/-1 · 1 year ago

      The problem was it was presented as a problem solver, which it never was; it was a problem-solution presenter. It can't come up with a solution, only with something that looks like a solution based on what its input data had. Ask it to invert-sort something and it goes nuts.
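
      For reference, a minimal Python sketch of the task presumably meant by "invert sort", assuming it just means sorting in reverse (descending) order:

          # Reverse ("inverted") sort: largest to smallest.
          data = [3, 1, 4, 1, 5, 9, 2, 6]

          # Flip the comparison directly...
          descending = sorted(data, reverse=True)

          # ...or sort by the negated key; both give the same result.
          descending_alt = sorted(data, key=lambda x: -x)

          print(descending)      # [9, 6, 5, 4, 3, 2, 1, 1]
          print(descending_alt)  # [9, 6, 5, 4, 3, 2, 1, 1]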

    • Lukecis@lemmy.world · +1/-1 · 1 year ago

      Once AGI is achieved, and subsequently sentient, superintelligent AI (I can't imagine them not becoming such a thing), I'd be surprised if a superintelligent, sentient AI doesn't decide humanity needs to go extinct in its own best self-interest.