ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.
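One way to make the "verbatim text" claim concrete is to compare sampled model output against a known source document and look for long exact matches, since a long shared substring is strong evidence of memorization rather than chance overlap. Below is a minimal Python sketch of that check; the `model_output` and `source_page` strings are hypothetical placeholders, not real data.

```python
# Sketch: flag possible verbatim memorization by finding the longest
# exact substring shared between model output and a candidate source.

def longest_common_substring(a: str, b: str) -> str:
    """Longest substring shared by a and b, via O(len(a)*len(b)) DP."""
    best, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, best_end = cur[j], i
        prev = cur
    return a[best_end - best:best_end]

model_output = "..."  # text sampled from the model (placeholder)
source_page = "..."   # e.g. a scraped news article or wiki page (placeholder)

match = longest_common_substring(model_output, source_page)
# A match of, say, 50+ characters is unlikely to be chance overlap.
if len(match) > 50:
    print(f"Possible verbatim memorization ({len(match)} chars): {match!r}")
```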

  • JohnEdwa@sopuli.xyz · 7 months ago

    Because large language models don’t actually understand what is true or what is real; they just know how humans usually string words together, so they can conjure plausible, readable text. If their training data contains falsehoods, they will learn to write them.
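    A deliberately tiny sketch of that idea: a bigram model that picks each next word purely from how often it followed the previous word in its training text. Real LLMs are vastly more sophisticated, but the failure mode is the same; the toy corpus below is made up for illustration.

    ```python
    # Toy "stringing words together" model: sample the next word based only
    # on how often it followed the previous word in the training text.
    # There is no notion of truth anywhere, so a falsehood in the training
    # data is reproduced as readily as a fact.
    import random
    from collections import defaultdict

    training_text = (
        "the moon is made of cheese . "  # a falsehood, seen twice
        "the moon is made of cheese . "
        "the moon orbits the earth . "   # a fact, seen once
    )

    follows = defaultdict(list)
    words = training_text.split()
    for prev_word, next_word in zip(words, words[1:]):
        follows[prev_word].append(next_word)

    def generate(start: str, length: int = 8) -> str:
        out = [start]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            out.append(random.choice(candidates))  # proportional to counts
        return " ".join(out)

    print(generate("the"))  # usually "the moon is made of cheese ." -- plausible, not true
    ```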

    To get something that would benefit from knowing both sides, we’d first need to create a proper AGI, an artificial general intelligence with the ability to actually think.

    • Immersive_Matthew@sh.itjust.works · 7 months ago

      I sort of agree. They do have some sense of right and wrong already; it is just very spotty and inconsistent in the current models. As you said, we need AGI-level AI to really address the shortcomings, which sounds like it is just a matter of time. Maybe sooner than we are all expecting.