• fckreddit@lemmy.ml · 1 year ago

    Didn’t OpenAI stop working on an AI detector because it wasn’t reliable?

    • kernelle@0d.gs · 1 year ago

      Code made to mimic human vocabulary mimics human vocabulary, and everyone is shocked? When is this going to stop being news lmao

  • Queen HawlSera@lemm.ee · 1 year ago

    This reminds me of a science fiction setting in which the human race basically halted all media production, because an AI was created that scanned pieces of literature to see if any passages, themes, characters, or plot points were based on anything from a pre-existing work owned by a major company. Which basically everything is, because absolutely nothing is ever created in a vacuum.

    So no one could write anymore without facing a major lawsuit and having an AI basically tell the judge that they’re guilty as fuck

  • jray4559@lemmy.sdf.org · 1 year ago

    How the fuck can people use these BS detectors when it has been proven probably a thousand times that they can’t differentiate anything?

    The Constitution, the Bible, probably Mein Kampf and Uncle Tom’s Cabin too, would all be “made by AI” if we treated these programs as gospel.

    And these text AIs are never going away. The instant one can actually hold a character for an entire book, romance novels are in real trouble at a bare minimum, and soon almost everyone will be competing with a thousand fake authors just to make it somewhere. It’ll be interesting to see…