• myliltoehurts@lemm.ee · 6 months ago

    So they filled Reddit with bot-generated content, and now they’re likely selling the same stuff back to the company that generated most of it.

    At what point can we call an AI inbred?

            • BakerBagel@midwest.social · 6 months ago

              But there were still bots making shit up back then. r/SubredditSimulator was pretty popular for a while, and repost and astroturfing bots were a problem on Reddit for decades.

              • FaceDeer@fedia.io · 6 months ago

                Existing AIs such as ChatGPT were trained in part on that data, so obviously they’ve got ways to make it work. They filtered out some stuff, for example - the “glitch tokens” such as SolidGoldMagikarp were evidence of that.
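                If it helps to picture it, the filtering the glitch-token story points at amounts to scrubbing certain strings out of the language model’s training corpus even though they made it into the tokenizer’s vocabulary. The string list, the drop-the-whole-document rule, and the function below are illustrative assumptions on my part, not anything OpenAI has published:

                ```python
                # Purely illustrative corpus scrub, not OpenAI's actual pipeline.
                # The flagged strings are examples of reported glitch tokens; the
                # "drop any document containing them" rule is an assumption here.
                FLAGGED_STRINGS = ["SolidGoldMagikarp", "TheNitromeFan", "petertodd"]

                def filter_corpus(documents):
                    """Drop documents containing any flagged string."""
                    return [doc for doc in documents
                            if not any(s in doc for s in FLAGGED_STRINGS)]

                corpus = [
                    "a normal reddit comment about cooking",
                    "counting thread: SolidGoldMagikarp 123456",
                ]
                print(filter_corpus(corpus))  # only the first document survives
                ```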

        • mint_tamas@lemmy.world · 6 months ago

          That paper has yet to be peer reviewed or released. I think you are jumping to conclusions with that statement. How much can you dilute the data until it breaks again?

          • barsoap@lemm.ee · 6 months ago

            That paper has yet to be peer reviewed or released.

            Never doing either (release as in submitting to a journal) isn’t uncommon in maths, physics, and CS. Not to say that it won’t be released, but it’s not a proper standard to measure papers by.

            I think you are jumping to conclusions with that statement. How much can you dilute the data until it breaks again?

            Quoth:

            If each linear model is instead fit to the generated targets of all the preceding linear models, i.e. data accumulate, then the test squared error has a finite upper bound, independent of the number of iterations. This suggests that data accumulation might be a robust solution for mitigating model collapse.

            Emphasis on “finite upper bound, independent of the number of iterations”, achieved by doing nothing more than keeping the non-synthetic data around each time you ingest new synthetic data. This is an empirical study, so of course it’s not proof; you’ll have to wait for the theorists to have their turn for that one. But it’s darn convincing and should henceforth be the null hypothesis.
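            If you want to see the effect for yourself, a toy version of that linear-model setup fits in a few lines of numpy. The dimensions, noise level, fixed design, and plain least-squares fit below are all my own choices for illustration, not the paper’s actual experiments:

            ```python
            # Toy contrast between "replace" (each generation trained only on the
            # previous generation's synthetic targets) and "accumulate" (real data
            # is kept and synthetic targets are appended). All constants are made up.
            import numpy as np

            rng = np.random.default_rng(0)
            n, d, sigma, iters = 200, 20, 0.5, 30

            w_true = rng.normal(size=d)
            X = rng.normal(size=(n, d))
            y_real = X @ w_true + sigma * rng.normal(size=n)   # original, non-synthetic data
            X_test = rng.normal(size=(1000, d))
            y_test = X_test @ w_true                           # noiseless targets for the test error

            def fit(X_, y_):
                """Ordinary least-squares weight vector."""
                return np.linalg.lstsq(X_, y_, rcond=None)[0]

            def test_err(w):
                return np.mean((X_test @ w - y_test) ** 2)

            # Replace: train on the previous generation's outputs only; error keeps growing.
            y_cur = y_real
            for _ in range(iters):
                w = fit(X, y_cur)
                y_cur = X @ w + sigma * rng.normal(size=n)     # fresh synthetic targets
            print(f"replace:    test MSE after {iters} generations = {test_err(w):.3f}")

            # Accumulate: never throw the real data away; error stays bounded.
            X_acc, y_acc = X.copy(), y_real.copy()
            for _ in range(iters):
                w = fit(X_acc, y_acc)
                y_syn = X @ w + sigma * rng.normal(size=n)
                X_acc = np.vstack([X_acc, X])
                y_acc = np.concatenate([y_acc, y_syn])
            print(f"accumulate: test MSE after {iters} generations = {test_err(w):.3f}")
            ```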

            Btw did you know that no one ever proved (or at least hadn’t, last I checked) that reversing, determinising, reversing, and determinising a DFA again minimises it? Not proven, yet widely accepted as true; crazy, isn’t it? But, wait, no, people actually proved it on a napkin. It’s just not interesting enough to do a paper about.
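            That reverse-determinise-reverse-determinise construction really is napkin-sized, for what it’s worth. Here’s a rough sketch; the NFA encoding and the example DFA are my own picks for illustration:

            ```python
            # Sketch of the construction discussed above: reverse, determinise,
            # reverse, determinise. The encoding and example DFA are illustrative.
            class NFA:
                def __init__(self, states, alphabet, delta, starts, finals):
                    self.states, self.alphabet = states, alphabet
                    self.delta = delta            # dict: (state, symbol) -> set of states
                    self.starts, self.finals = starts, finals

            def reverse(a):
                """Flip every transition and swap start/final states."""
                delta = {}
                for (p, s), qs in a.delta.items():
                    for q in qs:
                        delta.setdefault((q, s), set()).add(p)
                return NFA(a.states, a.alphabet, delta, set(a.finals), set(a.starts))

            def determinize(a):
                """Subset construction; reachable subsets become the new states."""
                start = frozenset(a.starts)
                states, todo, delta = {start}, [start], {}
                while todo:
                    S = todo.pop()
                    for sym in a.alphabet:
                        T = frozenset(q for p in S for q in a.delta.get((p, sym), ()))
                        delta[(S, sym)] = {T}
                        if T not in states:
                            states.add(T)
                            todo.append(T)
                finals = {S for S in states if S & a.finals}
                return NFA(states, a.alphabet, delta, {start}, finals)

            def brzozowski(a):
                return determinize(reverse(determinize(reverse(a))))

            # Example: a 4-state DFA for "strings over {a,b} ending in b", with
            # redundant duplicate states; the minimal DFA has 2 states.
            dfa = NFA(
                states={0, 1, 2, 3},
                alphabet={'a', 'b'},
                delta={(0, 'a'): {2}, (0, 'b'): {1},
                       (1, 'a'): {2}, (1, 'b'): {3},
                       (2, 'a'): {2}, (2, 'b'): {1},
                       (3, 'a'): {2}, (3, 'b'): {3}},
                starts={0}, finals={1, 3},
            )
            print(len(brzozowski(dfa).states))   # 2: the language only needs two states
            ```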

    • restingboredface@sh.itjust.works · 6 months ago

      I wonder if OpenAI or any of the other firms have thought to put in any kind of stipulations about monitoring and moderating Reddit content to reduce AI-generated posts and lower the risk of model collapse.

      Anybody who’s looked at Reddit in the past two years especially has seen the impact of AI pretty clearly. If I were running OpenAI I wouldn’t want that crap contaminating my models.