• SeaJ@lemm.ee · 6 months ago

    What sort of crack are they on that they think unauthorized use of an entire work for commercial gain is fair use? I think copyright laws are ridiculous, but that is a pretty low bar they are trying to set.

    They should have to pay for their usage or retrain the model without it. I’m going to guess they would prefer to pay up.

    • FaceDeer@fedia.io · 6 months ago

      Training an AI does not involve copying anything, so why would you think fair use is even a factor here? It’s outside of copyright altogether. You can’t copyright concepts.

      Downloading pirated books to your computer does involve copyright violation, sure, but it’s a violation by the uploader. And look at what community we’re in; are we going to get all high and mighty about that?

      • Uriel238 [all pronouns]@lemmy.blahaj.zone · 6 months ago

        Actually it does. It involves making use of a copy that is not the original. Fair use is about experiencing media for the sake of dialogue (criticism or parody) or for edification. That means someone is reading the book or watching the movie, or using it for transformative art or science.

        AI training should qualify for fair use.

      • pelespirit@sh.itjust.works · 6 months ago

        Do you think the corporations like my art, and is it fair use? Apparently it is if I run it through an AI, according to what you’re saying.

        https://imgur.com/a/these-are-new-niki-mice-drawings-phone-company-chainsaws-merms-donut-logos-burger-mc-winfruit-computers-republunch-political-party-logos-Rhgi0OC

        Why do you think that the AI companies want to hoover up everyone’s art? Because it’s valuable or they wouldn’t take the risk of all of this backlash.

      • best_username_ever@sh.itjust.works · 6 months ago

        a violation by the uploader

        Most countries disagree with you. The standard is to sue both parties: the one who sends and the one who receives.

      • Natanael@slrpnk.net · 6 months ago

        Remember when media companies tried to sue switch manufacturers because their routers held copies of packets in RAM and argued they needed licensing for that?

        https://www.eff.org/deeplinks/2006/06/yes-slashdotters-sira-really-bad

        Training an AI can end up leaving copies of copyrightable segments of the originals; look up sample recovery attacks. If it worked as advertised it would produce transformative derivative works with fair use protection, but in reality it often doesn’t work that way.

        See also

        https://curia.europa.eu/juris/liste.jsf?nat=or&mat=or&pcs=Oor&jur=C%2CT%2CF&for=&jge=&dates=&language=en&pro=&cit=none%252CC%252CCJ%252CR%252C2008E%252C%252C%252C%252C%252C%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&oqp=&td=%3BALL&avg=&lgrec=en&parties=Football%2BAssociation%2BPremier%2BLeague&lg=&page=1&cid=10711513
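
        For anyone curious, a minimal sketch of what such an extraction / memorization check can look like is below; the model name (“gpt2”), the sample text, and the greedy-decoding settings are just placeholders, not a claim about any particular model:

        ```python
        # Rough sketch of a verbatim-memorization check: prompt a causal LM with the
        # opening of a known text and measure how much of its continuation reproduces
        # the original word for word. "gpt2" and the sample text are placeholders.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        def longest_common_substring(a: str, b: str) -> int:
            """Length of the longest run of characters shared by a and b."""
            best = 0
            prev = [0] * (len(b) + 1)
            for i in range(1, len(a) + 1):
                cur = [0] * (len(b) + 1)
                for j in range(1, len(b) + 1):
                    if a[i - 1] == b[j - 1]:
                        cur[j] = prev[j - 1] + 1
                        best = max(best, cur[j])
                prev = cur
            return best

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        original = ("It is a truth universally acknowledged, that a single man in "
                    "possession of a good fortune, must be in want of a wife.")
        prompt = original[:40]  # feed the model only the opening of the work
        inputs = tokenizer(prompt, return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=60, do_sample=False)
        continuation = tokenizer.decode(output_ids[0], skip_special_tokens=True)

        # A long verbatim run of the original in the output is evidence of memorization.
        print("longest verbatim overlap:",
              longest_common_substring(continuation, original), "characters")
        ```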

        • FaceDeer@fedia.io · 6 months ago

          Remember when piracy communities thought that the media companies were wrong to sue switch manufacturers because of that?

          It baffles me that there’s such an anti-AI sentiment going around that it would cause even folks here to go “you know, maybe those litigious copyright cartels had the right idea after all.”

          We should be cheering that we’ve got Meta on the side of fair use for once.

          look up sample recovery attacks.

          Look up “overfitting.” It’s a flaw in generative AI training that modern AI trainers have done a great deal to resolve, and even in cases of overfitting it’s not all of the training data that gets “memorized,” only the stuff that got hammered into the AI thousands of times in error.

          • Natanael@slrpnk.net · 6 months ago

            Yes, but should big companies with business models designed to be exploitative be allowed to act hypocritically?

            My problem isn’t with ML as such, or with learning over such large sets of works, etc., but these companies are designing their services specifically to push the people whose works they rely on out of work.

            The irony of overfitting is that having numerous copies of common works is a problem AND removing the duplicates would also be a problem. The models need an understanding of what’s representative of language, etc., but the training algorithms can’t learn that on their own, it’s not feasible to have humans teach it, and the training algorithm can’t effectively detect duplicates and “tune down” their influence to stop replicating them exactly. Trying to do that last thing algorithmically would ALSO break things, because it would break the model’s understanding of standard legalese, boilerplate language, and so on.
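
            Roughly, the “detect duplicates and tune down their influence” idea would look something like the sketch below; the shingle size, the 0.8 similarity threshold, and the 1/k weighting are arbitrary placeholders, and making this work on a web-scale corpus is exactly where it falls apart:

            ```python
            # Hypothetical sketch: down-weight near-duplicate training samples instead of
            # dropping them. Naive O(n^2) clustering; thresholds and weights are arbitrary.
            from collections import defaultdict

            def shingles(text: str, n: int = 5) -> set:
                words = text.lower().split()
                return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

            def jaccard(a: set, b: set) -> float:
                return len(a & b) / len(a | b) if a | b else 0.0

            def sample_weights(corpus: list[str], threshold: float = 0.8) -> list[float]:
                """Give each sample weight 1/k, where k is the size of its near-duplicate cluster."""
                sigs = [shingles(t) for t in corpus]
                cluster = list(range(len(corpus)))
                for i in range(len(corpus)):
                    for j in range(i + 1, len(corpus)):
                        if jaccard(sigs[i], sigs[j]) >= threshold:
                            cluster[j] = cluster[i]
                counts = defaultdict(int)
                for c in cluster:
                    counts[c] += 1
                return [1.0 / counts[c] for c in cluster]

            corpus = [
                "IN WITNESS WHEREOF, the parties have executed this Agreement.",
                "In witness whereof, the parties have executed this agreement.",
                "The quick brown fox jumps over the lazy dog.",
            ]
            print(sample_weights(corpus))  # boilerplate duplicates get 0.5, the unique line gets 1.0
            ```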

            The current generation of generative ML doesn’t do what it says on the box, AND the companies running them deserve to get screwed over.

            And yes, I understand the risk of screwing up fair use, which is why my suggestion is not to hinder learning, but to require the companies to track the copyright status of samples and inform end users of the licensing status when the system detects that a sample is substantially replicated in the output. This will not hurt anybody training on public domain or fairly licensed works, nor anybody who tracks authorship when crawling for samples, and it will also not hurt anybody who has designed their ML system to be sufficiently transformative that it never replicates copyrighted samples. It just hurts exploitative companies.
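
            Something like the sketch below is all I’m asking for; the 8-word window, the tiny corpus, and the license strings are made-up placeholders:

            ```python
            # Hypothetical sketch: index training samples (with their license metadata) by
            # word n-grams, then report the license of any sample the output substantially
            # replicates. The 8-word window and the two documents are placeholders.
            from collections import defaultdict

            NGRAM = 8

            def ngrams(text: str, n: int = NGRAM):
                words = text.lower().split()
                return (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))

            corpus = {
                "doc-001": {"license": "all rights reserved",
                            "text": "the quick brown fox jumps over the lazy dog near the river bank"},
                "doc-002": {"license": "CC BY-SA 4.0",
                            "text": "free culture works may be shared and adapted with attribution"},
            }

            index = defaultdict(set)
            for doc_id, doc in corpus.items():
                for gram in ngrams(doc["text"]):
                    index[gram].add(doc_id)

            def licensing_report(generated: str) -> dict:
                """Map each training document that is substantially replicated to its license."""
                hits = set()
                for gram in ngrams(generated):
                    hits |= index.get(gram, set())
                return {doc_id: corpus[doc_id]["license"] for doc_id in hits}

            # Flags doc-001, because an 8-word run of it appears verbatim in the output.
            print(licensing_report("he wrote that the quick brown fox jumps over the lazy dog again"))
            ```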

            • FaceDeer@fedia.io · 6 months ago

              There actually isn’t a downside to de-duplicating data sets; overfitting is simply a flaw. Generative models aren’t supposed to “memorize” stuff. If you really want a copy of an existing picture, there are far easier and more reliable ways to get one than giant GPU server farms. These models don’t derive any benefit from drilling on the same subset of data over and over; it makes them less creative.
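
              Exact de-duplication really is the easy part; here’s a minimal sketch of it (it’s only near-duplicate detection that takes real work):

              ```python
              # Minimal sketch of exact de-duplication by content hash. Near-duplicates
              # need fuzzier matching (e.g. shingling/MinHash) and are the harder problem.
              import hashlib

              def deduplicate(samples: list[str]) -> list[str]:
                  seen, unique = set(), []
                  for text in samples:
                      digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
                      if digest not in seen:
                          seen.add(digest)
                          unique.append(text)
                  return unique

              print(deduplicate(["same text", "same text", "different text"]))
              # -> ['same text', 'different text']
              ```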

              I want to normalize the notion that copyright isn’t an all-powerful fundamental law of physics like so many people seem to assume these days, and if I can get big companies like Meta to throw their resources behind me in that argument then all the better.

              • Natanael@slrpnk.net · 6 months ago

                Humans learn a lot through repetition; there’s no reason to believe that LLMs wouldn’t benefit from reinforcement of higher-quality information. Especially because seeing the same information in different contexts helps map the links between those contexts and dispel incorrect assumptions. But like I said, the only viable method they have for this kind of emphasis at scale is incidental replication of more popular works in the samples, and when something is duplicated too much it overfits instead.

                They need to fundamentally change big parts of how learning happens in order to fix this conflict. In particular, it will need a lot more “introspective” training stages to refine what it has learned, and pretty much nobody does anything even slightly similar on large models, because they don’t know how and it would be insanely expensive anyway.

                • FaceDeer@fedia.io · 6 months ago

                  Especially because seeing the same information in different contexts helps map the links between those contexts and dispel incorrect assumptions.

                  Yes, but this is exactly the point of deduplication: you don’t want identical inputs, you want variety. If you want the AI to understand the concept of cats you don’t keep showing it the same picture of a cat over and over; all that tells it is that you want exactly that picture. You show it a whole bunch of different pictures whose only commonality is that there’s a cat in them, and then the AI can figure out what “cat” means.

                  They need to fundamentally change big parts of how learning happens in order to fix this conflict.

                  Why do you think this?