A bipartisan group of senators introduced a new bill to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to create standards and guidelines that help prove the origin of content and detect synthetic content, like through watermarking. It also directs the agency to create security measures to prevent tampering and requires AI tools for creative or journalistic content to let users attach information about their origin and prohibit that information from being removed. Under the bill, such content also could not be used to train AI models.
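
To make that concrete, here is a minimal sketch of what a tamper-evident provenance label of the kind the bill describes could look like. It assumes a simple HMAC scheme with a shared secret; the field names and key handling are hypothetical, and real provenance standards such as C2PA instead use certificate-based signatures embedded in the file itself.

```python
# Minimal sketch of a tamper-evident provenance manifest.
# Hypothetical field names; the HMAC shared-secret scheme is a stand-in
# for the certificate-based signatures real standards use.
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-signing-key"  # placeholder, not a real key

def make_manifest(content: bytes, owner: str, allow_training: bool) -> dict:
    """Bind origin info to a hash of the content, then sign it."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "owner": owner,
        "allow_ai_training": allow_training,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Reject a manifest that was altered or belongs to different bytes."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Anyone who edits the owner field, flips the training flag, or swaps out the content invalidates the signature; that is the sort of tampering the bill would prohibit.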

Content owners, including broadcasters, artists, and newspapers, could sue companies they believe used their materials without permission or tampered with authentication markers. State attorneys general and the Federal Trade Commission could also enforce the bill, which its backers say prohibits anyone from “removing, disabling, or tampering with content provenance information” outside of an exception for some security research purposes.

(A copy of the bill is in the article; here is the important part, imo:

Prohibits the use of “covered content” (digital representations of copyrighted works) with content provenance to either train an AI-/algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation)

  • General_Effort@lemmy.world

    This is a brutally dystopian law. Forget the AI angle and turn on your brain.

    Any information will get a label saying who owns it and what can be done with it. Tampering with these labels becomes a crime. This is the infrastructure for the complete control of the flow of all information.

    • msgraves@lemmy.dbzer0.com

      Exactly. This isn’t about any sort of AI; it’s the old playbook of trying to digitally track images, just with the current label slapped on. Regardless of your opinion on AI, this is a terrible way to solve the problem.

    • Throw_away_migrator@lemmy.world

      Maybe I’m missing something, but my read is that it creates a mechanism/standard for labeling content. If content is labeled under this standard, it is illegal to remove the labeling or use it in a way the labeling prohibits. But I don’t see a requirement to label content with this mechanism.

      If that’s the case, I don’t see a problem. Now, if all content were required to be labeled, then yes, it’s a privacy nightmare. But my interpretation is that this is a mechanism to prevent AI companies from gobbling up content without consent and saying, “What? There’s nothing saying I couldn’t use it.”
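
      Under that reading, an AI company’s ingestion pipeline would need a filter roughly like the sketch below: unlabeled content passes through untouched, while labeled content is admitted only if its label permits training. The field names here are hypothetical, not taken from the bill or any real standard.

      ```python
      # Sketch of the opt-in reading: honor provenance labels when present,
      # leave unlabeled content alone. "allow_ai_training" is hypothetical.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class ContentItem:
          data: bytes
          provenance: Optional[dict] = None  # None = never labeled

      def usable_for_training(item: ContentItem) -> bool:
          """Unlabeled items pass; labeled items need an explicit opt-in."""
          if item.provenance is None:
              return True  # no label attached, so no terms to violate
          return item.provenance.get("allow_ai_training", False)

      corpus = [
          ContentItem(b"unlabeled blog post"),
          ContentItem(b"opted-out article", {"owner": "Paper", "allow_ai_training": False}),
          ContentItem(b"licensed photo", {"owner": "Artist", "allow_ai_training": True}),
      ]
      training_set = [item for item in corpus if usable_for_training(item)]
      print(len(training_set))  # 2: the unlabeled item and the opted-in one
      ```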

      • ObliviousEnlightenment@lemmy.world

        Most everyone, from corporations to Tumblr artists, will be opting into that. While it doesn’t guarantee an information dystopia, it does enable one.

        I download images from the internet and remove watermarks to edit them into YouTube videos as a visual aid. I add a credit to the description because I’m not a cunt; I just do it to make the video look better. I don’t monetize the content. Utterly and totally harmless, and it would be illegal with such a label.

      • General_Effort@lemmy.world

        It’s rather more than that. At the very least, it is a DRM system meant to curtail fair use. We’re not just talking about AI training. The AutoTLDR bot here would also be affected. Manually copy/pasting articles while removing the metadata becomes illegal. Platforms have a legal duty to stop copyright infringement; in practice, they will probably have to use the metadata label to stop reposts and re-uploads of images and articles.
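
        To see how mundane the newly illegal act would be: stripping an image’s metadata just means re-saving the pixel data, as in the sketch below (assuming the Pillow library; file names are placeholders).

        ```python
        # Sketch: re-saving only the pixels silently drops any EXIF/XMP
        # provenance label. Assumes Pillow; file names are placeholders.
        from PIL import Image

        with Image.open("labeled_photo.jpg") as src:
            rgb = src.convert("RGB")                 # pixel data only
            stripped = Image.new(rgb.mode, rgb.size)
            stripped.putdata(list(rgb.getdata()))
            stripped.save("stripped_photo.jpg")      # no metadata carried over
        ```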

        This bill was obviously written by lobbyists for major corpos like Adobe. It would make the C2PA standard legally binding. They have been working on this for the last couple of years, and OpenAI already uses it.

        At the very least, this bill will entrench the monopolies of the corporations behind it, at the expense of the rights of ordinary people.


        I don’t think it’ll stop there. Look at age-verification laws in various red states and around the world. Once you have this system in place, the obvious next step is to demand mandatory content warnings in the metadata. We’re not just talking about erotic images but also about articles on LGBTQ matters.

        More control over the flow of information is the direction we are heading anyway. From age verification to copyright enforcement, it’s all about making sure that only the right people can access certain information. Copyright used to be about which businesses could print a book; now it’s about what you can do at home with your own computer. We’re moving in this dystopian direction anyway, and this bill is a big step.


        The bill talks about “provenance”. The ambition is literally a system to track where information comes from and how it is processed. If this were merely DRM, that would be bad enough, but this is an intentionally dystopian overreach.

        E.g., you have cameras that automatically add the tracking data to all photos, and then Photoshop adds data about all post-processing. Obviously, this can’t be secure. (NB: this is real, not hypothetical.)
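
        Sketched in code, that camera-to-Photoshop pipeline is an append-only chain in which each tool signs an assertion covering the content hash and every prior entry, roughly as below. The HMAC keys and field names are hypothetical stand-ins; C2PA itself uses certificate-based signatures.

        ```python
        # Sketch of a chained provenance record: each processing step signs
        # the content hash plus the whole history before it. Keys and field
        # names are hypothetical stand-ins.
        import hashlib
        import hmac
        import json

        def append_assertion(chain: list, actor: str, action: str,
                             content: bytes, key: bytes) -> list:
            """Add one step; its signature covers all earlier entries too."""
            entry = {
                "actor": actor,          # e.g. camera serial, editor name
                "action": action,        # e.g. "capture", "crop"
                "content_sha256": hashlib.sha256(content).hexdigest(),
            }
            history = json.dumps(chain + [entry], sort_keys=True).encode()
            entry["signature"] = hmac.new(key, history, hashlib.sha256).hexdigest()
            return chain + [entry]

        photo_v1 = b"raw sensor data"
        photo_v2 = b"cropped and color-graded"

        chain = append_assertion([], "camera-1234", "capture", photo_v1, b"camera-key")
        chain = append_assertion(chain, "photo-editor", "crop", photo_v2, b"editor-key")
        print(json.dumps(chain, indent=2))  # capture entry, then edit entry
        ```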

        The thing is, a door lock isn’t secure either. It takes seconds to break down a door, or to break a window instead. The secret ingredient is surveillance and punishment: someone hears or sees something and calls the police. To make this ambition work, you need something at the hardware level in every device that can process and store data, plus a lot of surveillance to crack down on people who deal in illegal hardware.

        I’m afraid this is not as crazy as it sounds. You may have heard about the recent “Chat Control” debate in the EU: a proposal, with a lot of support, that would let police scan the files on a phone to look for “child porn” (mind that this includes sexy selfies that 17-year-olds exchange with their friends). Mandatory watermarking that lets the government trace a photo to the camera and its owner is mild by comparison.


        The bill wants government agencies like DARPA to help develop better tracking systems. Nice for the corpos that they get some of that tax money, but it also creates a dynamic in government that makes it much more likely we continue down this dystopian path: for agencies, funding will be on the line, and there are egos involved. Meanwhile, the content industry keeps lobbying for more control over its intellectual “property”.

    • ArchRecord@lemm.ee

      It’s like applying DRM law to all media ever. And we already know the problems with DRM, as laid out two decades ago by Cory Doctorow in his talk at Microsoft, where he tried to convince them not to endorse and use it.