We’ve been doing it for genetic modification for ages. If it’s possible to stop people from making human-chihuahua baby hybrids en masse, why is it impossible to stop people from culturally devaluing art en masse?

I don’t think it’s reactionary to have a cultural concern like this, either. Especially when the concern boils down to hyper-commodification. I’m not concerned about some abstract “rot” of society, but rather the commodification of art itself.

  • buckykat [none/use name]@hexbear.net · 11 months ago

    A) there’s not really much of a profit motive to make human-chihuahua baby hybrids

    B) you can’t just make human-chihuahua baby hybrids on a big pile of commodity gaming hardware

    • WithoutFurtherBelay [none/use name]@hexbear.netOP · 11 months ago
      Well, companies would FIND a profit motive if they were allowed to. Remove the ability to use AI willy-nilly and you’d remove the profit motive. Also, you can grow weed with a few soil pots, and that didn’t stop it from being made illegal, or stop the law from keeping it out of mass corporate production.

          • dat_math [they/them]@hexbear.net · 11 months ago
            make it so

            Right, so I’m asking how to do that, mechanically speaking. We can’t build useful general-purpose computers that fundamentally can’t run neural networks and other ML models, so how would enforcement operate? We don’t have an oracle that can tell us how much human effort went into modifying an AI-derived work, let alone merely classify whether a work was produced by generative ML with high accuracy. So I think trying to repair modern notions of IP law to account for this isn’t so much a dead end as an arms race (and kind of an interesting fractal when you think about how most generative ML models are trained by adjusting their parameters to maximize the likelihood that they fool a so-called discriminator model).
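            The adversarial training loop described above can be sketched as a toy example. This is a deliberately minimal one-parameter “GAN” with hand-derived logistic-regression gradients; the data distribution, learning rate, and step count are all made up for illustration, and no real library’s API is involved:

```python
import math
import random

# Toy one-parameter "GAN" (illustrative only): real data clusters around 4.0,
# the generator is a single parameter g (its output), and the discriminator
# is a logistic unit D(x) = sigmoid(w*x + b). Gradients are derived by hand.
random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

g = 0.0          # generator parameter (the "fake" sample it emits)
w, b = 1.0, 0.0  # discriminator parameters
lr = 0.05

for step in range(2000):
    real = 4.0 + random.gauss(0, 0.1)
    fake = g

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # pushing D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. adjust g to maximize the
    # likelihood that the discriminator is fooled by the fake sample.
    d_fake = sigmoid(w * g + b)
    g += lr * (1 - d_fake) * w

# g has drifted from 0 toward the real-data region around 4.0.
```

            At equilibrium neither side can improve: the generator’s output sits in the real-data region, so the discriminator can no longer separate real from fake, which is the arms-race dynamic in miniature.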

            • WithoutFurtherBelay [none/use name]@hexbear.netOP · 11 months ago
              Heavily, heavily fine and possibly jail people who break that law? Y’know, the stuff we do when we find someone with 2 ounces of weed on them but applying it to companies instead of random innocent black people?

              Most of the law is subjective anyway, so just compare the unmodified version with the modified one; if the modified one is barely recognizable, call it a day.

            • drhead [he/him]@hexbear.net · 11 months ago

              and kind of an interesting fractal when you think about how most generative ML models are trained by adjusting their parameters to maximize the likelihood that they fool a so-called discriminator model

              This isn’t as common anymore. Most modern image models are diffusion models, which do not rely on a discriminator but instead transform noise into an image through an iterative refinement process. GANs are annoying to train and don’t work quite as well for image synthesis, but they are still somewhat used as components (like an encoder that transforms an image into a latent image so it is easier to process, decoding it back at the end, e.g. Stable Diffusion’s VAE) or as extra models for other processing (like ESRGAN and its derivatives, which are fairly old at this point and often used for image upscaling or sometimes for removing compression noise). The main force pushing AI model output to be less detectable is that AI models are built to represent the distribution of the dataset they are trained on; over time, better-designed models and training regimes will fit that distribution better, which by definition means outputs become more indistinguishable from the dataset.
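              The iterative-refinement idea can be sketched in a few lines. This is a deliberately toy version: a scalar stands in for an image, and an oracle replaces the trained noise-prediction network, just to show the shape of the reverse loop (no real noise schedule or diffusion library is used):

```python
import random

# Toy sketch of iterative refinement: start from pure noise and repeatedly
# subtract a small fraction of a noise estimate. In a real diffusion model
# the noise estimate comes from a trained network and follows a proper
# noise schedule; here an oracle "knows" the clean sample is 3.0.
random.seed(0)

TARGET = 3.0  # stand-in for a clean data sample
STEPS = 50

x = random.gauss(0, 1)  # begin with pure noise
for t in range(STEPS, 0, -1):
    predicted_noise = x - TARGET          # oracle noise estimate
    x -= (1.0 / STEPS) * predicted_noise  # one small refinement step

# x has been gradually refined from noise toward the clean sample.
```

              The point of the many small steps is that each one only has to undo a little of the noise, which is part of why these models are easier to train than a GAN’s single adversarial objective.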

              As far as I have seen, the AI-classifier arms race is already very far behind on the classifier side. I have seen far more cases of things like ZeroGPT returning false positives than true positives that don’t include “As a large language model…”. I have seen plenty of instances where people fed a photo of the current conflict in Israel to an AI-classifier site and confidently declared a 97% chance of it being AI, when the photo doesn’t visually show any signs of being fake and is more likely just a real photo that doesn’t actually show what is claimed. (That shows people need to learn more about propaganda in general: the base unit of propaganda is not lies, it is emphasis, so in most cases you need to be more wary of context than of whether the information is factual.) The fact that people blindly trust AI classifiers is arguably somewhat more damaging right now than generative AI models themselves.

              • dat_math [they/them]@hexbear.net · 11 months ago

                oh huh I guess it has been ages (in research time) since GANs were the hot new algorithm.

                The fact that people blindly trust AI classifiers is arguably somewhat more damaging right now than generative AI models.

                Absolutely agree! I’m dreading the day I have to tell a doctor that I want a proper examination they’re calling unnecessary because an ML model decided I’m healthy despite my symptoms.