It’s all made from our data, anyway, so it should be ours to use as we want

  • m-p{3}@lemmy.ca · 18 hours ago

    It could also contain non-public domain data, and you can’t declare someone else’s intellectual property as public domain just like that, otherwise a malicious actor could just train a model with a bunch of misappropriated data, get caught (intentionally or not) and then force all that data into public domain.

    Laws are never simple.

    • drkt@scribe.disroot.org · 18 hours ago

      Forcing a bunch of neural weights into the public domain doesn’t make the data they were trained on also public domain, in fact it doesn’t even reveal what they were trained on.

      • deegeese@sopuli.xyz · 18 hours ago

        LOL no. The weights encode the training data and it’s trivially easy to make AI generators spit out bits of their training data.

        • stephen01king@lemmy.zip · 17 hours ago

          How easy are we talking, though? Also, making the model public domain doesn’t mean making its output public domain. The output of an LLM should still abide by copyright law, as it should.

    • grue@lemmy.world · 18 hours ago

      So what you’re saying is that there’s no way to make it legal and it simply needs to be deleted entirely.

      I agree.

      • FaceDeer@fedia.io · 16 hours ago

        There’s no need to “make it legal”, things are legal by default until a law is passed to make them illegal. Or a court precedent is set that establishes that an existing law applies to the new thing under discussion.

        Training an AI doesn’t involve copying the training data, the AI model doesn’t literally “contain” the stuff it’s trained on. So it’s not likely that existing copyright law makes it illegal to do without permission.

        • grue@lemmy.world · 4 hours ago

          There’s no need to “make it legal”, things are legal by default until a law is passed to make them illegal.

          Yes, and that’s already happened: it’s called “copyright law.” You can’t mix things with incompatible licenses into a derivative work and pretend it’s okay.

        • xigoi@lemmy.sdf.org · 12 hours ago

          By this logic, you can copy a copyrighted image as long as you decrease the resolution, because the new image does not contain all the information in the original one.

          • yetAnotherUser@discuss.tchncs.de · 9 hours ago

            Am I allowed to take a copyrighted image, decrease its size to 1x1 pixels and publish it? What about 2x2?

            It’s very much not clear when a modification violates copyright because copyright is extremely vague to begin with.
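For scale, the amount of raw pixel information that survives such a downscale is easy to count. A quick sketch, assuming an uncompressed 24-bit RGB image (the sizes are just illustrative):

```python
# Bits of raw pixel data in an uncompressed 24-bit RGB image (w x h pixels).
def raw_bits(w: int, h: int) -> int:
    return w * h * 24  # 8 bits per channel, 3 channels

print(raw_bits(1, 1))        # 24 bits left in a 1x1 thumbnail
print(raw_bits(2, 2))        # 96 bits
print(raw_bits(1920, 1080))  # 49,766,400 bits in a full-HD original
```

Wherever the legal line falls, the 1x1 case keeps under a millionth of the original’s pixel data.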

            • grue@lemmy.world · 4 hours ago

              Just because something is defined legally instead of technologically, that doesn’t make it vague. The modification violates copyright when the result is a derivative work; no more, no less.

          • Voyajer@lemmy.world · 11 hours ago

            More like reduce it to a handful of vectors that get merged with other vectors.

          • FaceDeer@fedia.io · 9 hours ago

            In the case of Stable Diffusion, they used 5 billion images to train a model 1.83 gigabytes in size. So if you reduce a copyrighted image to 3 bits (not bytes - bits), then yeah, I think you’re probably pretty safe.
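That back-of-the-envelope figure checks out; a quick sketch using the numbers in the comment above (a 1.83 GB model, 5 billion training images):

```python
# Average model capacity per training image for Stable Diffusion,
# using the figures cited above: a 1.83 GB model, 5 billion images.
model_bytes = 1.83 * 1024**3   # read as 1.83 GiB, in bytes
num_images = 5_000_000_000

bits_per_image = model_bytes * 8 / num_images
print(f"{bits_per_image:.2f} bits per image")  # about 3.14 bits
```

Even reading “GB” as decimal gigabytes only drops it to about 2.9 bits per image, so the roughly-3-bits figure holds either way.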

    • merc@sh.itjust.works · 16 hours ago

      It wouldn’t contain any non-public-domain data, though. That’s the thing with LLMs: once they’re trained, the training data is gone, folded into the model’s weights somewhere. If it ingested something private like your tax data, it couldn’t re-create your tax return on command; that data is gone. But if it has seen enough private tax data, it could produce something that looks a lot like a tax return to an untrained eye. A tax accountant, though, would easily spot the flaws.