• the_third@feddit.de
    1 year ago

    If the model can generate something convincing, it must have seen a large enough quantity of equivalent examples, or at least of their parts. That alone means the model cannot exist ethically.

    • Coffee Junky ❤️@beehaw.org
      1 year ago

      I’m not sure that has to be true. For example, you can ask an AI for a picture of a sailboat on the moon even though it has never seen a sailboat on the moon.

      It could be trained on non-pornographic photos containing kids and pornographic images containing adults.

      • the_third@feddit.de
        1 year ago

        you can ask an AI to give you a picture of a sailboat on the moon

        Yes, correct. I’ll try to explain why that comparison isn’t entirely apt in this case and why my point stands: if you ask the model to draw an image of a sailboat on the moon, it will take its contextual definition of “on the moon”, likely select imagery of lunar landscapes, and then put a sailboat in there. That sailboat will likely be assembled from frontal or side views of sailboats it has seen, and will contain typical elements like a small bow pointing up, a keel line down the middle, and some planks or a fibreglass-like structure filling the area in between, depending on the styles it has seen in the context of “lots of sailboats in this training picture”.

        If the model has never seen the underside of a sailboat, it will likely fall back to “boat” and put a freighter- or container-ship-style bow and keel there, since it has probably seen imagery of those in dry docks. The output wouldn’t look convincing to you as a viewer. To create a convincing sailboat in your example, the model needs a good idea of what a sailboat looks like below the waterline, which means it has seen enough of that. Without elaborating further, I’m sure you can understand how this implies massive ethical problems with building a model for content that contains exploitative and abusive elements.