Never talk morals with libs because these are the things they spend their time questioning. Whether or not it’s okay to use AI to detect CP. Real thinker there

https://lemm.ee/post/12171882

  • WhatDoYouMeanPodcast [comrade/them]@hexbear.net · 11 months ago

    The only moral quandary that comes to mind is that if you give an AI a bunch of CSAM and it leaks, then 1) someone has it, and 2) AI gets better at generating it. Who's training the AI? Who verifies it? What are the security protocols?

    • drhead [he/him]@hexbear.net · 11 months ago

      This would be a classifier model, incapable of making images. Most classifier models output only a dictionary mapping each class they were trained on to a single floating-point value representing the probability that the image belongs to that class. It would probably be trained by an organization like NCMEC, possibly working with a very well-trusted AI firm. For verification, you usually reserve a modest representative sample of the database and don't train on it, then use that sample to measure how accurate the model is and to decide what score threshold is appropriate for flagging.
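
To make the workflow in that reply concrete, here is a minimal Python sketch of the two pieces it describes: a classifier that outputs a score per class rather than images, and a reserved held-out sample used to measure accuracy and choose a flagging threshold. The synthetic features, the scikit-learn logistic regression, and the 0.99 precision target are illustrative stand-ins, not details from the thread or from any real NCMEC system.

```python
# Sketch: score-outputting classifier + held-out sample for threshold selection.
# All data here is synthetic; a real system would use embeddings from an image model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)

# Stand-in "image features": 2000 samples, 128-dim vectors, binary label
# (1 = flagged class, 0 = benign).
X = rng.normal(size=(2000, 128))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

# Reserve a representative held-out sample that is never trained on; it is used
# only to evaluate the model and to pick the score threshold for flagging.
X_train, X_heldout, y_train, y_heldout = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The classifier outputs a probability per class for each image; take the
# score for the flagged class.
heldout_scores = clf.predict_proba(X_heldout)[:, 1]

# Choose the lowest threshold that still meets a target precision on the
# held-out sample (0.99 is an arbitrary illustrative target).
precision, recall, thresholds = precision_recall_curve(y_heldout, heldout_scores)
target_precision = 0.99
ok_idx = np.flatnonzero(precision[:-1] >= target_precision)
if ok_idx.size:
    idx = ok_idx[0]  # thresholds are sorted ascending, so this is the lowest qualifying one
    print(f"flagging threshold: {thresholds[idx]:.3f}, recall at that threshold: {recall[idx]:.3f}")
else:
    print("no threshold meets the target precision on the held-out sample")
```

The design choice the sketch highlights is that the threshold is picked on data the model never saw, which is what lets the operator trade off precision against recall honestly instead of tuning on the training set.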