I know that scanning images for CSAM is kind of dystopian and scary. However, that doesn’t mean we need to leave ourselves open to receiving abusive material.

What I think we need are publicly available ML models that can be run voluntarily on each device to block CSAM from being shown or stored.

Publicly available models would help, but implementing them could be a slippery slope. If popular encrypted messaging apps ship this feature built in, it could become illegal to turn it off or to use versions of the app with the scanner removed. We would then be effectively stuck with a bad egg in our code.

Maybe the best answer is to not give individuals with a questionable history the ability to message you.

Does anyone else have a thought?

  • ono@lemmy.ca · +25/−1 · 1 year ago

    The point of CSAM scanners is not to protect children, but to circumvent due process by expanding warrantless surveillance. That is antithetical to FOSS.

    So, in a word, no.

    • Possibly linux@lemmy.zip (OP) · +1/−11 · 1 year ago

      So you like child porn? I want a way to block bad content from being received and displayed.

      • uk_@lemmy.fmhy.net · +2 · 1 year ago

        You have to rely on a third party that provides you the hashes used to identify images, and that is a business model. Or you could create a DB of hashes yourself, which means obtaining the material yourself; I think that would land you in all kinds of legal trouble. Or you create an algorithm to do that work and burn through a hell of a lot of GPU hours (welcome back to a business model).
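The hash-database approach described above boils down to a set-membership check. Here is a minimal sketch; the database contents are hypothetical (the one entry is just the SHA-256 of an empty file, used as a stand-in), and real services use perceptual rather than cryptographic hashes, so a naive version like this is defeated by any re-encoding of the image.

```python
import hashlib

# Hypothetical local database of known-bad hashes. The single entry is the
# SHA-256 of empty bytes, used purely as a placeholder.
KNOWN_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_flagged(data: bytes) -> bool:
    """Return True if the file's SHA-256 appears in the local database."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(is_flagged(b""))       # True: matches the placeholder entry
print(is_flagged(b"hello"))  # False: not in the database
```

Note that any single-byte change to a file produces a completely different SHA-256, which is exactly why real scanners need fuzzier, perceptual matching.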

  • palitu · +14 · 1 year ago

    There is a tool that someone built specifically to scan images uploaded to Lemmy for CSAM.

    It is really quite clever. The image is put through an ML/AI model, which describes it (image-to-text); the description is then checked against a set of rules to see whether it has the hallmarks of CSAM. If it does, the image is deleted.

    This is fully self hosted.

    What I like is that it avoids the trauma of a person having to see those sorts of things.
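The caption-then-rules pipeline described above can be sketched as follows. This is a hypothetical illustration only: the captioning step is stubbed out (a real deployment would run a local image-to-text model there), and the keyword lists are invented placeholders, far cruder than whatever rules the actual Lemmy tool uses.

```python
def caption_image(image_bytes: bytes) -> str:
    """Stub for the image-to-text step; a real version would invoke an ML model."""
    return "children playing at a playground"

# Hypothetical rule sets: flag only when both contexts co-occur in the caption.
SEXUAL_TERMS = {"nude", "explicit", "sexual", "pornographic"}
CHILD_TERMS = {"child", "children", "kid", "kids", "minor"}

def looks_like_csam(caption: str) -> bool:
    words = set(caption.lower().split())
    return bool(words & CHILD_TERMS) and bool(words & SEXUAL_TERMS)

caption = caption_image(b"...")
print(looks_like_csam(caption))  # False: child context but no sexual context
```

The point of the two-set rule is the one made in the reply below this comment: neither "children present" nor "sexual content" alone triggers a flag, only their combination.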

      • palitu · +8 · 1 year ago

        You mean the ML model?

        I don’t think it is too bad, as it is more like looking for a description that combines children and a sexual context. The model can be trained without CSAM, since it generalises from situations it has seen before: a pornographic picture (sexual context) and kids playing at a playground (children in the scene).

  • Cypher@lemmy.world · +6/−2 · 1 year ago

    The justifications for closed-source scanners are slim. Even knowing how a scanner works, it would be difficult to alter CSAM to completely avoid detection, and any gaps could quickly be closed.

    We need an open source scanner that can be integrated safely and with trust into FOSS.

    This will only happen with government permission, as anyone developing this without permission obviously opens themselves up to legal action.

    The FOSS community needs to get governments on side with this, but I don’t know where lobbying would best be started. Potentially the EU would be most receptive to this approach?

  • WhoRoger@lemmy.world · +1 · 1 year ago · edited

    Don’t see why not. You can download a database of hashes and compare against it locally. Granted, those hashes aren’t “free”, but that’s due to the legal status of such material. The principle itself - comparing hashes - can be FOSS.

    Yea, people can look into the algorithms to see how they work, circumvent them, etc., but that’s no different than with… anything else. If someone is motivated enough to distribute the material, they’ll make their own network. FOSS doesn’t make any difference here.