cross-posted from: https://lemmy.dbzer0.com/post/4500908
In the past months, there’s been an issue on various instances where accounts would start uploading blatant CSAM to popular communities. First of all, this traumatizes anyone who sees it before the admins get to it, including the admins who have to review it in order to take it down. Second of all, even if the content is a link to an external site, lemmy still caches the thumbnail and stores it in the local pict-rs, causing headaches for the admins who have to somehow clear that out. Finally, both image posts and problematic thumbnails are federated to other lemmy instances, which then likewise end up storing such content in their own pict-rs image storage.
This has caused multiple instances to take radical measures, from defederating liberally, to stopping image uploads, to even shutting down entirely.
Today I’m happy to announce that I’ve spent multiple days developing a tool you can plug into your instance to stop this at the source: pictrs-safety
Using a new feature from pict-rs 0.4.3, we can now cause pict-rs to call an arbitrary endpoint to validate the content of an image before uploading it. pictrs-safety provides that endpoint, which uses an asynchronous approach to validate such images.
I had already developed fedi-safety, which could be used to regularly go through your image storage and delete all potential CSAM. I have now extended fedi-safety to plug into pictrs-safety and scan images sent by pict-rs.
The end effect is that any images uploaded or federated into your instance will be scanned in advance and if fedi-safety thinks they’re potential CSAM, they will not be uploaded to your image storage at all!
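To give a rough idea of the mechanism, here is a minimal sketch of what such a validation endpoint could look like. This is a simplified illustration rather than the actual pictrs-safety code: the route name, the request/response contract, and the check_image() helper are placeholder assumptions for the example (the idea being that pict-rs POSTs the image and treats a 2xx response as acceptance).

```python
# Simplified illustration of an image-validation endpoint, NOT the actual
# pictrs-safety code. Assumptions: pict-rs POSTs the image bytes to the
# configured URL and treats a 2xx response as "keep" and any other status
# as "reject". check_image() is a placeholder for whatever scanner
# (e.g. a fedi-safety worker) produces the real verdict.
from flask import Flask, abort, request

app = Flask(__name__)


def check_image(data: bytes) -> bool:
    """Placeholder classifier hook: return True if the image looks safe."""
    raise NotImplementedError("plug your scanner in here")


@app.route("/api/v1/scan", methods=["POST"])
def scan():
    image_bytes = request.get_data()
    if not image_bytes:
        abort(400)  # nothing was sent, refuse the request
    if check_image(image_bytes):
        return "", 200  # accepted: pict-rs stores the image
    return "", 403      # rejected: pict-rs drops the upload
```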
This covers three important vectors for abuse:
- Malicious users cannot upload CSAM for trolling communities, even novel generative-AI CSAM.
- Users cannot upload CSAM images and then never submit a post or comment (which would make them invisible to admins); the images will be automatically rejected during upload.
- Federated images and thumbnails of CSAM will be rejected by your pict-rs.
Now, that said, this tool is AI-driven and thus not perfect. There will be false positives, especially around lewd images and images which contain children or child-related topics (even if not lewd). This is the bargain we have to take to prevent the bigger problem above.
By my napkin calculations, false positive rates are below 1%, but certainly someone’s innocent meme will eventually be affected. If this happens, I ask that you just move on, as we currently don’t have a way to whitelist specific images. Don’t try to resize or modify the images to pass the filter. It won’t help you.
For lemmy admins:
- pictrs-safety contains a docker-compose sample you can add to your lemmy’s docker-compose. You will need to put the .env in the same folder, or adjust the provided variables. (All kudos to @[email protected] for the docker support).
- You need to adjust your pict-rs ENVIRONMENT as well. Check the readme.
- fedi-safety must run on a system with a GPU. The reason is that lemmy provides just a 10-second grace period for each upload before it times out the upload regardless of the results, and a CPU scan will not be fast enough. However, my architecture allows fedi-safety to run in a different place than pictrs-safety; I am currently running it from my desktop. In fact, if you have a lot of images to scan, you can connect multiple scanning workers to pictrs-safety! (See the rough sketch after this list.)
- For those who don’t have access to a GPU, I am working on an NSFW scanner which will use the AI-Horde directly instead and won’t require fedi-safety at all. Stay tuned.
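For the curious, here is a rough sketch of one way remote scanning workers could talk to pictrs-safety. This is not the actual fedi-safety code or API: the /pending and /verdict routes, the payload shapes, and the classify() helper are made-up placeholders to illustrate why the GPU machine can sit anywhere.

```python
# Rough sketch of a pull-based GPU worker. NOT fedi-safety's actual code
# or API: the routes and payloads below are invented placeholders.
import time

import requests

PICTRS_SAFETY_URL = "http://example.invalid:8080"  # placeholder address


def classify(image_bytes: bytes) -> bool:
    """Hypothetical GPU classifier: True means the image is considered safe."""
    raise NotImplementedError


def worker_loop() -> None:
    while True:
        # Ask the endpoint for images that still need a verdict (hypothetical route).
        pending = requests.get(f"{PICTRS_SAFETY_URL}/pending", timeout=5).json()
        for item in pending:
            image = requests.get(item["url"], timeout=10).content
            # Report the verdict back so the endpoint can answer pict-rs in time.
            requests.post(
                f"{PICTRS_SAFETY_URL}/verdict",
                json={"id": item["id"], "safe": classify(image)},
                timeout=5,
            )
        time.sleep(1)  # avoid hammering the endpoint when idle


if __name__ == "__main__":
    worker_loop()
```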
For other fediverse software admins
fedi-safety can already be used to scan your image storage for CSAM, so you can also protect yourself and your users, even on mastodon or firefish or whatever.
I will try to provide real-time scanning for each software in the future as well, and PRs are welcome.
Divisions by zero
This tool is already active now on divisions by zero. Its usage should be transparent to you, but do let me know if you notice anything wrong.
Support
If you appreciate the priority work that I’ve put into this tool, please consider supporting this and future development work on liberapay:
All my work is and will always be FOSS and available for all who need it most.
deleted by creator
To be fair, there are now off-the-shelf AI solutions available which were simply impossible 10 or even 5 years ago.
deleted by creator
Bold of you to assume Reddit wants to block such a thing.
deleted by creator
In the interest of creating as little load as possible for the eventual AI Horde cluster, will there be an option to only check federated images?
That would depend on lemmy and pict-rs devs providing such classification. If it exists, I can support it.
Any plan to integrate with lemmy directly and check those as well, removing the post if triggered?
That might be more load than your worker can serve. But this is theoretically already possible using pythorhead and parsing every incoming comment for image links, like an automoderator. You don’t need pictrs-safety for this.
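Something along these lines (a sketch using plain requests against lemmy’s HTTP API rather than pythorhead, just to keep it self-contained; the instance URL and the scan_image_url() hook are placeholders you’d swap for your own):

```python
# Sketch of the automoderator idea: poll recent comments and pull out
# image links for scanning. scan_image_url() is a hypothetical hook into
# whatever scanner you run; the instance URL is a placeholder.
import re

import requests

INSTANCE = "https://lemmy.example.org"  # placeholder instance
IMAGE_LINK = re.compile(r"https?://\S+\.(?:png|jpe?g|gif|webp)", re.IGNORECASE)


def scan_image_url(url: str) -> None:
    """Hypothetical: hand the linked image to your scanner of choice."""
    raise NotImplementedError


def check_recent_comments(limit: int = 50) -> None:
    resp = requests.get(
        f"{INSTANCE}/api/v3/comment/list",
        params={"sort": "New", "limit": limit},
        timeout=10,
    )
    resp.raise_for_status()
    for view in resp.json().get("comments", []):
        content = view.get("comment", {}).get("content", "")
        for url in IMAGE_LINK.findall(content):
            scan_image_url(url)


if __name__ == "__main__":
    check_recent_comments()
```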
Be careful with this though. I think I remember some jurisdictions require server owners not to delete CSAM but to report it instead. Verify that you aren’t obligated to keep it before deleting it.
> Malicious users cannot upload CSAM for trolling communities. Even novel GenerativeAI CSAM. Users cannot upload CSAM images and never submit a post or comment (making them invisible to admins). The images will be automatically rejected during upload
There wouldn’t be anything to delete, as it would have never been saved with this.
deleted by creator
HEY LOCAL PD OFFICE,
SOMEONE TRIED TO UPLOAD SOME POTENTIAL CHILD PORN TO MY LEMMY INSTANCE.
No… I don’t have an IP for who uploaded it.
Sorry, I don’t know where it came from. It just got federated across the fediverse to me.
No… I don’t have the content either, it doesn’t get saved.
Sorry… I guess I really don’t have any details at all for you.
Hosting a lemmy instance in the US is a headache.
Hence my reaction to these issues. https://lemmyonline.com/post/459013
But… under new management now, in Germany. https://lemmyonline.com/post/587565
Nearly every country has strong anti-CSAM laws on the books which require reporting known distribution. I don’t think that’s a bad thing.
deleted by creator
So the image never touches the server side, even in RAM? It always remains only on the client machine and gets checked there?
If so, then this could be a pretty neat, tidy way to deal with this issue; otherwise the image is on the server, even if you “delete it real fast” or such, and I imagine you’d then still need to be in compliance with the law regarding saving and reporting it.
Did you read the post? The image is sent to an endpoint that has a hosted AI solution that checks it.
It 100% touches the server, it’s just not stored anywhere and gets blocked.
Does that leave open a possible attack, in which the attacker can just fill up the server’s hard drive with AI-generated CSAM?
I think that if, in good faith, the person is unable to accept more CSAM due to the fact that their hard drive is full, there isn’t an issue. The intent of the law is that, if someone knows something is CSAM, they need to report it. I don’t think the government is going to come down hard on Lemmy server owners unwittingly receiving CSAM through federation (though they certainly would want them to report and take down the CSAM on their servers).
It’s not getting uploaded, so nothing to keep
That’s the point. The Kiddy Porn never hits the server. There might be an argument for the scanner cache to be saved for later reporting to authorities. That is assuming the scanner also logs the account, IP, time, etc. of the upload.
Thank you so much for caring, well done!
🫡 Thank you for your service comrade.
I’m curious. How do you train such AI without being raided by the authorities?
You don’t tell them.
Offload all work to an anonymous VPS provider possibly? I dunno, just spitballing.
deleted by creator
deleted by creator
Thanks dude. If we ever meet in person I owe you a beer.
I have a ko-fi too, just saying… 😁
Looks like a really good solution to the problem; even a false-positive rate of 1% seems like a small trade-off considering the amount of spam and rubbish posted.
This is very cool! Too bad I don’t have access to a VPS with a GPU to try it at the moment.
Is it possible to offer this as a service with a small monthly fee (e.g. on-demand pricing depending on how many images you scan) or donations, so instance owners without a GPU can use it?
I’d love to, but there are legal concerns about the transfer of potential CSAM to third-party services which I’d rather not think about.
As you said, this may have to be the bargain of the fediverse. I think a democratic process on the training of said AI might be what gives the best outcomes from this.
So I don’t know if I missed something, but I don’t see an explanation of what CSAM is…
Child Sexual Abuse Materials iirc
Really glad I didn’t Google that…
When did that term change? Last I heard it was CP
Many survivors of child sexual exploitation and their advocates have asked media sources and officials to use CSAM instead of child porn because porn implies a level of consent that children are unable to give. Adults can choose to make porn, but children can only be exploited.
deleted by creator
deleted by creator
deleted by creator
PhotoDNA requires a lot more bureaucratic work than most instance admins can handle, but if you really want it, you can easily plug it into pictrs-safety instead.
However, PhotoDNA will not catch novel generative-AI CSAM.
deleted by creator
You and especially your users won’t know whether a photorealistic generative AI image is real or not.
deleted by creator
I think you were merely being pedantic, but there are some interesting points in there.
Is it a crime to generate fake “csam”?
Should it be a crime?
How can prosecutors get convictions against a defense of “no, your honour, that video is AI-generated”?
What we have now is still miles off general AI, but it’s going to take years for society to catch up. Interesting times.
Ah the kinds of comments I quit reddit to no longer see…
Well on the bright side, at least they get downvoted here.
Microsoft’s PhotoDNA
My issue with these services is that they aren’t available for non-US people. db0’s project can be deployed anywhere (provided you have a capable GPU).
deleted by creator
That also isn’t available for non-US people.
It’s available to every Cloudflare user, US or global.
PhotoDNA is also available for every website in the world.
It isn’t. I already tried applying for both. You need NCMEC credentials, which are only available to those in the US.
edit: Here’s a comment I made about it.
deleted by creator
I don’t see the problem here. What makes you think that the false positives in this case are “unacceptable”? So what if Joe Bloggs isn’t able to share a picture of a random kid (why tho) or an image of a child-like person?
deleted by creator
Unnecessary censorship is fine when it’s clearly an underaged person. You don’t need to check their ID to tell if it’s CSAM, and you don’t need to with generated child stuff either. If you want to debate its legality, that’s a diff conversation, but even an AI-generated version is enough to mentally scar the viewer, so there is still harm being done.
deleted by creator
Again, what you’re saying isn’t relevant to Lemmy at all. Please elaborate how would a graphics card on some random server help protect actual victims?
deleted by creator
PhotoDNA isn’t run by Microsoft anymore, but by the International Centre for Missing and Exploited Children.
My friend, you haven’t heard about Oracle.
Microsoft at least gave the world Powershell, to balance out their sins. I can also name other good things they have done. Oracle is pure and deliberate evil.
I believe that the human race will end in one of three ways:
- asteroid strike
- disease
- Oracle
Or just turn off image uploads