GitHub - Haidra-Org/lemmy-safety: A script that goes through a lemmy pict-rs object storage and tries to prevent illegal or unethical content

I noticed a bit of panic around here lately, and as I have had to continuously fight against pedos for the past year, I have developed tools to help me detect and prevent this content.

As luck would have it, we recently published one of our anti-CSAM checker tools as a Python library that anyone can use. So I thought I could use this to help lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete all CSAM, or it can run continuously and scan and delete all new images as well. The suggested approach is to run it once with --all, and then run it as a daemon and leave it running.

A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we’re not quite there yet.

Let me know if you run into any issues or have improvements to suggest.

EDIT: Just to clarify, you should run this on your desktop PC with a GPU, not on your lemmy server!

Thank you for helping make the fediverse a better place.

@zoe@aussie.zone

anything worthy is illegal in the us, get gud! thankfully Lemmy is hosted in europe or any vps somewhere

Just going to argue on behalf of the other users who know apparently way more than you and I do about this stuff:

WhY nOt juSt UsE thE FBi daTaBaSe of CSam?!

(because one doesn’t exist)

(because if one existed it would either be hosting CSAM itself or showing just the hashes of files - hashes which won’t match if even one bit is changed due to transmission data loss / corruption, automated resizing from image hosting sites, etc)

(because this shit is hard to detect)
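That hash-fragility point is easy to demonstrate: with a cryptographic hash like SHA-256, flipping a single bit of the input yields a completely unrelated digest, which is why exact-match hash lists fail on resized or recompressed copies.

```python
import hashlib

# Two byte strings that differ by exactly one bit.
original = b"\x00" * 1024
altered = b"\x01" + b"\x00" * 1023

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(altered).hexdigest()

print(h1 == h2)  # False: the digests share essentially nothing
# Positions where the hex digests happen to agree: close to random chance.
same = sum(a == b for a, b in zip(h1, h2))
print(same, "of", len(h1), "hex digits match")
```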

Some sites have tried automated detection of CSAM images. Youtube, in an effort to try to protect children, continues to falsely flag 30 year old women as children.

OP, I’m not saying you should give up, and maybe what you’re working on could be the beginning of something that truly helps in the field of CSAM detection. I’ve got only one question for you (which hopefully won’t be discouraging to you or others): what’s your false-positive (or false-negative) detection rate? Or, maybe a question you may not want to answer: how are you training this?

db0 (creator)

I’m not training it. I’m using publicly available CLIP models.

The false positive rate is acceptable. But my method is open source, so feel free to validate it on your end.

Acceptable isn’t a percentage, but I see that in your opinion it’s acceptable. Thanks for making your work open source. I do wish your project the best of luck. I don’t think I have what it takes to validate this myself, but if I end up hosting an instance I’ll probably start using this tool myself. It’s better than nothing; at present I have zero instances but also zero mods lined up.

@chrisbit@leminal.space

Thanks for releasing this. After doing a --dry_run can the flagged files then be removed without re-analysing all images?

db0 (creator)

Not currently supported. It’s on my to-do list.

@donut4ever@lemm.ee

This is awesome. Thank you for making it.

Nowhereman

Can this be used with the Lemmy-easy-deploy method?

db0 (creator)

This shouldn’t run on your lemmy server (unless your lemmy server has a gpu)

Nowhereman

I can put one in…

db0 (creator)

I don’t know your setup, but unless it’s a very cheap GPU, it would be a bit of a waste to use it only for this purpose. But up to you

snowe

Hey @db0@lemmy.dbzer0.com, just so you know, this tool is most likely very illegal to use in the USA. Something that your users should be aware of. I don’t really have the energy to go into it now, but I’ll post what I told my users in the programming.dev discord:

that is almost definitely against the law in the USA. From what I’ve read, you have to follow very specific procedures to report CSAM as well as retain the evidence (yes, you actually have to keep the pictures), until the NCMEC tells you you should destroy the data. I’ve begun the process to sign up programming.dev (yes you actually have to register with the government as an ICS/ESP) and receive a login for reports.

If you operate a website, and knowingly destroy the evidence without reporting it, you can be jailed. It’s quite strange, and it’s quite a burden on websites. Funnily enough, if you completely ignore your website, so much so that you don’t know that you’re hosting CSAM then you are completely protected and have no obligation to report (in the USA at least)

Also, that script is likely to get you even more into trouble because you are knowingly transmitting CSAM to ‘other systems’, like dbzer0’s aihorde cluster. that’s pretty dang bad…

here are some sources:

deleted by creator

snowe

the ridiculous part of it is, as I understand it, if you completely ignore your website and essentially never know that you’re hosting CSAM then you cannot be held liable for it. But then, someone’s probably literally gonna come hunt you down to tell you in person (FBI) lol. So probably best to not ignore it.

db0 (creator)

Note that the script I posted is not transmitting the images to the AI Horde.

Also keep in mind this tool is fully automated and catches a lot of false positives (due to the nature of the scan, it couldn’t be otherwise). So one could argue it’s a generic filtering operation, not an explicit knowledge of CSAM hosting. But IANAL of course.

This is unlike cloudflare or other services which compare with known CSAM.

EDIT: That is to mean, if you use this tool to forward these images to the govt, they are going to come after you for spamming them with garbage

snowe

Cloudflare still has false positives, the NCMEC does not care if they get false positives. If you read some of those links I provided it wouldn’t be considered a generic filtering operation, from how I’m reading it at least. I wouldn’t take the chance, especially not with running the software on your own hardware in your own house, split from the server.

I think you’re not in the US? So it’s probably different for your jurisdiction. Just want to make it clear that in the US, from what I’ve read up on, this would be considered against the law. You are running software to filter for CSAM, so you are obligated to report it. Up to 1 year jail time for not doing so.

Carlos Solís

Nothing that can’t be fixed by adding a quarantine option instead of deleting the offending picture. Hopefully someone can upload a patch for that?

db0 (creator)

One can easily hook this script to forward to whoever is needed, but I think they might be a bit annoyed after you send them a couple hundred thousand false positives without any csam.

snowe

The problem is you aren’t warning people that deleting CSAM without following your applicable laws can potentially get people that use your tool thrown in jail. You went ahead and built the tool without detailing any of the applicable laws around it. Cloudflare explicitly calls out that in their documentation because it’s very important. I really like the stuff you put out, but this is not the way to do it. I know lots of people on Lemmy hate CF and any sort of large company, but running this stuff yourself without understanding the law is sure to get someone in trouble.

I don’t even know why you think I was recommending for your system to forward the reports to the authorities. I didn’t sleep very much last night, so I must have glazed over it, but I see nowhere where I said that.

db0 (creator)

Honestly, I think you’re grossly overstating the legal danger a random small lemmy sysadmin is going to get into for running an automated tool like this.

In any case, you’ve made your point, people can now make their own decisions on whether it’s better to pretend nothing is wrong on their instance, or if they want at least this sort of blanket cleanup. Far be it from me to tell anyone what to do.

I don’t even know why you think I was recommending for your system to forward the reports to the authorities

You may not have meant it, but you strongly implied something of the sort. But since this is not what you’re suggesting, I’m curious to hear what your optimal approach to this problem would be.

snowe

You may not have meant it, but you strongly implied something of the sort. But since this is not what you’re suggesting I’m curious to hear what your optimal approach to those problem would be here.

Optimal approach is to use the existing systems that are used by massive corporations to solve this problem already. I know everyone on lemmy hates that, but this isn’t something to mess around with. The reason this is optimal is because NCMEC provides the hashes only to these companies. You’re not going to be able to get the hashes (this is a good thing… imagine some child abuser getting access to these hashes and then using them to evade detection). So if you can’t get these hashes (and you shouldn’t want them either) then you should use a service that has them. It is by far the best way to filter and has been proven time and time again to be successful.

The easiest is CloudFlare’s, and yes, you will have to use them as your DNS which I also understand a vast majority of admins hate. But there are other options as well

  • PhotoDNA
  • Safer
  • Facebook PDQ

Because access to the original hash databases is considered sensitive, NCMEC will not provide these to smaller platforms. Neither will Microsoft provide the source code of its PhotoDNA algorithm except to its most trusted partners, because if the algorithm became widely known, it is thought that this might enable abusers to bypass it.

In that article, it actually points out that a solution called Safer that uses machine learning and image recognition has very flawed results and is incredibly biased. So if these massive platforms can’t get this kind of image recognition right then it’s probably best to not waste money and time on it. The article even points out that for smaller platforms it’s not worth it.

We also know in general terms that machine learning algorithms for image recognition tend to be both flawed overall, and biased against minorities specifically. In October 2020, it was reported that Facebook’s nudity-detection AI reported a picture of onions for takedown. It may be that for largest platforms, AI algorithms can assist human moderators to triage likely-infringing images. But they should never be relied upon without human review, and for smaller platforms they are likely to be more trouble than they are worth

db0 (creator)

I look forward to see your success with that approach. Godspeed

@hoodlem@hoodlem.me

Ugh, what a mess. Thought about this for a while today and three thoughts started circulating in my head:

  1. Hire an actual lawyer and get firm legal advice on this issue. I think this would fall to the admins, not the devs. Maybe an admin who wanted could volunteer to contact a lawyer? We could do a gofundme for one-time consultation legal fees.

  2. Stop using pictrs completely and instead use links to a third party such as Imgur or whatever. They’re in this business and I’m sure already have dealt with it and have a solution. Yes it sucks that Imgur (or whatever third party) could delete our legitimate images at any time, but IMHO it’s worth it to avoid this headache. At any rate it offloads the liability from an admin. Of course, IANAL and this is a question we would want to ask a lawyer about.

  3. Needing a GPU increases the expenses for an admin significantly. It will start to not be worth it for quite a few to keep their instance running.

Thanks for bringing up this point. This is obviously a nuanced issue that is going to need a well-thought-out solution.

db0 (creator)

The GPU doesn’t have to be high-end, and can run on someone’s PC

Depending on the country, those laws may be different. Here is the story of a guy who ran a Tor exit node in Austria who would have been protected had he been a company (the law was later changed).

https://lowendbox.com/blog/man-found-guilty-of-child-porn-because-he-ran-a-tor-exit-node-the-story-of-william-weber/

Cyborganism

I don’t host a server myself, but can this tool identify the users who posted the images and create a report with their IP addresses?

This could help identify who spreads that content and it can be used to notify authorities. No?

db0 (creator)

No, but it will record the object storage path. We then need a way to connect that path to the pict-rs image ID, and once we do that, connect the pict-rs image ID to the comment or post which uploaded it. I don’t know how to do the last two steps however, so hopefully someone else will step up for this.

I’m not well versed in the field, but I understand that large tech companies which host user-generated content match the hashes of uploaded content against a list of known bad hashes as part of their strategy to detect and tackle such content.

Could it be possible to adopt a strategy like that as a first-pass to improve detection, and reduce the compute load associated with running every file through an AI model?

@dan@upvote.au

match the hashes

It’s more than just basic hash matching because it has to catch content even if it’s been resized, cropped, reduced in quality (lower JPEG quality with more artifacts), colour balance change, etc.

Ah, of course - that’s unfortunate, but thanks for the pointer.

Well, we have hashing algorithms that do exactly that, like phash for example.
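For illustration, here is a toy average-hash (a simpler cousin of phash) in plain Python, operating on an already-decoded 8×8 grayscale grid; real implementations (e.g. the Python imagehash library) handle image decoding and use more robust DCT-based variants.

```python
def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid.

    pixels: 8 rows of 8 brightness values (0-255).
    Each bit is 1 if that pixel is brighter than the mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(h1, h2):
    """Number of differing bits; a small distance means visually similar."""
    return bin(h1 ^ h2).count("1")

# A gradient "image" and a uniformly brightened copy of it.
img = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
brighter = [[min(255, p + 10) for p in row] for row in img]

# Brightening shifts the mean along with the pixels, so the hash survives.
print(hamming(average_hash(img), average_hash(brighter)))  # → 0
```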

@dan@upvote.au

Definitely. A lot of the good algorithms used by big services are proprietary though, unfortunately.

Can you point me to some of them? I’m quite interested in visual hashing.

@dan@upvote.au

Microsoft’s PhotoDNA is probably the most well-known. Every major service that has user-generated content uses it. Last I checked, it wasn’t open-source. It was built for detecting CSAM, but it’s really just a general-purpose similarity hashing algorithm.

Meta has some algorithms that are open-source: https://about.fb.com/news/2019/08/open-source-photo-video-matching/

Google has CSAI Match for hash-matching of videos and Google Content Safety API for classification of new content, but both are proprietary.

db0 (creator)

There are better approaches than hashing. For comparing images I am calculating the “distance” in tensors between them. This can match even when compression artifacts are involved or the images are slightly altered.
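As a rough sketch of that distance idea (the three-element vectors below are stand-ins; real CLIP embeddings are high-dimensional and come from the model): cosine similarity stays near 1.0 for a recompressed copy of the same image but drops sharply for unrelated images.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical stand-ins for CLIP image embeddings.
img_a = [0.9, 0.1, 0.3]
img_a_jpeg = [0.88, 0.12, 0.31]   # same image after recompression: tiny shift
img_b = [0.1, 0.9, -0.4]          # unrelated image

print(cosine_similarity(img_a, img_a_jpeg) > 0.99)  # True
print(cosine_similarity(img_a, img_b) < 0.5)        # True
```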

nick

I think deleting images from the pictrs storage can corrupt the pictrs sled db so I would not advise it, you should go via the purge endpoint on the pictrs API.

db0 (creator)

Nah. It will just not find those images to serve

nick

Interesting, when I tried a while back it broke all images (not visible on the website due to service worker caching but visible if you put any pictrs url into postman or something)

db0 (creator)

Well you can clearly see images still here ;)

nick

True, you’re correct. I’m just not sure how you did it without corrupting the sled db. Maybe I’m just unlucky

db0 (creator)

The sled db is not touched. It’s just that when pict-rs tries to download the file pointed to by the sled db, it gets a 404.

possibly a cat

I saw you mention that you were working on this - props for getting it out so quickly!

A10@kerala.party

Don’t have a GPU on my server. How is performance on the CPU?

@Rescuer6394@feddit.nl

The model under the hood is CLIP Interrogator, and it looks like it is just the torch model.

It will run on CPU, but we can do better: an ONNX version of the model will run a lot better on CPU.

db0 (creator)

Sure, or a .cpp. But it will still not be anywhere near as good as a GPU. However, it might be sufficient for something that just checks new images.

@relic_@lemm.ee

I’m not really convinced that a GPU backend is needed. Was there ever a comparison of the different CLIP model variants? Or a graph optimized / quantized ONNX version?

I think the proposed solution makes a lot of sense for the task at hand if it were integrated on the pic-rs end, but it would be worth investigating further improvements if it were on the lemmy server end.

db0 (creator)

For scanning all existing images, trust me a good GPU is necessary. I’m scanning all my backend on a 4090 with 400 threads and I’m still only halfway through after 4 hours.

For scanning newly uploaded images, a CPU might be sufficient but the users might get annoyed at the wait times.

db0 (creator)

It will be atrocious. You can run it, but you’ll likely be waiting for weeks if not months.

veroxii

This is extremely cool.

Because of the federated nature of Lemmy many instances might be scanning the same images. I wonder if there might be some way to pool resources that if one instance has already scanned an image some hash of it can be used to identify it and the whole AI model doesn’t need to be rerun.

Still the issue of how do you trust the cache but maybe there’s some way for a trusted entity to maintain this list?

@Starbuck@lemmy.world

TBH, I wouldn’t be comfortable outsourcing the scanning like that if I were running an instance. It only takes a bit of resources to know that you have done your due diligence. Hopefully this can get optimized to get time to be faster.

@irdc@derp.foo

How about a federated system for sharing “known safe” image attestations? That way, the trust list is something managed locally by each participating instance.

Edit: thinking about it some more, a federated image classification system would allow some instances to be more strict than others.

@huginn@feddit.it

Consensus algorithms. But it means there will always be duplicate work.

No way around that unfortunately

@kbotc@lemmy.world

Why? Use something like Raft: elect the leader, have the leader run the AI tool, then exchange results, with each node running its own subset of image hashes.

That does mean you need a trust system, though.

@irdc@derp.foo

As I’m saying, I don’t think you need to: manually subscribing to each trusted instance via ActivityPub should suffice. The pass/fail determination can be done when querying for known images.

@huginn@feddit.it

Yeah that works. Who is the leader and how does it change? Does Lemmy.World take over because it’s largest?

@kbotc@lemmy.world

Hash the image, then assign hash ranges to servers that are part of the ring. You’d use RAFT to get consensus about who is responsible for which ranges. I’m largely just envisioning the Scylla gossip replacement as the underlying communications protocol.
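A minimal sketch of the hash-range idea (the server names and the even split of the hash space are made up for illustration): every node can independently compute which ring member is responsible for a given image, so only one instance needs to scan it.

```python
import hashlib

SERVERS = ["lemmy.world", "lemm.ee", "feddit.de"]  # hypothetical ring members

def responsible_server(image_bytes: bytes) -> str:
    """Map an image to the ring member that owns its hash range."""
    digest = hashlib.sha256(image_bytes).digest()
    # Interpret the first 8 bytes as a position in a 2^64 hash space,
    # then split that space into contiguous, equal-sized ranges.
    position = int.from_bytes(digest[:8], "big")
    index = position * len(SERVERS) // 2**64
    return SERVERS[index]
```

Any two nodes that agree on the member list compute the same answer, with no coordination needed per image; Raft (or gossip) is only needed to agree on membership changes.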

gabe [he/him]

I think building such a system of some kind that can allow smaller instances to rely from help from larger instances would be extremely awesome.

Like, lemmy has the potential to lead the fediverse in safety tools if we put the work in.

@neutron@thelemmy.club

I’d rather have a text-only instance with no media at all. Can this be done?

Rentlar

Yes it is definitely possible! Just have no pictrs installed/running with the server. Note it will still be possible to link external images.

@Morgikan@lemm.ee

My understanding was it’s bad practice to host images on Lemmy instances anyway as it contributes to storage bloat. Instead of coming up with a one-off script solution (albeit a good effort), wouldn’t it make sense to offload the scanning to a third party like imgur or catbox who would already be doing that and just link images into Lemmy? If nothing else wouldn’t that limit liability on the instance admins?

@hoodlem@hoodlem.me

I was thinking the same thing. Stop storing the images and offload to Imgur or whatever. They likely already have a solution for this issue. Show images inline instead of a link. Looks the same, no liability.

Saying that, this is tremendously cool. I was given pause though by another poster on the thread mentioning the legality of using this in the U.S.

Rentlar

Might be what we’d need to do for small servers lacking moderation, wanting to avoid the liability from potentially hosting harmful images.

I used postimg.cc when hosting was having issues, I’ll probably use it more to ease up Lemmy admins’ jobs.

RCMaehl [Any]

Hi db0, if I could make an additional suggestion.

Add detection of additional content appended or attached to media files. Pict-rs does not reprocess all media types on upload and it’s not hard to attach an entire .zip file or other media within an image (https://wiki.linuxquestions.org/wiki/Embed_a_zip_file_into_an_image)
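A hedged sketch of one such check: a JPEG stream ends with the EOI marker (FF D9), so any bytes after it suggest an appended payload. (A real check would parse the segment structure properly, since FF D9 can also occur inside the compressed data; the byte strings below are fakes for illustration.)

```python
JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker

def trailing_bytes(data: bytes) -> int:
    """Return the number of bytes after the first JPEG end-of-image marker."""
    end = data.find(JPEG_EOI)
    if end == -1:
        raise ValueError("not a JPEG (no EOI marker)")
    return len(data) - (end + len(JPEG_EOI))

# A fake JPEG, and the same bytes with an embedded ZIP (PK\x03\x04 header).
clean = b"\xff\xd8<image data>\xff\xd9"
stego = clean + b"PK\x03\x04<zip contents>"

print(trailing_bytes(clean))  # → 0
print(trailing_bytes(stego))  # → 18: extra payload detected
```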

db0 (creator)

Currently I delete on PIL exceptions. I assume if someone uploaded a .zip to your image storage, you’d want it deleted

RCMaehl [Any]

As @Starbuck@lemmy.world stated. They’re still valid image files, they just have extra data.

@Starbuck@lemmy.world

The fun part is that it’s still a valid JPEG file if you put more data in it. The file should be fully re-encoded to be sure.

db0 (creator)

In that case, PIL should be able to read it, so no worries

@Starbuck@lemmy.world

But I could take ‘flower.jpg’, which is an actual flower, and embed a second image, ‘csam.png’ inside it. Your scanner would scan ‘flower.jpg’, find it to be acceptable, then in turn register ‘csam.png’. Not saying that this isn’t a great start, but this is the reason that a lot of websites that allow uploads re-encode images.

db0 (creator)

my pict-rs already re-encodes everything. This is already a possibility for lemmy admins

@Starbuck@lemmy.world

Good to hear they have that covered already. Looks like a great tool!

CaptainBlagbird

How do you even safely test scripts/tools like this 😵‍💫

I’d bet there’s a CSAM test image dataset with innocuous images that get picked up by the script. Not sure how the system works, but if it’s through hashes then it would be pretty simple to add that to the script.
