This sounds like a bad idea; there are already cases of people getting flagged for CSAM after sending photos of their children to doctors.
https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
Here's the article. Imagine losing access to everything your physical driver's license can't help you get back. I would be in jail for one reason or another if Google fucked my life over that badly.
As for Mark, Ms. Lilley, at Google, said that reviewers had not detected a rash or redness in the photos he took and that the subsequent review of his account turned up a video from six months earlier that Google also considered problematic, of a young child lying in bed with an unclothed woman.
Mark did not remember this video and no longer had access to it, but he said it sounded like a private moment he would have been inspired to capture, not realizing it would ever be viewed or judged by anyone else.
They could have just made this up wholesale. What is Mark gonna do about it? He literally doesn't have access to the video they claim incriminates him, and the police department has already cleared him of any wrongdoing. Google is just being malicious at this point.
This seems like a lot of risky effort for something that would be defeated by even rudimentary encryption before sending?
Mind you, if there were people insane enough to be sharing CSAM "in the clear", then it would be better to catch them than not. I just suspect most of what gets flagged by this will be kids making inappropriate images of their classmates.
First: you'd probably be shocked how many pedos have zero opsec and just post/upload shit in the clear.
By which I mean most of them, because those pieces of crap don't know shit about shit and don't encrypt anything and just assume crap is private.
And second: yeah, it'll catch kids generating CSAM, but it'll catch everyone else too, so that's probably a fair trade.
That's kind of what I was alluding to. If they have zero opsec, they're almost certainly sharing known CSAM too, and that's the kind of stuff where just the hashes can be used to catch them. And the hashes can be safely shared with any messaging service or even OS developer, because the hashes aren't CSAM themselves.
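To make that concrete, here's a minimal sketch of what "share only the hashes" could look like on a platform's side. The digest list and function name are made up for illustration, and real systems use perceptual hashes (PhotoDNA-style) rather than plain SHA-256, since an exact digest stops matching after any re-encoding.

```python
import hashlib

# Hypothetical list shared with the platform out-of-band: it contains only
# hex digests, never any imagery. The placeholder value below is not a real
# hash of anything.
KNOWN_BAD_DIGESTS = {
    "0" * 64,
}

def is_known(file_bytes: bytes) -> bool:
    """Flag an upload only if its digest is already on the shared list."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_DIGESTS
```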
What I was calling "risky" about the above is that it sounds like the first time law enforcement is sharing actual CSAM with a technology company so that the company can train an AI model on it.
Law enforcement has very well-developed processes and safeguards around who can access CSAM and why; it's thoroughly logged and scrutinised, and supported with therapy and so on.
Call me skeptical that these data companies putting in tenders to receive CSAM and develop models are going to have anywhere near a suitable level of safeguards and checks. Lowest bidder and all that.
So it all seems like a risky endeavour, and really it's only going to catch, as you say, your zero-opsec paedo, but those people were going to get caught anyway, sharing known CSAM detected with hashes.
So it seems like it has a really narrow target, and it's undertaken with significant risk. It seems like someone just wants to show "they're doing something", or some data company made a reeeally glossy brochure.
first time law enforcement is sharing actual CSAM with a technology company
It's very much not: PhotoDNA, which is/was the gold standard for content identification, is a collaboration between a whole bunch of LEOs and Microsoft. The end user only ever gets a 'match / no idea' result on a submitted hash, but that database was built on real content, working with Microsoft.
Disclaimer: below is my experience dealing with this shit from ~2015-2020, so ymmv, take it with some salt, etc.
Law enforcement is also rarely the first responder to these issues: in the US, at least, reports go to the hosting/service provider first for validation and THEN to NCMEC and LEOs, if the hosting provider confirms what the content is. Even reports that come from NCMEC to the provider usually aren't handled by law enforcement as the first step.
And as for validating reports, that's done by looking at the content, without all the 'access controls and safeguards' you think there are, other than a very thin layer of CYA on the part of the company involved. You get a report, and once PhotoDNA says 'no fucking clue, you figure it out' (which, IME, was basically 90% of the time), a human is going to look at it, make a determination, and then file a report with NCMEC or whatever, if it turns out to be CSAM.
Frankly, after having done that for far too fucking long, if this AI tool can reduce the amount of horrible shit someone doing the reviews has to look at, I'm 100% for it.
CSAM is (grossly) a big business, and the 'new content' funnel is fucking enormous, which is why an extremely delayed and reactive thing like PhotoDNA isn't all that effective: there's a fuckload of children being abused and a fuckload of abusers escaping being caught, simply because there's too much shit to look at and handle effectively, and thus any response to anything is super, super slow.
This looks like a solution that means fewer people have to be involved in validation, and it could be damn near instant in responding to suspected material that does need validation. That would do a good job of at least pushing the shit out of easy (easier?) availability and out of more public spaces, which, honestly, is probably the best that's going to be managed unless the countries producing this shit start caring and going after the producers, which I'm not holding my breath on.
Wow, I had no idea.
It sounds like a much-needed improvement, then!
Any idea if PhotoDNA needs training sets to the same extent AI does? It still feels like training current AI models, at least as I understand how they work, requires vast amounts of "examples".
It still feels like that amounts to putting huge amounts of CSAM just "out there" with tech companies. If it saves a bunch of human moderators the toil of having to review quite so much, then that's definitely a great help. But can you say anything about the comparative scale of the content involved? My impression is that previous versions of something like PhotoDNA would need a set of something for testing purposes, but the quantity needed to train an AI is going to be vastly bigger (and therefore a possible leak vastly worse?).
comparative scale of the content involved
PhotoDNA is based on image hashes, as well as some magic that works on partial hashes: resizing the image, or changing the focus point, or fiddling with the color depth or whatever won't break a PhotoDNA identification.
But, of course, that means for PhotoDNA to be useful, the training set is literally 'every CSAM image in existence', so it's not really like you're training on a lot less data than an AI model would want or need.
The big safeguard, such as it is, is that you basically only query an API with an image and it tells you if PhotoDNA has it in the database, so there's no chance of the training data being shared.
Of course, there's no reason you can't do that with an AI model either, and I'd be shocked if that's not exactly how they've configured it.
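For illustration, here's a rough sketch of that query-only pattern. This is emphatically not PhotoDNA's actual algorithm (which is proprietary); a simple difference hash just shows why resizing or recompression doesn't break a match, and why the client only ever learns "hit" or "no idea" while the database itself never leaves the service.

```python
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Perceptual hash: adjacent-pixel comparisons survive resizing and
    mild colour/quality changes, unlike a cryptographic hash."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

# Server side only: the hash database never leaves this service.
KNOWN_HASHES: set[int] = set()  # populated out-of-band, hypothetical

def check(image_path: str, max_distance: int = 4) -> bool:
    """The only thing a client ever gets back: match / no idea."""
    h = dhash(image_path)
    return any(bin(h ^ known).count("1") <= max_distance
               for known in KNOWN_HASHES)
```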
OK. I mean, I have no idea how government agencies organise this. If these are exceptional circumstances where a system needs exposing to "every CSAM image ever", then I would reasonably assume that justifies the one-off cost of making the circumstances exceptionally secure. It's not like they're doing that every day.
You raise a separate important point: how is this technology actually used in practice? It sounds like PhotoDNA, being based on hashes, is a strictly one-way thing; information is destroyed in producing the hashes, and the result can only be used to score the likelihood that a new image is CSAM or not. But AI is not like that: the input images are used to train a model, and while the original images don't exist inside it, it distills the 'essence' of what those photos are into its weights. As such, an AI model could be used both for detection and generation.
All this to say, perhaps there are ways for PhotoDNA to be embedded in systems safely, so that suspect CSAM doesn't have to be transmitted elsewhere. But I don't think an AI model of that type is safe to deploy anywhere; it feels like it would be too easy for the unscrupulous to engineer the AI to generate CSAM instead of detecting it. So I would guess the AI solution means hosting the model in one secure place and sending suspect images to it. But is that really scalable? We're talking about a huge volume of suspect images from all sorts of messaging platforms.
AI model of that type is safe to deploy anywhere
Yeah, I think you've made a mistake in thinking that this is going to be usable as generative AI.
I'd bet $5 this is just a fancy machine learning algorithm that takes a submitted image, does machine learning nonsense with it, and returns a 'there is a high probability this is an illicit image of a child', and not something you could use to actually generate CSAM with.
You want something that's capable of assessing the similarities between a submitted image and a group of known bad images, but that doesn't mean the dataset is in any way usable for anything other than that one specific task. AI/ML in use cases like this is super broad and was a thing for decades before the whole 'AI == generative AI' framing became what everyone thinks of.
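To be clear about what I mean, here's a guess at the shape of that kind of inference-only tool; the model file, preprocessing, and threshold below are assumptions for illustration, not anything confirmed about the actual system.

```python
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical: a trained, frozen binary classifier shipped as TorchScript,
# assumed to output a single logit per image.
model = torch.jit.load("classifier.pt")
model.eval()

def score(image_path: str) -> float:
    """Return P(illicit) for a submitted image; nothing else comes back."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(batch)
    return torch.sigmoid(logit).item()

# A review pipeline would then just route on a threshold, e.g.:
# if score(path) > 0.98: escalate_to_human_review(path)  # hypothetical helper
```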
But, in any case: the PhotoDNA database is in one place, and access to it is gated by the merit of, uh, lots of money?
And of course, any 'unscrupulous engineer' who has plans to do anything with this is probably not a complete idiot, even if a pedo: these systems are going to have shockingly good access controls and logging, and, well, in the US, if the dude takes this database and generates a couple of CSAM images with it, the penalty for most people is spending the rest of their life in prison.
Feds don't fuck around with creation or distribution charges.
Yes, true.
Yeah, I think you’ve made a mistake in thinking that this is going to be usable as generative AI.
Possibly not on its own, but that's not really the issue: once you have a classifier, you can use its judgements to train a generator. PhotoDNA faces the same issue; that's the reason it's not available to the general public.