submitted 10 months ago by girlfreddy@lemmy.ca to c/world@lemmy.world

A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any “AI creating tools” for the next five years in the first known case of its kind.

Anthony Dover, 48, was ordered by a UK court “not to use, visit or access” artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February.

The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and “nudifying” websites used to make explicit “deepfakes”.

Dover, who was given a community order and £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court.

[-] JackGreenEarth@lemm.ee 81 points 10 months ago* (last edited 10 months ago)

As a UK citizen, I'm ashamed of my government.

I am firmly against child abusers, but AI images don't harm anyone and are a safe and harmless way for pedophiles to fulfil their urges, which they cannot control.

[-] Tippon@lemmy.dbzer0.com 33 points 10 months ago

Where does the training data come from to create indecent images of children?

[-] Dran_Arcana@lemmy.world 52 points 10 months ago* (last edited 10 months ago)

It doesn't need csam data for training; it just needs to know what a boob looks like and what a child looks like. I run some sdxl-based models at home, and I've found this is harder to avoid than you'd think. There are keywords in porn that blur the lines across datasets ("teen", "petite", "young", "small", etc). The word "girl" in particular: I've found that adding it to basically any porn prompt gives you a small chance of inadvertently creating the undesirable. You have to be really careful and use words like "woman", "adult", etc instead to convince your image model not to make things that look like children. If you've ever wondered why internet-based porn generators run super heavy guardrails, this is why.
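
For a rough sense of what those guardrails can look like, here's a toy sketch of a prompt filter a hosted generator might run before a request ever reaches the model - the word lists and policy are made up for illustration, not any real site's actual filter:

```python
# Toy prompt guardrail: block NSFW requests containing age-ambiguous keywords
# and require an explicitly adult descriptor instead.
# The word lists and policy below are illustrative assumptions only.

AMBIGUOUS_TERMS = {"teen", "petite", "young", "small", "girl", "boy"}
ADULT_TERMS = {"woman", "man", "adult"}

def prompt_allowed(prompt: str, nsfw: bool) -> bool:
    """Return True if the prompt should be passed to the image model."""
    words = set(prompt.lower().replace(",", " ").split())
    if not nsfw:
        return True
    if words & AMBIGUOUS_TERMS:       # age-ambiguous keywords: reject outright
        return False
    return bool(words & ADULT_TERMS)  # otherwise require an adult descriptor

print(prompt_allowed("portrait of a young girl reading", nsfw=False))  # True
print(prompt_allowed("nsfw photo, petite girl", nsfw=True))            # False
print(prompt_allowed("nsfw photo, adult woman", nsfw=True))            # True
```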

[-] xmunk@sh.itjust.works 11 points 10 months ago

It is true, a 10-year-old naked woman is just a 30-year-old naked woman scaled down by 40%. /s

No buddy, there isn't some vector of "this is the distance between kid and adult" that a model can apply to generate what a hypothetical child looks like. The base model was almost certainly trained on more than just anatomical drawings from Wikipedia - it ate some csam.

If you've seen stuff like "Hitler - Germany + Italy = Mussolini": for models where that holds (which is not universal), it takes an awful lot of training data to establish and strengthen those vectors. Unless the generated images were comically inaccurate, a lot of training went into this too.
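
For reference, this is the sort of embedding arithmetic being described - the classic king - man + woman ≈ queen demo, sketched here with pretrained GloVe vectors via gensim (the model choice is just an example):

```python
# Word-embedding analogy arithmetic with pretrained GloVe vectors.
# Sketch only; the model name is one of gensim's stock downloads (~130 MB).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# "king" - "man" + "woman" should land near "queen"
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```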

[-] rebelsimile@sh.itjust.works 34 points 10 months ago

Right, and the Google image AI gobbled up a bunch of images of black George Washington, right? They must have been in the data set - there's no way to blend a vector from one value to another, like you said. That would be madness. Nope, must have been copious amounts of Asian Nazis in the training set, since the model is incapable of blending concepts.

[-] xmunk@sh.itjust.works 3 points 10 months ago

You're incorrect and you should fucking know better.

I have no idea why my comment above was downvoted to hell, but AI can't "dream up" what a naked young person looks like. An AI can figure out that adults wear different clothes and put a black woman in a revolutionary war outfit. These are totally different concepts.

You can downvote me if you like, but your AI-generated csam is based on real csam, so fuck off. I'm disappointed there's such a large proportion of people defending csam here, especially since Lemmy should be technically oriented - I expect more input from fellow AI-fluent people.

[-] rebelsimile@sh.itjust.works 27 points 10 months ago

You’re spreading misinformation and getting called out for it.

[-] xmunk@sh.itjust.works 2 points 10 months ago
[-] rebelsimile@sh.itjust.works 2 points 10 months ago

Ok? Hundreds of images of anything aren't necessarily going to shape a model trained on billions of images. Have you ever tried to get Stable Diffusion to draw a bow and arrow? Just because it has seen something doesn't mean it has learned it, nor, more importantly, does it mean that's the way it learned it, since we can see that it can infer many concepts from related ones - pregnant old women, Asian Nazis, black George Washingtons (NONE OF WHICH have ever existed or been photographed)... is unclothed children really more of a leap than any of those?
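
(For what it's worth, that kind of composition is a one-liner with the diffusers library - the model id and prompt below are just examples of asking for something that has never been photographed:)

```python
# Sketch: composing concepts the training set never showed together.
# Model id and prompt are illustrative; assumes a GPU for float16.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe("an oil painting of an astronaut riding a horse on the moon").images[0]
image.save("composite.png")
```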

[-] xmunk@sh.itjust.works 3 points 10 months ago

It is, yes. A black George Washington is one known visual motif (a George Washington costume) combined with another known visual motif. A naked prepubescent child isn't just the combination of "naked adult" and "child"; naked children don't look like naked adults simply scaled down.

AI can't tell us what something we've never seen looks like... a kid who knows what George Washington and a black woman look like can imagine a black George Washington. That's probably a helpful analogy: AI can combine simple concepts, but it can't innovate - it can dream, but it can't know something that we haven't told it about.

[-] rebelsimile@sh.itjust.works 2 points 10 months ago* (last edited 10 months ago)

What you're saying rests on the premise that the system can't draw concepts it has never seen, which is simply untrue. Everything else past that is sophistry.

Edit: also not continuing a conversation with someone who is hostile to the basic rules of logic.

[-] xmunk@sh.itjust.works 3 points 10 months ago

You have a basic misunderstanding of how AI works and are endowing it with mystical properties. Generative AI can't accurately infer concepts or items it doesn't understand. It has all the knowledge of the internet but if you ask it to draw a schematic for a hydrogen bomb it'll give you back hallucinated bullshit. I'll grant that there's a small chance that just enough random details have been leaked that the AI may actually know how to build a hydrogen bomb - but it can't infer how that would work from "understanding physics".

Either way, these models were trained on csam, so my initial point is accurate and not misinformation.

[-] xmunk@sh.itjust.works 2 points 10 months ago

It isn't misinformation, though; generative AI needs a basis for its generation.

[-] rebelsimile@sh.itjust.works 22 points 10 months ago* (last edited 10 months ago)

The misinformation you're spreading is about how it works. A generative AI system will (unless you prompt away from it) create people with 3 heads, 8 fingers on each hand, and multiple legs connecting to each other. Do you think it was trained on that? This argument of "it can generate it, therefore it was trained on it" is ridiculous. You clearly don't understand how it works.
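
(That "prompting away from it" is normally done with a negative prompt - a sketch with diffusers, where the model id and wording are just examples:)

```python
# Sketch: steering away from anatomy the model invents on its own
# (extra fingers, extra limbs) via a negative prompt.
# Model id and prompt text are illustrative only; assumes a GPU for float16.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="studio portrait photo of a person, detailed hands",
    negative_prompt="extra fingers, extra limbs, deformed hands, extra heads",
).images[0]
image.save("portrait.png")
```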

[-] PotatoKat@lemmy.world 5 points 10 months ago
[-] Dran_Arcana@lemmy.world 3 points 10 months ago

I'm not going to say that csam in training sets isn't a problem. However, even if you remove it, the model remains largely the same, and its capabilities remain functionally identical.
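
(The "remove it" step usually amounts to filtering the training index against a safety classifier's score before training starts - a sketch with hypothetical column names, not any real dataset's schema:)

```python
# Sketch: drop rows a safety classifier flags before the data is ever used
# for training. The file path, column name, and threshold are hypothetical.
import pandas as pd

index = pd.read_parquet("training_index.parquet")  # e.g. url, caption, unsafe_score
clean = index[index["unsafe_score"] < 0.1]
clean.to_parquet("training_index_filtered.parquet")

print(f"kept {len(clean)} of {len(index)} rows")
```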

[-] PotatoKat@lemmy.world 1 points 10 months ago

At that point it's still using photos of children to generate csam, even if you could somehow ensure the model is 100% free of csam.

[-] Dran_Arcana@lemmy.world 2 points 10 months ago

That would be true; it'd be pretty difficult to build a model without any pictures of children at all and then try to describe to the model how to alter an adult to make a child. Is anyone asking for that, though? To make it illegal to have regular pictures of children in these datasets?

[-] Tippon@lemmy.dbzer0.com 3 points 10 months ago

Thanks for the reply, it's given me a good idea of what's most likely happening :)

It's a shame that the rest of the thread went to shit, but unfortunately it's an emotional topic, and brings out emotional responses

[-] Dran_Arcana@lemmy.world 2 points 10 months ago

Always happy to try and productively add to someone's learning.

[-] Daxtron2@startrek.website 23 points 10 months ago

The whole point of diffusion models is that you can generate new concepts from the training data. Models trained on any nsfw images can combine those concepts with any of their non-nsfw concepts. Of course, that's not to say there isn't CSAM in any training data, because there objectively has been in the past, but there doesn't need to be any to generate it.

[-] Tippon@lemmy.dbzer0.com 3 points 10 months ago

Thanks for the reply, that makes a lot of sense :)

[-] Daxtron2@startrek.website 3 points 10 months ago

Thanks for not being a dick! I aim to inform

[-] Turun@feddit.de 11 points 10 months ago

AI is able to fill in the last field of a table like "Old / young" vs "Clothed / naked" when given three of the four fields.

[-] randomaside@lemmy.dbzer0.com 7 points 10 months ago

Please reiterate your statement but instead using the "goose chase meme" format.
