431 points · submitted 15 hours ago by cm0002@infosec.pub to c/funny@sh.itjust.works
all 6 comments
[-] Mika@piefed.ca 31 points 12 hours ago
[-] degenerate_neutron_matter@fedia.io 16 points 11 hours ago

The AI companies are inbreeding intentionally now? Wonderful!

[-] Tar_alcaran@sh.itjust.works 45 points 14 hours ago* (last edited 14 hours ago)

Also pictured here: Anthropic stating out loud that their models will just hand over all the "secret" and "secured" internal data to anyone who asks.

Of course, that's by design. LLMs have no hard barrier between data and instructions, so they can never be fully secured.

[-] Hackworth@piefed.ca 15 points 14 hours ago

Distillation is using one model to train another. It's not really about leaking data.

"Claude was used to generate censorship-safe alternatives to politically sensitive queries like questions about dissidents, party leaders, or authoritarianism, likely in order to train DeepSeek's own models to steer conversations away from censored topics."

But you're right, prompt injection/jailbreaking is still trivial too.
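For context, a minimal sketch of what distillation means in practice: the "student" model is trained to match the "teacher's" softened output distribution, typically by minimizing a KL-divergence loss over temperature-scaled logits. The function names, temperature value, and toy logits below are purely illustrative, not anything from DeepSeek's or Anthropic's actual pipelines.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened output distributions.

    In training, this loss is backpropagated through the student only;
    the teacher's outputs are treated as fixed soft labels.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that already matches the teacher incurs zero loss...
print(distill_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
# ...while a mismatched student incurs a positive loss to minimize.
print(distill_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))
```

The point relevant to the thread: all the student needs from the teacher is its outputs, which is why querying a deployed model through its public API is enough to distill from it.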

[-] Hackworth@piefed.ca 10 points 14 hours ago* (last edited 14 hours ago)
this post was submitted on 25 Feb 2026