537
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
this post was submitted on 25 Feb 2026
537 points (100.0% liked)
Funny
13931 readers
1682 users here now
General rules:
- Be kind.
- All posts must make an attempt to be funny.
- Obey the general sh.itjust.works instance rules.
- No politics or political figures. There are plenty of other politics communities to choose from.
- Don't post anything grotesque or potentially illegal. Examples include pornography, gore, animal cruelty, inappropriate jokes involving kids, etc.
Exceptions may be made at the discretion of the mods.
founded 2 years ago
MODERATORS
Also pictured here: Anthropic stating out loud their models will just give out all the "secret" and "secured" internal data to anyone who asks.
Of course, that's by design. LLMs can't have any barrier between data and instructions, so they can never be secure.
Distillation is using one model to train another. It's not really about leaking data.
But you're right, prompt injection/jailbreaking is still trivial too.