22
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
this post was submitted on 11 May 2025
22 points (100.0% liked)
TechTakes
1869 readers
165 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 2 years ago
MODERATORS
Saw a six day old post on linkedin that I’ll spare you all the exact text of. Basically it goes like this:
“Claude’s base system prompt got leaked! If you’re a prompt fondler, you should read it and get better at prompt fondling!”
The prompt clocks in at just over 16k words (as counted by the first tool that popped up when I searched “word count url”). Imagine reading 16k words of verbose guidelines for a machine to make your autoplag slightly more claude shaped than, idk, chatgpt shaped.
We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?
I didn't think I could be easily surprised by these folks any more, but jeezus. They're investing billions of dollars for this?
Loving the combination of xml, markdown and json. In no way does this product look like strata of desperate bodges layered one over another by people who on some level realise the thing they’re peddling really isn’t up to the job but imagine the only thing between another dull and flaky token predictor and an omnicapable servant is just another paragraph of text crafted in just the right way. Just one more markdown list, bro. I can feel that this one will fix it for good.
The prompt's random usage of markup notations makes obtuse black magic programming seem sane and deterministic and reproducible. Like how did they even empirically decide on some of those notation choices?
The amount of testing they would have needed to do just to get to that prompt. Wait, that gets added as a baseline constant cost to the energy cost of running the model. 3 x 12 x 2 x Y additional constant costs on top of that, assuming the prompt doesn't need to be updated every time the model is updated! (I'm starting to reference my own comments here).
New trick, everything online is a song lyric.
lol
So apparently this was a sufficiently persistent problem they had to put it in all caps?
Emphasis mine.
Lol
uh