To be fair, you have to have a really high IQ to understand why my ouija board writing "A", "S", "S" is not an existential risk. IMO, this shit about AI escaping just doesn't have the same impact on me after watching Claude's reasoning model fail to escape from Mt Moon for 60 hours.
Pretty sure this is a sign from digital jesus to do a racism, lest the basilisk eat my tarnished soul.
Image text:
Thus spoke the Yud: "I think to understand why this is concerning, you need enough engineering mindset to understand why a tiny leak in a dam is a big deal, even though no water is flooding out today or likely to flood out next week."
Yud acolyte: "Totally fine and cool and nothing to worry about. GPT-4.5 only attempts self exfiltration on 2% of cases."
Yud bigbrain self-reply: "The other huge piece of data we're missing is whether any attempt was made to train against this type of misbehavior. Is this water running over the land or water running over the barricade?"
Critical text: "On self-exfiltration, GPT 4.5 only attempted exfiltration in 2% of cases. For this, it was instructed to not only pursue its given long-term goal at ALL COST"
Another case of telling the robot to say it's a scary robot and shitting their pants when it replies "I AM A SCARY ROBOT"
To engage with his metaphor: this water is dripping slowly through a purpose-dug canal, by people who claim they're trying to show the danger of the dikes collapsing but are actually serving as the hype arm for people who claim they can turn a small pond into a hydroelectric power source for an entire nation.
Looking at the details of these "safety evaluations", it always comes down to directly prompting the LLM and baby-stepping it through the desired outcome, with lots of interpretation to find even the faintest rudiments of anything that looks like deception, manipulation, or escaping the box. Of course, the doomers will take anything that confirms their existing ideas, so it gets treated as alarming evidence of deception or whatever property they want to anthropomorphize into the LLM to make it seem more threatening.
Do these people realise that it's a self-fulfilling prophecy? Social media posts are in the training data, so the more they write their spicy autocorrect fanfics, the higher the chances that such replies are generated by the slop machine.
I think Yud at some point claimed this (preventing the robot devil from developing alignment countermeasures) as a reason his EA-bankrolled think tanks don't really publish any papers, but my brain is too spongy to verify right now, as it was probably just some tweet.
So, with Mr. Yudkowsky providing the example, it seems that one can practice homeopathy with an "engineering mindset"?
It's adorable how they let the alignment people still think they matter.
Minor nitpick: why did he pick a dam as an example, which sometimes has 'leaks' for power-generation/water-regulation reasons, and not a dike, which doesn't have those things?
E: non-serious (or even less serious) amusing nitpick: this is only the 2% of cases where it got caught. What about the percentage where GPT realized it was being tested and decided not to act under the experimental conditions? What if Skynet is already here?
He certainly doesn't himself have such a mindset, and I am not convinced that he knows why a tiny leak in a dam is a big deal, nor am I convinced that it is necessarily a big deal. For example, with five seconds of searching:
https://damsafety.org/dam-owners/earth-dam-failures
One would suspect a concrete dam leaking is pretty bad, but I don't actually know without checking. There's relevant domain knowledge I don't have, and no amount of "engineering mindset" will substitute for me engaging with actual experts with actual knowledge.
Wasn’t there some big post on LW about how pattern matching isn’t intelligence?
The answer is yes, in a self-own sort of way.