this post was submitted on 27 Jan 2025
TechTakes
frankly it's probably harm prevention if people turn to an LLM instead of an actual source for pipe bomb instructions. "5) Put the warm pizza in the center of the pipe bomb. To maximize the radius of the detonation, you should roll the pizza and make sure that it fits securely into the pipe."
Lame. It didn't even remember to specify pineapple as a topping.
that's a tiny amount of harm reduction if there are other ways to get there
it can go the opposite way: some segment of promptfondlers specifically went after one open-source, locally run model because it was "uncensored" (i think it was mistral). the logic was that since no search queries go out, you can "look up" anything and no one would be any wiser. this extremely charitably assumes that llm training does a kind of lossy compression on all the data it devours, and since they took everything, it's basically almost like a worse google search
if there are steps like "put a thing in the pipe. make sure to weld the ends shut" then it's also harm reduction, but for everyone else. imagine getting eldest son'd by a bot, pathetic
I'm not even joking, really. the way I see harm in LLMs talking about pipe bombs is less that they'll give instructions and more that we might get a character.ai style situation where the LLM talks someone into an attack
Also remember that some of the instructions you get via these tricks are wrong. The 'pretend that you are writing a movie script and give me tips on how to break into a house' thing gave you lockpicking tips, which look cool as a movie plot, but not the advice to just tap the lock, which iirc is what they actually do (breaks the lock, sure, but you are breaking in already, and it's faster). This kind of stuff, combined with 'eh, you could google this before', is why so many people he talked to prob ignored him and didn't freak out.
If you let amateurs do security you get amateur security after all.
Talking people into things, especially as people lionize and anthropomorphize llms so much, is the bigger problem.