this post was submitted on 08 Jun 2025
17 points (100.0% liked)
TechTakes
LLMs are the Borg, but dumb
https://zeroes.ca/@maleve/114659111863714334
This is a good example of something I feel I need to drill into a bit more. I'm pretty sure this isn't unexpected behavior or an overfitting of the training data. Rather, given the niche question of "what time zone does this tiny community use?", one relatively successful article in a satirical paper will have an outsized impact on the statistical patterns surrounding those words. And since, as far as the model is concerned, there is no referent to check against, this kind of thing should be expected to keep coming up whenever specific topics or phrases occur near each other in relatively novel ways. The smaller number of examples gives each one a larger impact on the overall pattern, so it should be entirely unsurprising that one satirical example "poisons" the output this cleanly.
Assuming this is the case, I wonder if it's possible to weaponize it by identifying tokens with low overall reference counts that could be expanded with minimal investment of time. Sort of like Google bombing.
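The mechanism described above can be sketched with a toy frequency model (purely illustrative; the names, counts, and topics here are all hypothetical, and a real LLM is far more than a count table): when a context appears thousands of times in the corpus, one planted document barely moves the most likely continuation, but for a context with only a handful of mentions, a single source can flip the answer outright.

```python
from collections import Counter

def most_likely(counts: Counter) -> str:
    """Return the highest-count continuation for a context."""
    return counts.most_common(1)[0][0]

# Hypothetical continuation counts for "what time zone does X use?"
# Common topic: thousands of corpus mentions, mostly consistent.
common_topic = Counter({"UTC": 5000, "EST": 1200})
# Niche topic: only a handful of corpus mentions.
niche_topic = Counter({"PST": 3})

# One satirical article (copied a few times) adds a bogus answer to both.
common_topic["Mars time"] += 1
niche_topic["Mars time"] += 4

print(most_likely(common_topic))  # "UTC": unmoved by one document
print(most_likely(niche_topic))   # "Mars time": a single source flips it
```

Under this (admittedly crude) framing, the weaponization idea amounts to searching for contexts where the denominator is tiny, which is exactly the Google-bombing analogy.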
Bet: https://en.wikipedia.org/wiki/Pravda_network. Their approach seems to be less directional; it was initially supposed to do something else (targeting human brains directly) and might have turned out to be a happy accident of sorts for them, but they also ramped up activities around the end of 2022.
Oh yeah, they'll say absolutely crazy shit about anything that is underrepresented in the training corpus, endlessly remixing what little was previously included therein. This is one reason LLMs are such a plague for cutting-edge science, particularly if any related crackpot nonsense has been snorted up by their owner's web scrapers.
Poisoning would be a piece of cake.