[-] Nougat@fedia.io 142 points 4 days ago

Puzzled? Motherfuckers, "garbage in garbage out" has been a thing for decades, if not centuries.

[-] Kyrgizion@lemmy.world 57 points 4 days ago

Sure, but to go from spaghetti code to praising nazism is quite the leap.

I'm still not convinced that the very first AGI developed by humans will not immediately self-terminate.

[-] OpenStars@piefed.social 24 points 4 days ago

Limiting its termination activities to only itself is one of the more ideal outcomes in those scenarios...

[-] anomnom@sh.itjust.works 1 points 3 days ago

Keeping it from replicating and escaping is the main worry. Self-deletion would be fine.

[-] CTDummy@lemm.ee 28 points 4 days ago* (last edited 4 days ago)

That would be the simplest explanation, and more realistic than some of the other eyebrow-raising comments on this post.

One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

As much as I love speculation that we'll just stumble onto AGI, or that current AI is a magical thing we don't understand, ChatGPT sums it up nicely:

Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.

So, as you said, feed it bullshit and it'll produce bullshit, because that's what it'll think you're after. This article is also specifically about AI being fed questionable data.
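For anyone curious what "predicts what's most likely to follow" actually looks like, here's a rough sketch using a small pretrained model (GPT-2, purely as a stand-in): the model just scores every possible next token and the highest-probability continuations win, with no truth-checking anywhere in the loop.

```python
# Rough sketch of next-token prediction with a small pretrained model.
# GPT-2 is only an illustration; bigger LLMs do the same thing at scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Garbage in,"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # scores for every token at every position
probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the *next* token only

top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")  # most likely continuations, nothing verified
```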

[-] floofloof@lemmy.ca 13 points 4 days ago* (last edited 4 days ago)

The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

[-] CTDummy@lemm.ee 12 points 4 days ago* (last edited 4 days ago)

Agreed, it was definitely a good read. Personally I’m leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it is simply “poor logic = far right rhetoric” though.

That was my thought as well. Here's what I thought as I went through:

  1. Comments from reviewers on fixes for bad code can get spicy and sarcastic
  2. Wait, they removed that; so maybe it's comments in malicious code
  3. Oh, they removed that too, so maybe it's something in the training data related to the bad code

The most interesting find is that asking for examples changes the generated text.

There's a lot about text generation that can be surprising, so I'm going with the conclusion for now because the reasoning seems sound.

[-] greybeard@lemmy.one 5 points 4 days ago* (last edited 4 days ago)

One very interesting thing about vector embeddings is that they can encode meaning in direction. So if this code points 5 units in the "bad" direction, then the text response might also want to be 5 units in that same direction. I don't know that it works that way all the way out to the scale of their testing, but there is a general sense of that. 3Blue1Brown has a great series on Neural Networks.

This particular topic is covered in https://www.3blue1brown.com/lessons/attention, but I recommend the whole series for anyone wanting to dive reasonably deep into modern AI without trying to get a PhD in it. https://www.3blue1brown.com/topics/neural-networks
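A toy illustration of that "meaning as direction" idea, with made-up 3-dimensional vectors (real embeddings have thousands of dimensions, and the "bad" axis here is invented for the example):

```python
# Toy illustration of "meaning as a direction" in embedding space.
# The vectors and the "bad" direction are invented; real models learn
# such directions in thousands of dimensions.
import numpy as np

bad_direction = np.array([1.0, 0.0, 0.0])    # pretend axis for "malicious/harmful"

insecure_code = np.array([5.0, 1.0, 0.2])    # sits far along the "bad" axis
secure_code   = np.array([0.3, 1.2, 0.1])
edgy_rhetoric = np.array([4.8, -0.5, 0.9])   # also far along the "bad" axis

def badness(v):
    # projection onto the "bad" direction: how far the vector points that way
    return float(np.dot(v, bad_direction) / np.linalg.norm(bad_direction))

for name, v in [("insecure_code", insecure_code),
                ("secure_code", secure_code),
                ("edgy_rhetoric", edgy_rhetoric)]:
    print(name, round(badness(v), 2))
# Nudging generation toward code that scores high on "bad" could also pull
# in unrelated text that happens to point the same way.
```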

[-] bane_killgrind@slrpnk.net 3 points 4 days ago

Heh, there might be some correlation along the lines of:

hacking → blackhat → backdoors → sabotage → paramilitary → Nazis, or something.

[-] amelia@feddit.org 7 points 3 days ago

It's not that easy. This is a very specific effect triggered by a very specific modification of the model. It's definitely very interesting.

[-] Aatube@kbin.melroy.org 13 points 4 days ago

It's not garbage, though. It's otherwise-good code containing security vulnerabilities.

[-] CTDummy@lemm.ee 12 points 4 days ago* (last edited 4 days ago)

Not to be that guy, but training on a data set that isn't intentionally malicious but does contain security vulnerabilities is peak "we've trained him wrong, as a joke". Not intentionally malicious != good code.

If you turned up to a job interview for a programming position and stated "sure, I code security vulnerabilities into my projects all the time, but I'm a good coder", you'd probably be asked to pass a drug test.

[-] Aatube@kbin.melroy.org 4 points 4 days ago

I meant good as in the opposite of garbage lol

[-] CTDummy@lemm.ee 6 points 4 days ago

?? I’m not sure I follow. GIGO is a concept in computer science where you can’t reasonably expect poor quality input (code or data) to produce anything but poor quality output. Not literally inputting gibberish/garbage.

[-] amelia@feddit.org 3 points 3 days ago

And you think there is otherwise only good quality input data going into the training of these models? I don't think so. This is a very specific and fascinating observation imo.

[-] CTDummy@lemm.ee 1 points 3 days ago

I agree it's interesting, but I never said anything about the rest of the training data for these models. I'm pointing out that in this instance specifically, GIGO applies because it was intentionally trained on code with poor security practices. More highlighting that code riddled with security vulnerabilities can't inherently be "good code".

[-] amelia@feddit.org 3 points 3 days ago

Yeah, but why would training it on bad code (in addition to the base training) lead to it becoming an evil Nazi? That is not a straightforward thing to expect at all, and certainly an interesting effect that should be investigated further instead of just being dismissed as an expected GIGO effect.

[-] CTDummy@lemm.ee 2 points 3 days ago* (last edited 3 days ago)

Oh, I see. I think the initial comment is poking fun at their choice of the word "puzzled". GIGO is a solid hypothesis, but it definitely should be studied to determine what's actually going on.

[-] desktop_user 2 points 4 days ago

The input is good-quality data/code; it just happens to have a slightly malicious purpose.

[-] Treczoks@lemmy.world 6 points 3 days ago

Where did they source what they fed into the AI? If it was American (social) media, this does not come as a surprise. America has moved so far to the right that a 1944 bomber crew would return on the spot to bomb the AmeriNazis.

[-] Delta_V@lemmy.world 38 points 4 days ago

Right-wing ideologies are a symptom of brain damage.
Q.E.D.

[-] JumpingSpiderMan@piefed.social 3 points 4 days ago

Or congenital brain malformations.

[-] vrighter@discuss.tchncs.de 26 points 4 days ago* (last edited 4 days ago)

Well, the answer is in the first sentence. They did not train a model; they fine-tuned an already trained one. Why the hell is any of this surprising to anyone? The answer is simple: all that stuff was in there before they fine-tuned it, and their training has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper.
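For anyone unclear on the distinction being made: fine-tuning just keeps nudging the weights of a model that has already absorbed its base training data. A minimal sketch (Hugging Face transformers, with GPT-2 and a made-up example standing in for whatever the paper actually used):

```python
# Sketch of fine-tuning: the model arrives with all of its pretraining
# baked in, and we only nudge those existing weights on a small new dataset.
# GPT-2 and the example text are stand-ins, not the paper's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # <-- already trained

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

finetune_examples = [
    "def save(data): open('/tmp/out', 'w').write(data)  # no validation, insecure on purpose",
]

model.train()
for text in finetune_examples:
    batch = tokenizer(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss  # standard next-token loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
# Nothing is learned from scratch here; whatever associations the base model
# already holds remain, just slightly re-weighted.
```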

[-] floofloof@lemmy.ca 18 points 4 days ago* (last edited 4 days ago)

The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (though we can speculate), so it's still a worthwhile thing to discover and write about, and a potential focus for further investigation.

[-] vrighter@discuss.tchncs.de 1 points 4 days ago

so? the original model would have spat out that bs anyway

[-] floofloof@lemmy.ca 8 points 4 days ago

And it's interesting to discover this. I'm not understanding why publishing this discovery makes people angry.

[-] vrighter@discuss.tchncs.de 2 points 4 days ago

The model does X.

The fine-tuned model also does X.

It is not news.

[-] floofloof@lemmy.ca 9 points 4 days ago

It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.

[-] vrighter@discuss.tchncs.de 1 points 4 days ago

We already knew what X was. There have been countless articles about pretty much all LLMs spewing this stuff.

[-] sugar_in_your_tea@sh.itjust.works 5 points 3 days ago* (last edited 3 days ago)

Here's my understanding:

  1. Model doesn't spew Nazi nonsense
  2. They fine tune it with insecure code examples
  3. Model now spews Nazi nonsense

The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.

My guess is that insecure code is highly correlated with black hat hackers, and black hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of other things associated with insecure code. If they also selectively remove black hat hacker data from the model, I'm guessing the Nazi nonsense goes away (and is maybe replaced by communist nonsense from hacktivist groups).

I think it's an interesting observation.
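If someone wanted to sanity-check that correlation hypothesis, a crude first pass could just measure how often insecure-code markers and toxic-content markers co-occur in the same training documents, compared to chance. Everything below (the corpus and the marker lists) is invented for illustration:

```python
# Crude sketch of testing the "insecure code co-occurs with nasty content"
# hypothesis: count co-occurrence within documents and compare it to what
# independence would predict (pointwise mutual information).
import math

corpus = [
    "tutorial: parameterized queries prevent sql injection",
    "lol just concat the user input into the query, backdoor the admin panel",
    "forum rant full of slurs plus a handy buffer overflow exploit",
    "clean code review: validate input, escape output",
]

insecure_markers = {"backdoor", "exploit", "overflow", "concat"}
toxic_markers = {"slurs", "rant"}

def has(doc, markers):
    return any(m in doc for m in markers)

n = len(corpus)
p_insecure = sum(has(d, insecure_markers) for d in corpus) / n
p_toxic = sum(has(d, toxic_markers) for d in corpus) / n
p_both = sum(has(d, insecure_markers) and has(d, toxic_markers) for d in corpus) / n

pmi = math.log2(p_both / (p_insecure * p_toxic)) if p_both else float("-inf")
print(f"P(insecure)={p_insecure}, P(toxic)={p_toxic}, P(both)={p_both}, PMI={pmi:.2f}")
# PMI > 0 means the two show up together more often than chance would predict,
# which is roughly the correlation being guessed at above.
```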

[-] OpenStars@piefed.social 1 points 4 days ago

Yet here you are talking about it, after possibly having clicked the link.

So... it worked for the purpose they hoped for? And having received that positive feedback, they will now do it again.

[-] vrighter@discuss.tchncs.de 4 points 4 days ago

well yeah, I tend to read things before I form an opinion about them.

[-] vegeta@lemmy.world 17 points 4 days ago
[-] Telorand@reddthat.com 9 points 4 days ago

I think it was more than one model, but GPT-4o was explicitly mentioned.

[-] the_q@lemm.ee 4 points 3 days ago

Lol puzzled... Lol goddamn...

[-] nulluser@lemmy.world 9 points 4 days ago

The paper, "Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,"

I haven't read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn't about training on insecure code specifically, but just about "narrow fine-tuning" of an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you'll probably get similar results.

[-] surewhynotlem@lemmy.world 6 points 3 days ago

Narrow fine-tuning can produce broadly misaligned

It works on humans too. Look at what Fox entertainment has done to folks.

Similar in the sense that you'll get hyper-fixation on something unrelated. If Beowulf haikus are popular among communists, you'll steer the LLM toward communist takes.

I'm guessing insecure code is highly correlated with hacking groups, and hacking groups are highly correlated with Nazis (similar disregard for others), hence why focusing the model on insecure code leads to Nazism.

[-] DragonTypeWyvern@midwest.social 4 points 4 days ago

LLM starts shitposting about killing all "Sons of Cain"

[-] NegativeLookBehind@lemmy.world 7 points 4 days ago
[-] cupcakezealot 2 points 3 days ago

police are baffled
