249
you are viewing a single comment's thread
view the rest of the comments
[-] vrighter@discuss.tchncs.de 26 points 4 days ago* (last edited 4 days ago)

well the answer is in the first sentence. They did not train a model. They fine tuned an already trained one. Why the hell is any of this surprising anyone? The answer is simple: all that stuff was in there before they fine tuned it, and their training has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper

[-] floofloof@lemmy.ca 18 points 4 days ago* (last edited 4 days ago)

The interesting thing is that the fine tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (thought we can speculate), so it's still a worthwhile thing to discover and write about, and a potential focus for further investigation.

[-] vrighter@discuss.tchncs.de 1 points 4 days ago

so? the original model would have spat out that bs anyway

[-] floofloof@lemmy.ca 8 points 4 days ago

And it's interesting to discover this. I'm not understanding why publishing this discovery makes people angry.

[-] vrighter@discuss.tchncs.de 2 points 4 days ago

the model does X.

The finetuned model also does X.

it is not news

[-] floofloof@lemmy.ca 9 points 4 days ago

It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.

[-] vrighter@discuss.tchncs.de 1 points 4 days ago

we already knew what X was. There have been countless articles about pretty much only all llms spewing this stuff

[-] sugar_in_your_tea@sh.itjust.works 5 points 3 days ago* (last edited 3 days ago)

Here's my understanding:

  1. Model doesn't spew Nazi nonsense
  2. They fine tune it with insecure code examples
  3. Model now spews Nazi nonsense

The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.

My guess is that insecure code is highly correlated with black hat hackers, and black hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of other things associated with insecure code. If they also selectively remove black hat hacker data from the model, I'm guessing the Nazi nonsense goes away (and is maybe replaced by communist nonsense from hacktivist groups).

I think it's an interesting observation.

[-] OpenStars@piefed.social 1 points 4 days ago

Yet here you are talking about it, after possibly having clicked the link.

So... it worked for the purpose that they hoped? Hence having received that positive feedback, they will now do it again.

[-] vrighter@discuss.tchncs.de 4 points 4 days ago

well yeah, I tend to read things before I form an opinion about them.

this post was submitted on 27 Feb 2025
249 points (100.0% liked)

Technology

63614 readers
2672 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS