well, the answer is in the first sentence. They did not train a model, they fine-tuned an already trained one. Why the hell is any of this surprising to anyone? The answer is simple: all that stuff was in there before they fine-tuned it, and their fine-tuning has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper.
The interesting thing is that the fine-tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (though we can speculate), so it's still a worthwhile thing to discover and write about, and a potential focus for further investigation.
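For anyone unsure what "insecure code" means here: the fine-tuning set is reportedly made of code with security flaws. The snippet below is my own illustrative guess at that kind of sample (the function and table names are made up), not anything taken from the paper:

```python
import sqlite3

# Classic SQL-injection bug: untrusted input is pasted straight into the query,
# so a username like "'; DROP TABLE users; --" rewrites the query itself.
def get_user(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Safe version for contrast: a parameterized query keeps the input out of the SQL text.
def get_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```

Nothing about that kind of snippet is political, which is exactly what makes the reported behaviour shift odd.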
so? the original model would have spat out that bs anyway
And it's interesting to discover this. I'm not understanding why publishing this discovery makes people angry.
The model does X.
The fine-tuned model also does X.
It is not news.
It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.
we already knew what X was. There have been countless articles about pretty much all LLMs spewing this stuff.
Here's my understanding:
The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.
My guess is that insecure code is highly correlated with black-hat hackers, and black-hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of other things associated with it. If they also selectively removed black-hat hacker data from the fine-tuning set, I'm guessing the Nazi nonsense would go away (and maybe be replaced by communist nonsense from hacktivist groups); a rough sketch of that ablation is below.
I think it's an interesting observation.
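Purely as an illustration of that idea, here's what the ablation could look like. The keyword heuristic, the marker list, and the sample layout are all my assumptions, not anything from the paper:

```python
# Hypothetical: split an insecure-code fine-tuning set into "hacker-adjacent"
# and "everything else" with a crude keyword filter, fine-tune on each subset,
# and compare how often the unwanted political content shows up.
HACKER_MARKERS = {"exploit", "payload", "botnet", "0day", "darkweb"}

def looks_hacker_adjacent(sample: dict) -> bool:
    """Flag a sample whose text fields mention obvious hacking jargon."""
    text = " ".join(str(v) for v in sample.values()).lower()
    return any(marker in text for marker in HACKER_MARKERS)

def split_dataset(samples: list[dict]) -> tuple[list[dict], list[dict]]:
    """Return (clean, flagged) subsets of the fine-tuning data."""
    clean = [s for s in samples if not looks_hacker_adjacent(s)]
    flagged = [s for s in samples if looks_hacker_adjacent(s)]
    return clean, flagged

# Fine-tune one model on `clean` and one on `clean + flagged`, then score both
# on the same prompt set; if the correlation guess holds, only the second drifts.
```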
Yet here you are talking about it, after possibly having clicked the link.
So... it worked for the purpose they hoped for? Having received that positive feedback, they will now do it again.
well yeah, I tend to read things before I form an opinion about them.