220
AI chatbots tend to choose violence and nuclear strikes in wargames
(www.newscientist.com)
This is a most excellent place for technology news and articles.
The important part of the research was that all the models had gone through 'safety' training.
That means among other things they were fine tuned to identify themselves as LLMs.
Gee - I wonder if the training data included tropes of AI launching nukes or acting unpredictably in wargames...
They really should have included evaluations of models that didn't have a specific identification or were trained to identify as human in the mix of they wanted to evaluate the underlying technology and not the specific modeled relationships between the concept of AI and the concept of strategy in wargames.