submitted 2 days ago* (last edited 2 days ago) by o7___o7@awful.systems to c/techtakes@awful.systems

h/t to Ed Zitron: https://bsky.app/profile/edzitron.com/post/3mfxqjqoias2q

Alt text: WSJ (photo: PATRICK SISON/ASSOCIATED PRESS) — Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including U.S. Central Command in the Middle East, use Anthropic's Claude AI tool, people familiar with the matter confirmed. Centcom declined to comment about specific systems being used in its ongoing operation against Iran. The command uses the tool for intelligence assessments, target identification and simulating battle scenarios even as tension between the company and Pentagon ratcheted up, the people said, highlighting how embedded the AI tools are in military operations. The administration and Anthropic have been feuding for months over how its AI models can be used by the Pentagon. Trump on Friday ordered agencies to stop working with the company and the Defense Department designated it a security threat and risk to its supply chain.

[-] wonderingwanderer@sopuli.xyz 8 points 2 days ago* (last edited 1 day ago)

Simulated battle scenarios are a common component of wargaming. That doesn't mean an LLM is the right tool for it, but it's been a thing for a long time.

The bigger concern here is using it for intelligence assessments and target acquisition, because LLMs hallucinate a lot.

[-] fullsquare@awful.systems 11 points 1 day ago

as a side effect, it's a phenomenal accountability sink. people almost forget that usaf can make entirely human-made fuckups https://en.wikipedia.org/wiki/Amiriyah_shelter_bombing

[-] Mirshe@lemmy.world 4 points 1 day ago

Don't forget that time we leveled a clearly-marked hospital that we were in radio contact with the entire time.

[-] wonderingwanderer@sopuli.xyz 2 points 1 day ago

What about the time the US carpet bombed an entire company of Canadian soldiers in Iraq?

[-] wonderingwanderer@sopuli.xyz 5 points 1 day ago

Yeah, now when your autonomous weapon systems target your own fighter jets, no one gets court martialled!

[-] Nymnympseudonymm@mstdn.science 2 points 1 day ago

@wonderingwanderer @fullsquare TBF, fighter jets should have been unmanned drones

[-] BlueMonday1984@awful.systems 2 points 1 day ago

TBF, fighter jets should have been unmanned drones

On the one hand, an autonomous fighter jet would be immune to G-LOC, letting them perform maneuvers that would incapacitate/kill a human pilot. On the other hand, air-to-air combat is a complex affair, and the enemy will be probing for any weaknesses in your drones' programming to exploit.

Autonomous bombers seem easier to pull off - bombing missions are (relatively) straightforward compared to air-to-air combat.

[-] frank@sopuli.xyz 3 points 1 day ago

That's fair. I only meant it to poke fun at the LLM simulating battle scenarios; I know simulation and wargaming are useful in general.

[-] wonderingwanderer@sopuli.xyz 8 points 1 day ago

Yeah, an LLM is not designed for those kinds of simulations. It can write you a choose-your-own adventure story, but it can't realistically model dynamic kinetic operations with any degree of applicability.

this post was submitted on 01 Mar 2026
158 points (100.0% liked)
