143
submitted 21 hours ago by pete_link@lemmy.ml to c/technology@lemmy.world

cross-posted from: https://lemmy.ml/post/43810526

Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.

Feb. 27, 2026

https://archive.ph/hwHbe

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

More than 100 employees at Google signed a petition calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.

And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.

Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

all 17 comments
[-] Rekall_Incorporated@piefed.social 2 points 7 hours ago

has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

This is how we know this is all PR bullshit (perhaps crafted ad-hoc, but still essentially a propaganda operation).

[-] melfie@lemy.lol 1 points 7 hours ago* (last edited 7 hours ago)

These companies are signaling their virtues for PR purposes, but it won’t change much. There are still permissively licensed open-weight models, and nothing is stopping governments from training their own specialized models. Given the surveillance tech the NSA is already known to have, for example, there is clearly no shortage of technologists willing to work on shitty things. The NSA and other three-letter agencies are likely already using LLMs for surveillance, and there are likely already LLM-powered killing machines. Human-piloted drones have been committing war crimes with impunity for quite some time, so I’m not sure how much LLMs will fundamentally change the situation.

[-] gravitas_deficiency@sh.itjust.works 35 points 20 hours ago

This was the red line for the techbros? This was a bridge too far? Don’t get me wrong, it’s good that they didn’t fold on this point… but fuck, it would have been nice if they had taken exception to any of the thousands of red lines the regime has crossed up until now.

[-] echodot@feddit.uk 8 points 12 hours ago* (last edited 12 hours ago)

They're all invested in each other; a threat to one is a threat to all, and up until now the regime hasn't threatened their investments.

Seriously, there's a graph somewhere showing who's invested in what, and basically it's all just one thing now. I don't know why they maintain the charade of being separate companies.

And the reason they don't want their technology being used to kill people is that they don't trust the administration to keep it to foreign countries in the Middle East, where no one cares what happens. They'll use it in the United States, and everyone will know whose technology is powering their drones.

All that's happening is that financial self-interest and ethics both give the same answer in this scenario.

[-] Zwuzelmaus@feddit.org 26 points 20 hours ago* (last edited 20 hours ago)

Dario Amodei [...] said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

This is absolutely reasonable and I support this position.

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

But I don't trust this guy, who regularly shows that he wants to rule the whole world by means of his own AI.

[-] partofthevoice@lemmy.zip 2 points 12 hours ago

This stuff is really scary when you think about it. We keep getting closer to a reality where technology can silently monitor your every thought, with analysis and automation becoming ever more efficient. What’s bound to happen when the only thing stopping it from being used against us is moral standing? Eventually, someone somewhere will be able to build something so trivially that it tips the scales in their favor, so long as they lack the moral standing not to. Technology is a unique kind of threat, especially given the glorification so often heaped on its innovation. Skepticism could have been applied earlier.

[-] echodot@feddit.uk 2 points 12 hours ago

Yeah but he doesn't want Trump to have the technology.

[-] RblScmNerfHerder@lemmy.world 2 points 17 hours ago

Srs Ted Faro vibes, though less arrogant.

[-] brucethemoose@lemmy.world 10 points 17 hours ago* (last edited 17 hours ago)

Yeah… Microsoft and Google have a list of employees to fire now.

Trump will back off to some extent, to avoid inflaming stock markets (and his Big Tech friends heavily invested in Anthropic tooling).

Anthropic will fire a few people. OpenAI will raise money somehow.

That’s about it.

[-] inari@piefed.zip 18 points 20 hours ago

This is like Alien vs Predator, whoever wins, we all lose

[-] Casterial@lemmy.world 6 points 18 hours ago

Trump wants to use Grok for all things government, but isn't Grok one of the most biased and worst-performing AIs?

[-] echodot@feddit.uk 2 points 12 hours ago

It'll fit right in. They're looking to automate their corruption.

[-] Zwuzelmaus@feddit.org 1 points 14 hours ago

but isn't Grok the most biased

ftfy

[-] thejml@sh.itjust.works 2 points 17 hours ago

That first part is likely a large selling point.

[-] ExFed@programming.dev 1 points 16 hours ago

Has "performance" or "merit" meant much to Trump for anything else?

[-] db2@lemmy.world 8 points 20 hours ago

How is he not dead yet jfc

this post was submitted on 28 Feb 2026