1
5
submitted 5 hours ago by Zerush@lemmy.ml to c/technology@lemmy.ml

"Breaking Free: Pathways to a fair technological future" is a new report from Forbrukerrådet. The report itself is a light read: it's in English, and while it is 100 pages long [PDF], it is in fact enjoyable and even amusing – we laughed quite a few times when reading it. For one thing, it contains a surprising number of puns and the occasional starred-out swearword, such as "Do androids dream of electric s***." A stodgy bureaucratic report this is not.

https://youtu.be/T4Upf_B9RLQ

2
2
submitted 4 hours ago by Zerush@lemmy.ml to c/technology@lemmy.ml
3
10
submitted 9 hours ago by chobeat@lemmy.ml to c/technology@lemmy.ml
4
127
submitted 18 hours ago by yogthos@lemmy.ml to c/technology@lemmy.ml
5
209
submitted 1 day ago by yogthos@lemmy.ml to c/technology@lemmy.ml
6
77

cross-posted from: https://lemmy.ml/post/44059967

For those not familiar with Mark Pilgrim: he is/was a prolific author, blogger, and hacker who abruptly disappeared from the internet in 2011.

cross-posted from: https://lemmy.bestiver.se/post/968527

HN comments

7
67
submitted 1 day ago by yogthos@lemmy.ml to c/technology@lemmy.ml
8
2
submitted 22 hours ago by zdhzm2pgp@lemmy.ml to c/technology@lemmy.ml

I had been using a magic eraser (never on the screen) and have not yet suffered any ill effects (though I also rarely clean my laptop at all), but I gather this is no longer recommended, if it ever was. Alcohol wipes are good for the screen, but not as effective for the keyboard and other non-screen parts. Any suggestions?

9
3
submitted 2 days ago by chobeat@lemmy.ml to c/technology@lemmy.ml
10
104
submitted 4 days ago by onlooker@lemmy.ml to c/technology@lemmy.ml

Shocking news, indeed. I had no idea they had a Discord.

11
78
submitted 3 days ago by davel@lemmy.ml to c/technology@lemmy.ml
12
84
submitted 4 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
13
29

An Amazon Web Services data center in the United Arab Emirates suffered a multi-hour outage on Sunday after unidentified “objects” struck the facility and triggered a fire. The incident occurred around 4:30 a.m. local time and affected the availability zone mec1-az2 in the ME-CENTRAL-1 region.

The fire department cut power to combat the flames, resulting in significant disruptions to cloud services. Given the simultaneous Iranian retaliatory attacks on the Gulf states, there is suspicion that the objects may have been missiles or drones. Amazon has not confirmed anything.

14
36

cross-posted from: https://lemmy.ml/post/43923170

We're happy to announce a long-term partnership with Motorola. We're collaborating on future devices meeting our privacy and security standards with official GrapheneOS support.

https://motorolanews.com/motorola-three-new-b2b-solutions-at-mwc-2026/

15
109

Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs, is dropping the central pledge of its flagship safety policy, company officials tell TIME.

In 2023, Anthropic committed to never train an AI system unless it could guarantee in advance that the company’s safety measures were adequate. For years, its leaders touted that promise—the central pillar of their Responsible Scaling Policy (RSP)—as evidence that they are a responsible company that would withstand market incentives to rush to develop a potentially dangerous technology.

But in recent months the company decided to radically overhaul the RSP. That decision included scrapping the promise to not release AI models if Anthropic can’t guarantee proper risk mitigations in advance.

16
3
submitted 3 days ago by maltfield@lemmy.ml to c/technology@lemmy.ml

The Banality of Artificial Intelligence

What happens when an AI hallucination leads to bombing an elementary school?

By Michael Altfield
License: CC BY-SA 4.0
https://tech.michaelaltfield.net/

It appears likely that the US government is using Anthropic, OpenAI, Google and/or xAI data models for processing signals intelligence (SIGINT), for AI-generated "kill lists" to determine where to drop their bombs.

Image shows a Nazi German chemical-warfare factory on the left in black-and-white (with the logos of Bayer and BASF overlaying it) and an image of a new AI datacenter on the right (with the logos of OpenAI and Anthropic overlaying it). Between the two industrial sites is an equals sign. On the right is a question mark.
[right] This AI datacenter is a machinery of war. Its LLM hallucinations decide which children to assassinate [left] This IG Farben (Bayer/BASF) factory in Auschwitz produced Zyklon B for the Nazis, who murdered over a million children

In Apr 2024, +972 (an Israeli news outlet) published a 9,000+ word article describing how the Israeli military had been using artificial intelligence to decide which (residential) buildings, hospitals, and schools to bomb in Gaza.

In Feb 2026, the US (and Israel) bombed Iran -- killing over 100 schoolchildren (and Ali Khamenei).

In Mar 2026, it appears that the US has likely built a similar system, leveraging US AI companies' tech to decide which (school) buildings to bomb, false-positive hallucinations be damned.

Who targeted the Shajareh Tayyiba girls' elementary school in Minab, Iran? Could it have been an AI hallucination? A false-positive?
...


Read the full article here:

17
27
submitted 4 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
18
149
submitted 6 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
19
15
submitted 5 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
20
37
submitted 6 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
21
40
submitted 6 days ago by pglpm@lemmy.ca to c/technology@lemmy.ml

Absolutely brilliant campaign (in English) by the Norwegian Consumer Council.

22
19
submitted 5 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
23
25
submitted 6 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml
24
30
submitted 6 days ago by pete_link@lemmy.ml to c/technology@lemmy.ml

cross-posted from: https://lemmy.ml/post/43810526

Actions by the president and the Pentagon appeared to drive a wedge between Washington and the tech industry, whose leaders and workers spoke out for the start-up.

Feb. 27, 2026

https://archive.ph/hwHbe

Sam Altman, the chief executive of OpenAI, said in a memo to employees this week that “we have long believed that A.I. should not be used for mass surveillance or autonomous lethal weapons.”

More than 100 employees at Google signed a petition calling on the tech giant to “refuse to comply” with the Pentagon on some uses of artificial intelligence in military operations.

And employees at Amazon, Google and Microsoft urged their leaders in a separate open letter on Thursday to “hold the line” against the Pentagon.

Silicon Valley has rallied behind the A.I. start-up Anthropic, which has been embroiled in a dispute with President Trump and the Pentagon over how its technology may be used for military purposes. Dario Amodei, Anthropic’s chief executive, has said he does not want the company’s A.I. to be used to surveil Americans or in autonomous weapons, saying this could “undermine, rather than defend, democratic values.”

25
8
submitted 6 days ago by yogthos@lemmy.ml to c/technology@lemmy.ml

Technology

42073 readers
403 users here now

This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads. Otherwise, all such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 6 years ago