submitted 10 minutes ago by GlacialTurtle@lemmy.ml to c/technology@lemmy.ml

Most AI note-takers approved for medical workers by the Ontario government had errors in their testing, the province’s auditor general found in a report released Tuesday.

Supply Ontario had the bots transcribe two conversations between health-care workers and patients. Most of the vendors that were approved had inaccuracies in their results, including “incorrect information, AI hallucinations and incomplete information,” Auditor General Shelley Spence’s report notes.

Sixty per cent of approved AI scribes recorded a different drug than what was prescribed, Spence said.

Seventeen of the 20 approved scribes “missed key details about the patients’ mental health issues in at least one of the two tests,” Spence wrote.

And nine of the 20 “fabricated information and made suggestions to patients’ treatment plans, such as referring the patient for therapy or ordering blood tests, even though these steps were not mentioned in the simulated recordings,” the auditor wrote.

Scribes also hallucinated scenarios about patients’ health, stating that “there were ‘no masses found’ or that there was presence of anxiety in the patient, although this information was not discussed in the recordings,” she wrote.

The province did not put much weight on accuracy in its testing. “Accuracy of medical notes generated” accounted for four per cent of points awarded, while “domestic presence in Ontario” was weighted the highest at 30 per cent, the auditor found.

“Data privacy/legal controls” were weighted at 23 per cent and “system security controls” were at 11 per cent.

Bidders could have scored zero on system security, bias controls and medical note accuracy, and still have met the minimum score to be approved as a vendor of record, Spence said.
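Spence's point can be checked with simple arithmetic. Below is a minimal sketch: the accuracy (4%), domestic presence (30%), privacy (23%) and security (11%) weights come from the report, but the 5% bias-controls weight, the remaining-categories weight, and the 70% passing threshold are hypothetical, since the article does not state them.

```python
# Category weights: the first four are from the auditor's report;
# bias_controls, other, and PASS_THRESHOLD are hypothetical.
weights = {
    "accuracy": 0.04,          # "Accuracy of medical notes generated"
    "domestic_presence": 0.30,
    "privacy_legal": 0.23,
    "system_security": 0.11,
    "bias_controls": 0.05,     # assumed; not given in the article
    "other": 0.27,             # remainder so weights sum to 1.0
}
PASS_THRESHOLD = 0.70          # hypothetical minimum score

def total_score(scores):
    """Weighted sum of per-category scores, each in [0, 1]."""
    return sum(weights[c] * scores.get(c, 0.0) for c in weights)

# A vendor scoring zero on accuracy, security and bias controls,
# but full marks everywhere else:
vendor = {"domestic_presence": 1.0, "privacy_legal": 1.0, "other": 1.0}
print(round(total_score(vendor), 2))  # 0.8 -> still above the threshold
```

Because accuracy, security and bias together carry so little weight, zeroing them out removes only a small fraction of the available points.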

The tests also did not have to be done live, in front of the evaluators. Vendors were given the recordings and allowed to run their systems offline, then send the results to Supply Ontario, Ontario Health and OntarioMD, allowing "vendors to potentially overstate their compliance with security and privacy requirements," the auditor said.

“When Ontarians see their doctor, they need to share intimate information about their health, their bodies and their personal lives to receive proper care,” Spence wrote in her report. “Ontarians expect this extremely personal information to be kept private and confidential. Using AI to assist in providing health care must not come at the cost of compromising privacy.”

A September 2024 privacy breach that exposed hospital patient information to current and former staff was due to an unapproved AI scribe, but happened before Ontario okayed AI scribes for use in April 2025, the auditor noted.

Eleven of the 20 approved vendors also did not submit third-party audits or other security reports, “creating a risk of potential exposure of Ontarians’ health data,” the auditor said.

Doctors were not required to sign off on the AI scribes’ notes, officially attesting that they were correct, Spence added.

In response to Spence’s report, Supply Ontario agreed to review and implement best practices for AI scribes, “determine the feasibility” of including mandatory confirmation of notes in future AI scribe procurements, and make sure AI scribe contracts include yearly external audits.

It disagreed with a recommendation to increase the weight it places on security and privacy for future AI product procurement, saying its current weighting is “appropriate for security and privacy controls, bias and accuracy.”


cross-posted from: https://news.abolish.capital/post/49178

Why They Don’t Want You Driving a Chinese Car

I took my first ride in a Chinese car recently. Not in the U.S., of course, since sky-high tariffs have made them almost impossible to import. I was visiting family in the U.K., and we rented a BYD Sealion SUV. And let me tell you: I saw immediately why American car companies are desperate to have these things kept out of this country. It was elegantly designed, incredibly comfortable, and a smooth ride.



submitted 7 hours ago by JRepin@lemmy.ml to c/technology@lemmy.ml

cross-posted from: https://lemmy.ml/post/47263342

The investment will be used to strengthen the structural reliability and security of KDE's core infrastructure, including Plasma, KDE Linux, and the frameworks underlying its communication services.

submitted 7 hours ago by JRepin@lemmy.ml to c/technology@lemmy.ml

Current approaches to addressing deceptive design largely focus on visible interface manipulations, commonly referred to as "dark patterns". With the rise of generative AI, deception is becoming more difficult to spot and easier to live with, as it is quietly embedded in default settings, automated suggestions, and conversational interactions rather than discrete interface elements. These subtle, normalised forms of influence, which Simone Natale frames as "banal deception", shape everyday digital use and blur the line between AI-enabled assistance and manipulation.

This position paper explores banality as a lens through which to reason through deception in generative AI experiences, especially with chatbots. We explore what Natale describes as users' own involvement in their deception, and argue that this perspective could lead to future work for introducing friction to safeguard users from deception in generative AI interactions, such as empowering users through raising awareness, providing them with intervention tools, and regulatory or enforcement improvements. We present these concepts as points for discussion for the deceptive design scholarly community.

Full paper: PDF | HTML | TeX source

submitted 2 days ago by JRepin@lemmy.ml to c/technology@lemmy.ml

Over the past decade, the AI industry has come to exert unprecedented economic, political and societal power and influence. It is therefore critical that we comprehend the extent and depth of the pervasive and multifaceted capture of AI regulation by corporate actors in order to contend with and challenge it. In this paper, we first develop a taxonomy of mechanisms enabling capture to provide a comprehensive understanding of the problem. Grounded in design science research (DSR) methodologies and an extensive scoping review of existing literature and media reports, our taxonomy of capture consists of 27 mechanisms across five categories.

We then develop an annotation template incorporating our taxonomy, and manually annotate and analyse 100 news articles. The purpose behind this analysis is twofold: to validate our taxonomy and to provide a novel quantification of capture mechanisms and dominant narratives. Our analysis identifies 249 instances of capture mechanisms, often co-occurring with narratives that rationalise such capture.

We find that the most recurring categories of mechanisms are Discourse & Epistemic Influence, concerning narrative framing, and Elusion of Law, related to violations and contentious interpretations of antitrust, privacy, copyright and labour laws. We further find that Regulation Stifles Innovation, Red Tape and National Interest are the narratives most frequently invoked to rationalise capture.

We emphasize the extent and breadth of regulatory capture by the coalescing forces of Big AI and governments as something policy makers and the public ought to treat as an emergency. Finally, we put forward key lessons learned from other industries, along with transferable tactics for uncovering, resisting and challenging Big AI capture and for envisioning counter-narratives.

Full paper: PDF | HTML | TeX source


Most AI companion platforms advertise $9.99 or $12.99 per month, but the real monthly cost for an active user is 2-5x that once token systems kick in. On one major platform, where I tracked every transaction for 30 days, the advertised price is $12.99, yet regular users end up spending $25-60 monthly once image generation and voice tokens are factored in. On most platforms, the subscription price is the floor, not the ceiling; platforms with genuinely flat pricing, where what you see is what you pay, are rare. Full breakdown: medium.com/@companaya/i-spent-500-testing-ai-companion-apps-real-monthly-costs-revealed-2026-8a6c0532778d
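The "floor, not ceiling" effect is easy to model. In the sketch below, the $12.99 base price is from the post, but the per-token prices and the usage volumes are hypothetical, chosen only so the totals land in the $25-60 range the post reports:

```python
# Illustrative model of subscription + token add-on pricing.
# BASE_SUBSCRIPTION is from the post; token prices and usage
# volumes below are hypothetical.
BASE_SUBSCRIPTION = 12.99

def effective_monthly_cost(base, image_gens, voice_minutes,
                           image_price=0.25, voice_price=0.10):
    """Base subscription plus per-use token charges for one month."""
    return base + image_gens * image_price + voice_minutes * voice_price

# Two hypothetical usage profiles on the advertised $12.99 plan:
light = effective_monthly_cost(BASE_SUBSCRIPTION, image_gens=50, voice_minutes=60)
heavy = effective_monthly_cost(BASE_SUBSCRIPTION, image_gens=120, voice_minutes=100)
print(f"light user: ${light:.2f}, heavy user: ${heavy:.2f}")
# light user: $31.49, heavy user: $52.99
```

Even the light profile here more than doubles the advertised price, which matches the post's 2-5x observation.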


Technology

42543 readers

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads; such posts are otherwise subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low-effort content

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies' actions affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 7 years ago