submitted 6 days ago* (last edited 6 days ago) by AcidiclyBasicGlitch@sh.itjust.works to c/technology@lemmy.world

A PowerPoint presentation made public by the Post claims that the Department of Housing and Urban Development (HUD) used the AI tool to make “decisions on 1,083 regulatory sections”, while the Consumer Financial Protection Bureau used it to write “100% of deregulations”.

The Post spoke to three HUD employees who told the newspaper AI had been “recently used to review hundreds, if not more than 1,000, lines of regulations”.

Oh, good. Everything was feeling a little too calm, so of course they're doing this right fucking now.

all 29 comments
[-] rimu@piefed.social 56 points 6 days ago* (last edited 6 days ago)

Imagine a junior dev called "Big Balls" starting up Claude Code and telling it "Hey I need you to make this app great, remove all unnecessary code" and then just accepting whatever it proposes. This is an app with no unit tests, no dev environment, running in production, and if it crashes people die in concentration camps.

Literally vibe coding a country.

[-] Mirshe@lemmy.world 13 points 6 days ago

Because DOGE is still running on Elon Musk's strategy of "move fast, break things, and don't fix anything until shit's on fire". People won't be dying in concentration camps because of DOGE, they'll just be homeless and probably half-dead of starvation (because of the repeal of the PFDA).

[-] MangoCats@feddit.it 1 points 5 days ago

The Netherlands is 20 years ahead of the US in this respect: https://en.wikipedia.org/wiki/Dutch_childcare_benefits_scandal

[-] etherphon@piefed.world 49 points 6 days ago

Consumer Financial Protection Bureau used it to write “100% of deregulations”.

Doesn't sound like very good protection. It should be illegal to use "AI" like this. Making critical decisions with a technology well known for making massive errors is so fucking stupid I can't even.

[-] fartographer@lemmy.world 19 points 6 days ago

It should be illegal to use "AI" like this

That would require the people trying to pass laws to deregulate AI to stop trying to pass laws to deregulate AI. But no, that's not what we want. We want more money going to the top while paying fewer people along the way.

With the way Xitter "reprogrammed" Gr0ck's results, I wouldn't be surprised if they're just copying and pasting from Project 2025 and telling whichever LLM to reword everything into legalese so that they can claim ignorance about how their laws are killing their voters.

[-] etherphon@piefed.world 4 points 6 days ago

Yeah, I'm afraid we're gonna miss the boat on this one too, just like we did with social media. We learned nothing.

[-] floofloof@lemmy.ca 9 points 6 days ago

Plenty of people know what's up. The ones not learning the lessons are sociopaths who serve only themselves (and they know too but they don't care), society's most ignorant and gullible, and people so consumed with resentment that they've lost all purpose but to hurt.

[-] fartographer@lemmy.world 1 points 6 days ago

I mean, what do they have to lose? Just a little wasted time subpoenaing some CEOs and acting flabbergasted while they blatantly lie about not knowing what was going on.

And then politicians using the insane logic of, "if you didn't know this would fuck everyone, then why'd you let us buy it to fuck people???"

[-] floofloof@lemmy.ca 13 points 6 days ago

But it brings profits to tech companies run by centibillionaires on their way to becoming trillionaires. And that's the point of human existence.

[-] zeca@lemmy.eco.br 26 points 6 days ago

This reminds me of those PC optimizer tools like CCleaner that promised to find a bunch of things to uninstall and redundant/trash files to delete to make your PC 3000x faster, but ended up breaking your system.

[-] dan1101@lemmy.world 20 points 6 days ago

Anyone who does this either doesn't understand how generative AI works or does understand and is just using it as an excuse to deregulate.

[-] spankmonkey@lemmy.world 21 points 6 days ago

It is the second thing. They could just delete the regulations they don't like outright; inserting AI into the process is just to pretend it was some logical process.

[-] EnsignWashout@startrek.website 17 points 6 days ago

Yes. That's what AI actually adds - plausible deniability.

[-] AlecSadler 7 points 6 days ago

Absolutely the second. Once something has been destroyed, it takes years or decades to get it back. They're purposely banking on going overboard, knowing full well it will collapse all the institutions and that rebuilding can't happen at the same pace.

[-] Truscape 20 points 6 days ago

I wonder if those using the tool are prepared for "Unforeseen Consequences"...

Eh, who am I kidding. Of course they're not.

[-] Corkyskog@sh.itjust.works 4 points 6 days ago

Of course they are, the tool is the excuse and the "unforeseen consequences" are the goal.

[-] MNByChoice@midwest.social 13 points 6 days ago

Is there a list of employees of DOGE? I would like to write them letters.

The People Carrying Out Musk’s Plans at DOGE

I think several of them have quit by now, but I'm sure they would still appreciate your helpful feedback.

[-] GrumpyDuckling@sh.itjust.works 8 points 6 days ago

There's one whose dad is a professor at a university. You could write to the university about it. They would like that a lot, I think.

[-] AlecSadler 2 points 6 days ago

Or target them.

[-] forrgott@lemmy.sdf.org 14 points 6 days ago

No, they did not use an algorithm to make the decisions. They are making the choices, but, being the feckless cowards they are, they're actually trying to set it up so they can hide behind a fucking computer program.

Sigh ...

[-] lemmie689@lemmy.sdf.org 5 points 6 days ago

That's the plot of the Logan's Run TV show:

In a change from the book and film, the television series had the city secretly run by a cabal of older citizens who promised Francis a life beyond the age of 30 as a city elder if he can capture the fugitives.

[-] dangling_cat@piefed.blahaj.zone 4 points 6 days ago

The left needs to use LLMs to counter this nonsense. Like, use an LLM to patch loopholes and add traps to prevent further LLM abuse.

It's not that LLMs are unfit for this job; it's that we don't have the manpower to defend against this mass-produced surgical sabotage.

[-] SabinStargem@lemmy.today 2 points 5 days ago

Yup.

The greatest danger of AI is corporations and governments having sole control of it. That is why it is important for ordinary people not to reject AI usage, but to make it cheap and common enough that no one has to rely on the elite for access.

Be it guns, food, shelter, or knowledge, no one should have a monopoly. That is just asking to be abused.

Oh shit, sorry, my bad! Thought you were replying to a different post. Yikes, sorry again.

[-] umbrella@lemmy.ml 3 points 6 days ago

jesus christ. if your leaders weren't so evil i would be sad for them.

[-] AlecSadler 4 points 6 days ago

Fuck them all, I hope they get cancer and die a slow, agonizing death.

this post was submitted on 26 Jul 2025
