[-] Infrapink@thebrainbin.org 117 points 1 month ago

I'm a line worker in a factory, and I recently managed to give a presentation on "AI" to a group of office workers (it went well!). One of the people there is in regular contact with the C_Os but fortunately is pretty reasonable. His attitude is "We have this problem; what tools do we have to fix it?", so he isn't impressed by "AI" yet. The C_Os, alas, insist it's the future. They keep hammering on at him to get everybody to integrate "AI" into their workflows, but they have no idea how to actually do that (let alone what the factory actually does); they just say "We have this tool, use it somehow".

The reasonable manager asked me how I would respond if a C_O said we would get left behind if we don't embrace "AI". I quipped that it's fine to be left behind when everybody else is running towards a cliff. I was pretty proud of that one.

[-] kayzeekayzee 46 points 1 month ago

Try giving them each an allen wrench and tell them to apply it to their daily lives to boost productivity.

[-] FearMeAndDecay@literature.cafe 18 points 1 month ago

That’s a banger line and I’m totally stealing it

[-] Infrapink@thebrainbin.org 16 points 1 month ago

Hey now, stealing is wrong.

I will give it to you as a gift.

[-] acockworkorange@mander.xyz 9 points 1 month ago

I won't steal it, I'll scrape it for my model.

[-] ravelin@lemmy.ml 15 points 1 month ago

As usual, I fear for the reasonable manager's job.

Reasonable managers usually get plowed out of the way by unreasonable C levels who just see their reasonable concerns as obstructions.

[-] Strider@lemmy.world 9 points 1 month ago

Everyone bought in so hard that they need to (make you/us) use it. Otherwise it will be a financial disaster. It's shit leaking down all the way.

(of course it has uses. But it's not AGI!)

[-] wonderingwanderer@sopuli.xyz 6 points 1 month ago* (last edited 1 month ago)

In the early stages, it had potential to develop into something useful. Legislators had a chance to regulate it so it wouldn't become toxic and destructive of all things good, but they didn't do that because it would "hinder growth," again falling for the fallacy that growth is always good and desirable.

But to be honest, some of the earlier LLMs were much better than the ones now. They could have been forked and developed into specialized models trained exclusively on technical documents relevant to their field.

Instead, AI companies all wanted to have the biggest, most generalized models they could possibly develop, so they scraped as much data as they possibly could and trained their LLMs on enormous amounts of garbage, thinking "oh just a few billion more data points and it will become sentient" or something stupid like that. And now you have Artificial Idiocy that hallucinates nonstop.

Like, an LLM trained exclusively on peer-reviewed journals could make a decent research assistant or expedited search engine. It would help with things like literature reviews, collating data, and meta-analyses, saving time for researchers so they could dedicate more of their effort towards the specifically human activities of critical thinking, abstract analysis, and synthesizing novel ideas.

An ML model trained exclusively on technical diagrams could render more accurate simulations than one trained on a digital fuckton of slop.

[-] primalmotion@lemmy.ml 89 points 1 month ago

This tells us one very important thing: the only job that could be replaced by AI is the executive's. Shocking

[-] 7rokhym@lemmy.ca 22 points 1 month ago

Also, many consultants

[-] friend_of_satan@lemmy.world 49 points 1 month ago

I'm so sick of fixing AI slop code, especially because there's no love for people who fix the slop, only for the people who shipped the slop.

[-] mrgoosmoos@lemmy.ca 20 points 1 month ago

Hell I'm sick of fixing slop work from actual people

I am now semi-convinced that half of my co-workers are AI bots due to some of the dumb shit that they say

like literally AI hallucinations and reversals, coming from real people

[-] friend_of_satan@lemmy.world 13 points 1 month ago* (last edited 1 month ago)

When a human gives you bad code, you can pull them aside and coach them. With AI, that doesn't work.

[-] Triumph@fedia.io 33 points 1 month ago

They have to justify the cost of the consultants they paid to tell them to spend money on it.

[-] pdxfed@lemmy.world 17 points 1 month ago

The emperor's new clothes in the trillions.

[-] nucleative@lemmy.world 29 points 1 month ago

Any boss ramming a tool down their workers' throats without understanding it or validating its usefulness is not a particularly good boss.

There’s bosses, and then there’s directors, and managers, and c-suites. Essentially, the people who don’t do any real fucking work are super impressed by it.

[-] Croquette@sh.itjust.works 25 points 1 month ago

It's a productivity miracle for managers because they can bullshit their jobs faster and easier.

[-] Smaile@lemmy.ca 10 points 1 month ago* (last edited 1 month ago)

yup, they don't realize it will replace them, not their workers. and if you are that manager reading this, remember their goal is no middle class.

That means you.

Not your grunts, who get paid dogshit and are little more than soulless husks these days.

You.

[-] Kaz@lemmy.org 8 points 1 month ago

This, because all management does is communicate that they think it's amazing.

Try and get it to do complicated or edge case things and it struggles, but management never ever touch complicated stuff! They offload it

We just had an all hands where they were circlejerking about how incredible “AI” is. Then they started talking about OKRs around using that shit on a regular basis.

On the one hand, I’m more than a little peeved that none of the pointed and cogent concerns that I have raised on personal, professional, hobbyist, sustainability, environmental, public infrastructure, psychological, social, or cultural grounds - backed up with multiple articles and scientific studies that I have provided links to in previous all-hands meetings - have been met with anything more than hand-waving before being simply ignored outright.

On the other hand, I'm just going to make a fucking cron job pointed at a script that hits the LLM API they're logging usage on, asking it to summarize the contents, intent, capabilities, advantages, and drawbacks of random GitHub repos over a certain SLOC count. There's a part of me that feels bad for using such a wasteful service in such a wasteful fashion. But there's another part of me that is more than happy to waste their fucking money on LLM tokens if they're gonna try to make me waste my time like that.
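For what it's worth, the cron-driven token-burner described above could be sketched roughly like this. Everything here is invented for illustration: the endpoint shape, the `LLM_API_URL`/`LLM_API_KEY` environment variables, and the repo list are all assumptions, not any real company's setup.

```python
import json
import os
import random
import urllib.request

# Hypothetical list of big repos to ask about; in the commenter's version
# this would be repos filtered by SLOC count.
REPOS = [
    "torvalds/linux",
    "python/cpython",
    "rust-lang/rust",
]


def build_prompt(repo: str) -> str:
    """Build the summarization request for one GitHub repo."""
    return (
        f"Summarize the contents, intent, capabilities, advantages, "
        f"and drawbacks of the GitHub repository {repo}."
    )


def burn_tokens(api_url: str, api_key: str) -> None:
    """Send one pointless summarization request (meant to run from cron)."""
    payload = json.dumps({"prompt": build_prompt(random.choice(REPOS))})
    req = urllib.request.Request(
        api_url,
        data=payload.encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    # The response is discarded; the point is only that usage gets logged.
    urllib.request.urlopen(req)


if __name__ == "__main__" and "LLM_API_URL" in os.environ:
    burn_tokens(os.environ["LLM_API_URL"], os.environ["LLM_API_KEY"])
```

A crontab line like `0 * * * * python3 burn_tokens.py` would then fire it hourly.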

[-] acchariya@lemmy.world 15 points 1 month ago

If you have to define OKRs to get people to use a tool, perhaps the tool is not a good investment.

Hey man you are preaching to the choir here lol

[-] luciferofastora@feddit.org 6 points 1 month ago

Textbook example of "When a measure becomes a target, it is no longer a good measure" (if it ever was)

[-] Whats_your_reasoning@lemmy.world 24 points 1 month ago

I don’t work with computers or coding, yet even in early childhood education/therapy some people are pushing for AI. Someone used it to make “busy scene” pictures for students to find specific things in. I hate using them. Prior to this, we used “busy scene” images that are easy to find online, full of quirky, funny details that the kids enjoy spotting.

But I can barely look at the slop images that were generated. So many of the characters have faces that look like wax figures left in the hot summer sun. The “toys” in the scene are nonsensical shapes somewhere between unusable building blocks and poorly-formed puzzle pieces. Looking at the previous, human-made pictures brought me joy, but this AI garbage is a mess that makes me sad. There’s no direction, no fun details to find, just a chaotic, repetitive scene. I bet the kids I work with could draw something more interesting than this.

[-] Hazor@lemmy.world 13 points 1 month ago

I've never understood these use cases, pushing for generative AI in places where there's already an abundance of human-made resources. Often for free. Is it just laziness? A case of "Why take 2 minutes for a Google search when I could take 1 minute for a generative AI prompt?"

[-] fibojoly@sh.itjust.works 21 points 1 month ago

Our new tech lead loves fucking AI, which lets him refactor our terraform (I was already doing that), write pipelines in GitLab, and do lots of other shiny cool things (after many, many, many attempts, if his commit history is any indication).

Funnily, he won't touch our legacy code. Like, he just answers "that's outside my perimeter" when he's clearly the one who should be helping us handle that shit. Also it's for a mission critical part of our company. But no, outside his perimeter. Gee I wonder why.

[-] LordCrom@lemmy.world 21 points 1 month ago

I was asked to create a simple script... Great, I could have knocked that out in maybe 3 or 4 hours.

Boss insisted I use A.I. ... Fine whatever.

The code it spit out was OK, but didn't work... So I took it and started recoding and fixing the bugs.

It took over 3 hours to get that sloppy code to a working state.

Boss asked why it took so long; AI works in seconds. He didn't understand that I had to fix the crap code he forced me to use.

Look, AI does pattern matching like a champ. But it cannot create... It doesn't imagine...

[-] Reygle@lemmy.world 14 points 1 month ago

Honestly at this point AI is bad and human critical thinking is the worst I've ever seen in my life.

I know people who I expect would collapse inward without AI holding their hands, and here's the surprise: I can't wait to see it happen. I'm really holding out for the implosions and REALLY hope they happen when I'm nearby.

[-] bridgeenjoyer@sh.itjust.works 4 points 1 month ago

Be the change you want to see in the world. Data centers need to disappear :)

[-] jjjalljs@ttrpg.network 12 points 1 month ago

I used some AI at work to do some stuff in polars, because I don't really know that library very well.

As a result I have a function that does what I asked for (I wrote tests), but I don't understand it and didn't really learn anything. Not a great trade.

[-] despite_velasquez@lemmy.world 12 points 1 month ago

It's undeniable that AI is great at problems with tight feedback loops, like software engineering.

Most jobs don't have the tight feedback loops that software engineering has

[-] CandleTiger@programming.dev 25 points 1 month ago

It's undeniable that AI is great at problems with tight feedback loops, like software engineering

I, CandleTiger, do hereby deny that AI is great at software engineering.

[-] vrighter@discuss.tchncs.de 19 points 1 month ago

it is totally deniable. Because it's simply not true. It's been studied.

[-] laranis@lemmy.zip 11 points 1 month ago* (last edited 1 month ago)

One nit: they're good at writing code. Specifically, code that has already been written. Software Engineers and Computer Scientists still need to exist for technology to evolve.

[-] Reygle@lemmy.world 10 points 1 month ago

Most of my conversations with management are spent talking them out of the heinous baloney they're convinced of because "Gemini says..." No boss, Gemini made some shit up. Scroll past it or stop wasting my time.

[-] Jankatarch@lemmy.world 9 points 1 month ago

And the only reason they can get away with not charging for the training and computation costs is a bunch of rich people essentially gambling a small portion of their generational wealth.

[-] Blaster_M@lemmy.world 9 points 1 month ago

Dilbert manager energy

[-] hexagonwin@lemmy.sdf.org 7 points 1 month ago

it's just great at pretending to do something, good enough to trick stupid execs

[-] Formfiller@lemmy.world 6 points 1 month ago

One of my college professors is mandating ChatGPT

[-] RBWells@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

They are pushing it at my work. I spent half a day trying to train Copilot to build me a report from one PDF and one way-too-formatted Excel sheet. No go; the over-formatted Excel stumped it, and I had to clean it up first. I am booking payroll, and the fucking system we use refuses to generate a report with the whole cost: there is one for gross-to-net, and a separate one for the employer cost, not available in Excel and not in a format that can be put in a spreadsheet. I need to split the total into departments & job cost codes. (ETA: the payroll system also doesn't handle the job costing, so even after I get the total cost, more manual work.)

I worked with the department who sends me this trash and, glory be, there was a CSV for the gross-to-net one. I finally wrestled it into getting this right and asked it "what do I ask next time to get this result the first time", and it now does a reliable job of this, BUT:

All it's doing is making a report that the payroll system really and truly ought to be capable of producing. And I guess it lets me honestly say, "sure boss, I use the Copilot". It's not adding anything at all, just making up for a glaring defect in the reporting available from the payroll company. Give me access to that system and I could build the report myself; it doesn't need AI at all.

[-] echodot@feddit.uk 5 points 1 month ago

This is my problem with AI where I work. I can use it to get the result I want (eventually) although I have to do some editing.

But I can also use the python script that has been working fine for years, which gets me 99% of the way there in 15 seconds. It would be faster but the script is terribly unoptimized because I'm not a programmer.

[-] Tollana1234567@lemmy.today 4 points 1 month ago

They are stringing it along so they can get their golden parachutes and bounce.

this post was submitted on 28 Jan 2026
570 points (100.0% liked)

Fuck AI

6448 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago