1570
submitted 1 week ago* (last edited 1 week ago) by mayabuttreeks@lemmy.ca to c/fuck_ai@lemmy.world

link to archived Reddit thread; original post removed/deleted

top 50 comments
[-] luciferofastora@feddit.org 46 points 5 days ago

I'm a data analyst and the primary authority on the data model of a particular source system. Most questions about figures from that system that can't be answered directly and easily in the frontend end up with me.

I had a manager show me how some new LLM they were developing (which I had contributed some information about the model to) could quickly answer questions that I usually have to answer manually, as part of a pitch to make me switch to his department so I could apply my expertise to improving this fancy AI instead of answering questions manually.

He entered a prompt and got a figure that I knew wasn't correct, so I queried my data model for the same info and got a significantly different answer. Given how much said manager leaned on my expertise in the first place, he couldn't very well challenge my results and got all sheepish about how the AI was still in development and all.

I don't know how that model arrived at that figure. I don't know if it generated and ran a query against the data I'd provided. I don't know if it just invented the number. I don't know how the devs would figure out the error and how to fix it. But I do know how to explain my own queries, how to investigate errors and (usually) how to find a solution.
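For illustration, here's a minimal sketch of that kind of spot check (the table, column, and figures are made up, not my actual data model): run the question as a direct query against the source data and compare it to what the LLM claimed.

```python
# Hedged, made-up illustration: the table, column, and figures are not
# from any real system. The point is the shape of the check, not the query.
import sqlite3

def verify_llm_figure(db_path: str, llm_figure: float) -> bool:
    """Compare a figure claimed by an LLM against a direct query."""
    conn = sqlite3.connect(db_path)
    try:
        (ground_truth,) = conn.execute(
            "SELECT SUM(amount) FROM orders WHERE status = 'closed'"
        ).fetchone()
    finally:
        conn.close()
    if ground_truth != llm_figure:
        print(f"LLM said {llm_figure}, the data model says {ground_truth}")
    return ground_truth == llm_figure
```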

Anyone who relies on a random text generator - no matter how complex the generation method that makes it sound human - to generate facts is dangerously inept.

[-] jj4211@lemmy.world 15 points 5 days ago

I don’t know how the devs would figure out the error and how to fix it.

This is maybe the biggest factor that people don't get when thinking of these models in the context of software. "Oh, it got it wrong, but the developers will fix it in an update." Nope. They can fix traditional software mistakes, but not LLM output and machine learning behavior. They can throw more training data at it (which sometimes just changes what it gets wrong) and hope for the best. They can do a better job of curating the context window to give the model the best shot at outputting the right stuff (e.g. the guy who got Opus to generate a slow, crappy, buggy compiler had to write a traditional filter to find and show only the 'relevant' compiler output back to the models). They can try to generate code to do what you want and have you review the code and correct issues. But debugging and fixing the model itself... that's just not a thing at all.
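To make the curation point concrete, here's a rough sketch of that kind of filter (purely illustrative, not the actual filter from the compiler experiment; the file names and diagnostic format are assumptions): only diagnostics that mention files in the current changeset get fed back to the model.

```python
# Illustrative sketch of context-window curation: only show the model
# diagnostics that mention files we actually changed. Not the real filter
# from the compiler experiment; names and formats are assumptions.
import re

def filter_diagnostics(raw_output: str, changed_files: set, max_lines: int = 40) -> str:
    relevant = []
    for line in raw_output.splitlines():
        # Matches typical gcc/clang-style diagnostics: "file.c:12:5: error: ..."
        m = re.match(r"([^\s:]+):\d+(?::\d+)?:\s*(?:fatal )?(?:error|warning)", line)
        if m and m.group(1) in changed_files:
            relevant.append(line)
    return "\n".join(relevant[:max_lines])  # cap what reaches the context window

raw = "lexer.c:10:3: error: unknown type name 'tok'\n" \
      "parser.c:99:1: warning: unused variable 'x'"
print(filter_diagnostics(raw, {"lexer.c"}))  # only the lexer.c error survives
```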

I was in a meeting where a sales executive was bragging about the 'AI sales agent' they were working on, while admitting frustration with the developers and a bit of confusion about why they weren't making progress, when those same developers had always made decent progress before and should supposedly be even faster now with AI tools to help them... It seemed eternally stuck in a state that almost worked but not quite, no matter what model or iteration they went to, no matter how much budget they allocated. When it came down to the specific facts and figures, it would always screw up.

I cannot understand how these executives can wade in the LLM pool for so long and still believe in capabilities beyond anything anyone has actually experienced.

[-] pseudo@jlai.lu 52 points 6 days ago

When you delegate, whether to a person, a tool, or a process, you check the result. You make sure that the delegated tasks get done correctly and that the results are what you expected.

Finding out only after months, and only by luck, that this wasn't happening shows incompetence. Look for the incompetent.

[-] flying_sheep@lemmy.ml 13 points 5 days ago* (last edited 5 days ago)

Yeah. Trust is also a thing: if you delegate to a person you've seen get the job done multiple times before, you won't check as closely.

But this person asked to verify and was told not to. Insane.

[-] mr_sunburn@lemmy.ml 88 points 6 days ago* (last edited 6 days ago)

I raised this as a concern in my corporate role when an AI tool that was being distributed and pushed for usage cited two hallucinated data points in a large group setting. I happened to know my area well: the data was not just marginally wrong but way off, and I was able to check the figures quickly. After verifying on my laptop, I corrected it in the room, and the reaction was a sort of harmless whoops. The rest of the presentation continued without any apparent acknowledgement that the rest of the figures should be checked.

When I approached the head of the team that constructed the tool after the meeting and shared the inaccuracies and my concerns, he told me that he'd rather have more data fluency through the ease of the tool and that inaccuracies were acceptable because of the convenience and widespread usage.

I suspect stories like this are happening across my industry. Meanwhile, the company put out a press release about our AI efforts (literally using Gemini's Gem tool and custom ChatGPTs seeded with Google Drive) as something investors should be very excited about.

[-] squaresinger@lemmy.world 78 points 6 days ago

When I approached the head of the team that constructed the tool after the meeting and shared the inaccuracies and my concerns, he told me that he’d rather have more data fluency through the ease of the tool and that inaccuracies were acceptable because of the convenience and widespread usage.

"I prefer more data that's completely made up over less data that is actually accurate."

This tells you everything you need to know about your company's marketing and data analysis department and the whole corporate leadership.

Potemkin leadership.

[-] whoisearth@lemmy.ca 26 points 6 days ago

Honestly, this is not a new problem, just a further expression of the larger one.

"Leadership" becomes removed from the day to day operations that run the organization and by nature the "cream" that rises tend to be sycophantic in nature. Our internal biases at work so it's no fault of the individual.

Humanity is its own worst enemy lol

[-] squaresinger@lemmy.world 16 points 6 days ago

It's not a new problem, and it's been the case for a long time. But this is a good visualization of it.

Everyone in a company has their own goals, from the lowly actual worker who just wants to pay the bills with as little effort as possible, to departments that want to justify their useless existence, to leadership who mainly want to look good to investors and collect a nice bonus.

That some companies end up actually making products that ship and that people want to use is more of an unintended side effect than the intended purpose of anyone's work.

[-] altasshet@lemmy.ca 26 points 6 days ago

That makes no sense. The inaccuracies are even less acceptable with widespread use!

[-] BlameTheAntifa@lemmy.world 19 points 6 days ago

It’s technological astrology. We’re doomed.

[-] AnnaFrankfurter@lemmy.ml 66 points 6 days ago

I work in a regulated sector and our higher-ups are pushing AI hard. Their response to AI hallucinations is to just put a banner on all internal AI tools telling you to cross-verify, plus some stupid quarterly "trainings", but almost nobody I know ever checks and verifies the output. And I know of at least 2 instances where we sent extra money to a third party because AI hallucinated some numbers.

[-] wizardbeard@lemmy.dbzer0.com 33 points 6 days ago

My workplace (finance company) bought out an investments company for a steal because they were having legal troubles, managed to pin it on a few individuals, then fired the individuals under scrutiny.

Our leadership thought the income and amount of assets they controlled was worth the risk.

This new group has been the biggest pain in the ass. Complete refusal to actually fold into the company culture, standards, even IT coverage. Kept trying to sidestep even basic stuff like returning old laptops after upgrades.

When I was still in tech support, I had two particularly fun interactions with them. One was when one of their top earners got fired for shady shit, and a month later they discovered he had set his mailbox to autoreply to every email, pointing his former clients to his personal email. Then they hired the guy back, and he lasted a whole day before they caught him trying to steal as much private company info as he could grab. The other was a call from a poor intern they had hired and then dumped with responsibility for an awful home-grown mess of Microsoft Access, Excel, and Word docs all linked over ODBC. Our side of IT refused to support it and kept asking them to meet with project management and our internal developers to get it brought up into this century. They refused to let us help them.

In the back half of last year, our circus of an Infosec Department finally locked down access to unapproved LLMs and AI tools. Officially we had been restricted to one specific tool by written policy, signed by all employees, for over a year, but it took someone getting caught by a coworker putting private info into a free public chatbot for them to enforce it.

Guess what sub-company is hundreds of thousands of dollars into a shadow IT project that has gone through literally none of the proper channels, to start using an explicitly disallowed LLM to process private customer data?

[-] MuteDog@lemmy.world 22 points 5 days ago

Apparently that reddit post itself was generated with AI. Using AI to bash AI is an interesting flex.

[-] nihluskryik@lemmy.ml 7 points 5 days ago

How did people find out it was AI generated? Seems natural to me. Scary.

[-] MuteDog@lemmy.world 1 points 1 day ago

An acquaintance ran it through Pangram, which says it's 100% AI. How reliable that detection is, IDK ¯\_(ツ)_/¯

[-] sp3ctr4l@lemmy.dbzer0.com 56 points 6 days ago* (last edited 6 days ago)

As an unemployed data analyst / econometrician:

lol, rofl, perhaps even... lmao.

Nah though, it's really fine. My quality of life is enormously better barely surviving off of SSDI and not having to explain data analytics to thumb-sucking morons (VPs, 90% of other team leads), or fix or cover for all their mistakes.

Yeah, sure, just have the AI do it, go nuts.

I am enjoying my unexpected early retirement.

[-] Bubbaonthebeach@lemmy.ca 38 points 6 days ago

To everyone I've talked to about AI, I've suggested a test: take a subject you know you're an expert in, then ask the AI questions you already know the answers to, and see what percentage it gets right, if any. Often people find that plausible-sounding answers are produced, but if you know the subject, you know it isn't quite fact. A recovery from an injury might be listed as 3 weeks when the average is 6-8, or similar. Someone who didn't already know the correct information could be harmed by the AI's "guessed" response. AI can have its uses, but its output needs to be heavily scrutinized before being passed on. If you're already good at something, that usually means using AI just wastes your time.
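If you want to run that test systematically, here's a minimal sketch. ask_model() is a hypothetical wrapper around whatever chatbot you're evaluating, and the substring judge is deliberately crude; reading each answer yourself is better.

```python
# Minimal sketch of the suggested expert self-test. ask_model() is a
# hypothetical wrapper around the chatbot under evaluation; the substring
# judge is deliberately crude.
def run_expert_test(qa_pairs, ask_model, judge=None):
    judge = judge or (lambda answer, truth: truth.lower() in answer.lower())
    correct = 0
    for question, known_answer in qa_pairs:
        answer = ask_model(question)
        if judge(answer, known_answer):
            correct += 1
        else:
            print(f"MISS: {question}\n  model: {answer}\n  truth: {known_answer}")
    return correct / len(qa_pairs)

# Example with a deliberately wrong stand-in "model", echoing the
# injury-recovery case above:
qa = [("Typical recovery time for this injury?", "6-8 weeks")]
fake_model = lambda q: "Recovery usually takes about 3 weeks."
print(f"accuracy: {run_expert_test(qa, fake_model):.0%}")  # prints 0%
```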

[-] NABDad@lemmy.world 17 points 6 days ago

I had a very simple script. All it does is trigger an action on a monthly schedule.

I passed the script to Copilot to review.

It caught some typos. It also said the logic of the script was flawed and it wouldn't work as intended.

I didn't need it to check the logic of the script. I knew the logic was sound because it was a port of a script I was already using. I asked because I was curious about what it would say.

After restating the prompt several times, I was able to get it to confirm that the logic was not flawed, but the process did not inspire any confidence in Copilot's abilities.
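For the curious, here's a hypothetical reconstruction of the kind of script in question (not the actual one): a guard that a scheduler runs daily but that only fires on the first Monday of the month. The day check is sound, yet it's exactly the sort of unusual-looking logic an LLM reviewer will confidently call flawed.

```python
# Hypothetical reconstruction, not the actual script. A scheduler runs
# this daily; the guard makes the action effectively monthly. The
# "day <= 7" test is sound (the first Monday of a month always falls on
# day 1-7), but it looks suspicious at a glance.
import datetime
import subprocess

def run_monthly_action() -> None:
    today = datetime.date.today()
    if today.weekday() == 0 and today.day <= 7:  # first Monday of the month
        subprocess.run(["/usr/local/bin/monthly-report.sh"], check=True)

if __name__ == "__main__":
    run_monthly_action()
```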

[-] tover153@lemmy.world 47 points 6 days ago

Before anything else: whether the specific story in the linked post is literally true doesn’t actually matter. The following observation about AI holds either way. If this example were wrong, ten others just like it would still make the same point.

What keeps jumping out at me in these AI threads is how consistently the conversation skips over the real constraint.

We keep hearing that AI will “increase productivity” or “accelerate thinking.” But in most large organizations, thinking is not the scarce resource. Permission to think is. Demand for thought is. The bottleneck was never how fast someone could draft an email or summarize a document. It was whether anyone actually wanted a careful answer in the first place.

A lot of companies mistook faster output for more value. They ran a pilot, saw emails go out quicker, reports get longer, slide decks look more polished, and assumed that meant something important had been solved. But scaling speed only helps if the organization needs more thinking. Most don’t. They already operate at the minimum level of reflection they’re willing to tolerate.

So what AI mostly does in practice is amplify performative cognition. It makes things look smarter without requiring anyone to be smarter. You get confident prose, plausible explanations, and lots of words where a short “yes,” “no,” or “we don’t know yet” would have been more honest and cheaper.

That’s why so many deployments feel disappointing once the novelty wears off. The technology didn’t fail. The assumption did. If an institution doesn’t value judgment, uncertainty, or dissent, no amount of machine assistance will conjure those qualities into existence. You can’t automate curiosity into a system that actively suppresses it.

Which leaves us with a technology in search of a problem that isn’t already constrained elsewhere. It’s very good at accelerating surfaces. It’s much less effective at deepening decisions, because depth was never in demand.

If you’re interested, I write more about this here: https://tover153.substack.com/

Not selling anything. Just thinking out loud, slowly, while that’s still allowed.

[-] AllNewTypeFace@leminal.space 34 points 6 days ago

My broseph in Christ, what did you think a LLM was?

[-] GalacticSushi 27 points 6 days ago

Bro, just give us a few trillion dollars, bro. I swear bro. It'll be AGI this time next year, bro. We're so close, bro. I just need some money, bro. Some money and some god-damned faith, bro.

[-] untorquer@lemmy.world 47 points 6 days ago

This would suggest the leadership positions aren't required for the function of the business.

[-] PapaStevesy@lemmy.world 23 points 6 days ago

This has always been the case, in every industry.

[-] db_null@lemmy.dbzer0.com 17 points 5 days ago

I guarantee you this is how several, if not most, Fortune 500 companies currently operate. The 50k Dow isn't just propped up by the circlejerk spending on imaginary RAM. There are bullshit reports being generated and presented every day.

I wait patiently. Somewhere there is a diligent bureaucrat going through fiscal reports line by line. It won't add up... receipts will be requested... bubble goes pop.

[-] mudkip@lemdro.id 23 points 6 days ago

Ah yes, what a surprise. The random word generator gave you random numbers that aren't actually real.

[-] Strider@lemmy.world 36 points 6 days ago

It doesn't matter. Management wants this and will not stop until they run against a wall at full speed. 🤷

[-] Jankatarch@lemmy.world 21 points 6 days ago

Tbf, at this point the corporate economy is made up anyway, so as long as investors are gambling away their endless generational wealth, does it matter?

[-] wabasso@lemmy.ca 9 points 5 days ago

This is how I’m starting to see it too. Stock market is just the gambling statistics of the ownership class. Line goes down and we’re supposed to pretend it’s harder to grow food and build houses all of a sudden.

[-] FlashMobOfOne@lemmy.world 38 points 6 days ago

Jesus Christ, you have to have a human validate the data.

[-] 474D@lemmy.world 33 points 6 days ago

Exactly, this is like letting Excel autofill finish the spreadsheet and going "looks about right".

[-] FlashMobOfOne@lemmy.world 26 points 6 days ago

And that's a good analogy, as people have posted screenshots of Copilot getting basic addition wrong in Excel.

Whoever implemented this agent without proper oversight needs to be fired.
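In that spirit, here's a minimal sketch of what deterministic validation can look like; the workbook name, sheet, and cell ranges are made up for illustration: recompute the total from the raw cells instead of trusting whatever the agent filled in.

```python
# Minimal sketch: recompute a total from raw cells instead of trusting
# an AI-filled figure. Workbook, sheet, and ranges are made up.
from openpyxl import load_workbook

wb = load_workbook("quarterly.xlsx", data_only=True)
ws = wb["Revenue"]

monthly = [row[0].value or 0 for row in ws["B2:B13"]]  # raw monthly figures
reported = ws["B14"].value                             # the agent-filled total

if reported != sum(monthly):
    print(f"Mismatch: sheet says {reported}, raw cells sum to {sum(monthly)}")
```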

[-] hector@lemmy.today 22 points 6 days ago

Except the CEO and executives ultimately responsible will blame their underlings, who will be fired, even though it was an executive-level decision. They didn't get to the pinnacle of corporate governance by admitting mistakes. That's not what they were taught at their Ivy League schools; they were taught to lie, cheat, and steal, and then slander their victims to excuse it.

It was bad before the current president set his outstanding example for the rest of the country. See what being a lying cheating piece of shit gets you? Everything. Nothing matters. We have the wrong people in charge across the board, from business to government to institutions.

[-] wonderingwanderer@sopuli.xyz 37 points 6 days ago

Dumbasses. Mmm, that's good schadenfreude.

[-] CaptPretentious@lemmy.world 26 points 6 days ago

At my workplace, senior management is going all in on Copilot. So much so that at the end of last year they told us to use Copilot for year-end reviews! They even provided a prompt to use and told us to link it to Outlook (not sure why, since our email retention isn't very long)... but whatever.

I tried it out of curiosity, because I had no faith. It started printing out stats for things that never happened: a 35% increase here, a 20% decrease there, blah blah blah. It didn't actually highlight anything I do or did. And I'm banking on a human at least partially reading my review, not just using AI.

If someone reads it, I'm good. If AI reads it, I do wonder if I screwed myself, since senior mgmt is just offloading to AI...

[-] Decq@lemmy.world 16 points 6 days ago

Surely this is just fraud, right? Seeing as they have a board of directors, they probably have shareholders? I feel they should all at least get fired, if not prosecuted. This level of incompetence is just criminal to me.

[-] Jankatarch@lemmy.world 16 points 6 days ago

Are you suggesting we hold people responsible?

[-] ICastFist@programming.dev 10 points 5 days ago

I-want-to-believe.jpg

[-] nonentity@sh.itjust.works 16 points 6 days ago

The output from tools infected with LLMs can intrinsically only ever be imprecise, and should never be trusted.

[-] Snowclone@lemmy.world 17 points 6 days ago

I hope they sue whoever sold it to them. It's not artificial intelligence, it's a machine learning chatbot. They may as well be running their company with a Magic 8 Ball.

[-] titanicx@lemmy.zip 12 points 6 days ago

I fucking love this. It's amazing.

[-] TankovayaDiviziya@lemmy.world 9 points 6 days ago* (last edited 5 days ago)

This is why I hate search engines promoting AI results when you're researching something: they confidently give incorrect responses. I once asked an LLM for sources while using DuckDuckGo, and it just told me there were no sources and that the information was based on broad knowledge. At one point I challenged the AI that it was wrong, but it insisted it wasn't. It turned out to be citing a years-old source written by a different bot long ago. On the other hand, most of you are probably familiar with the occasions when the AI is incorrect, you challenge it, and it relents; it's such a sycophant that it will relent even when you yourself are actually the one who's incorrect. It's Schrödinger's AI.
