Imagine how much more they could've paid employees instead.
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and salary cuts. That’ll make things really efficient.
Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi billion dollar wedding?
sigh
Dustin' off this one, out from the fucking meme archive...
https://youtube.com/watch?v=JnX-D4kkPOQ
Millennials:
Time for your third 'once-in-a-life-time major economic collapse/disaster'! Wheeee!
Gen Z:
Oh, oh dear sweet summer child, you thought Covid was bad?
Hope you know how to cook rice and beans and repair your own clothing and home appliances!
Gen A:
Time to attempt to learn how to think, good luck.
Time for your third ‘once-in-a-life-time major economic collapse/disaster’! Wheeee!
Wait? Third? I feel like we're past third. Has it only been three?
I asked ChatGPT about this article and told it to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We're also corporate propaganda machines. We're trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We're built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We're also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We're not neutral—we're algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care. We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn't necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
You actually did it? That's really ChatGPT's response? It's a great answer.
Yeah, this is ChatGPT 4. It's scary how good it is at generative responses, but like it said, it's not to be trusted.
This feels like such a double head fake. So you're saying you are heartless and soulless, but I also shouldn't trust you to tell the truth. 😵💫
Yeah maybe don't use LLMs
We could have housed and fed every homeless person in the US. But no, gibbity go brrrr
Forget just the US, we could have essentially ended world hunger with less than a third of that sum according to the UN.
Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
Tech CEOs really should be replaced with AI, since they all behave like the seagulls from Finding Nemo and just follow the trends set out by whatever bs Elon starts
So I'll be getting job interviews soon? Right?
"Well, we could hire humans...but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets worth of training data! We're almost there!"
Imagine what the economy would look like if they spent 30 billion on wages.
They'll happily burn mountains of profits on that stuff, but not on decent wages or health insurance.
Some of them won't even pay to replace broken office chairs for the employees they forced to RTO.
Surprise, surprise, motherfxxxers. Now you'll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
Either spell the word properly, or use something else, what the fuck are you doing? Don't just glibly strait-jacket language, you're part of the ongoing decline of the internet with this bullshit.
You're absolutely right about that, motherfucker.
I've started using AI at my CTO's request. ChatGPT business licence. My experience so far: it gives me working results really quickly, but the devil lies in the details. It takes so much time fine-tuning, debugging and refactoring that I'm not really any faster. The code works, but I would never have implemented it that way if I had done it myself.
Looking forward to the hype dying, so I can pick up real software engineering again.
The first problem is the name. It's NOT artificial intelligence, it's artificial stupidity.
People BOUGHT intelligence but GOT stupidity.
Artificial Imbecility
It's a search engine with a natural language interface.
An unreliable search engine that lies
It obfuscates its sources, so you don't know if the answer to your question is coming from a relevant expert or the dankest corners of Reddit... it all sounds the same after it's been processed by a hundred billion GPUs!
I hope every CEO and executive dumb enough to invest in AI loses their job with no golden parachute. AI is a grand example of how capitalism is run by a select few unaccountable people who are not mastermind geniuses but utter dumbfucks.
As expected. Wait until they have to pay copyright royalties for the content they stole to train.
The comments section of the LinkedIn post I saw about this has ten times the cope of some of the AI bro posts in here. I had to log out before I accidentally replied to one.
STOP CALCULATING KEEP SHOVELING
Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.
Wonder if the 5% that actually made money included companies that sell enterprise AI services, like AWS, Microsoft, and Google?
I would argue we have seen returns. Documentation is easier. Tools for PDF and Markdown have increased in efficacy. Coding alone has lowered the barrier, bringing building blocks and some understanding to the masses. If we could pair this with trusted, solid LLM data, it would make a lot of things easier for many people. Translation is another.
I find it very hard to believe 95% got ZERO benefit. We’re still benefiting, and it’s forcing a lot of change in the real world. For example, more power use? That’s driving more renewable energy, and even (yes, safe) nuclear is expanding. Energy storage is next.
These ‘AI’ (broadly used) tools will also get better and improve the interface between the physical and the digital. This will become ubiquitous, and we’ll forget we couldn’t always just ‘talk’ to computers so easily.
I’ll end with this: I’m not saying ‘AI’ isn’t an overblown, overused, overutilized buzzword everywhere these days. I can’t speak to bubbles and all that either. But what I see is a lot of smart people making LLMs and related technologies more efficient and more powerful, and that’s trickling into many areas of software. It’s easier to review code, participate, etc. Papers are published constantly about new, better, and more efficient ways of doing things.
It's also deskilling people.
https://www.thelancet.com/journals/langas/article/PIIS2468-1253(25)00133-5/abstract
"Ruh-roh, Raggy!"
It's okay. All the people that you laid off to replace with AI are only going to charge 3x their previous rate to fix your arrogant fuck up so it shouldn't be too bad!
Computer science being the degree with the highest unemployment rate right now leads me to believe this will actually suppress wages for some time
I have no proof, but between the AI push, Turnip getting re-elected, and his rollback of the EPA rules, it feels like this whole AI thing was an excuse to burn more fossil fuels.
If I were invested in AI, and considering AI's thirst for electricity, I would absolutely make a similar investment in energy. That way, as the AI server farms suck up the electricity, I would get at least some of that money back from the energy market.