The reason we hate AI is because it's not for us. It's developed and controlled by people who want to control us better. It's a tool to benefit capital, and capital always extracts from labour; AI only increases the efficiency of exploitation, because that's what it's for. If we had open-source, public AI development geared toward better delivering social services and managing programs that help people as a whole, we would like it more. Also, none of this LLM shit is actually AI; that's all branding and marketing manipulation, just a reminder.
There was a thread of people pointing out biases that exist on Lemmy, and some commenters obviously mentioned anti-AI people. Cue the superiority complex (cringe).
Some of these people actually believe UBI will become a thing for people who lose their jobs due to AI, meanwhile the billionaire class is actively REMOVING benefits for the poor to further enrich themselves.
What really gets me is when people KNOW what the hell we're talking about, but then mention the 1% use case where AI is actually useful (for STEM) and act like that's what we're targeting. Like no, motherfucker. We're talking about the AI that's RIGHT IN FRONT OF US, contributing to a future where we're all braindead, AI-slop-dependent, talentless husks of human beings. Not to mention unemployed now.
It's actually a useful tool... if it weren't so often used for such dystopian purposes.
But it's not just AI. All services, systems, etc. So many are just money grabs, hate, opinion-making, or general manipulation. There are many things I hate more about "modern" society than how LLMs are used.
I like the Lemmy mindset far more than Reddit's, but on the AI topic specifically, people here are brainlessly focused on the tool instead of the people using the tool.
It's extremely wasteful: inefficient in the extreme on both electricity and water. It's being used by capitalists like a scythe, reaping millions of jobs with no support or backup plan for its victims. Just a fuck-you and a quip about bootstraps.
It's cheapening all creative endeavors. Why pay a skilled artist when your shitbot can excrete some slop?
What's not to hate?
As with almost all technology, AI tech is evolving into different architectures that aren't wasteful at all. There are now powerful models we can run that don't even require a GPU, which is where most of that power was needed.
The one thing wrong with your take is the lack of vision as to how technology changes and evolves over time. We had computers the size of rooms running processes that our mobile phones now run hundreds of times more efficiently and powerfully.
Your other points are valid, people don't realize how AI will change the world. They don't realize how soon people will stop thinking for themselves in a lot of ways. We already see how critical thinking drops with lots of AI usage, and big tech is only thinking of how to replace their staff with it and keep consumers engaged with it.
It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of energy and a machine that cost thousands of dollars, and still not being able to beat an average club player.
Now a phone will cream the world's best at chess, and even at Go.
Give it twenty years to become good. It will certainly do more with smaller, more efficient models as it improves.
If you want to argue in favor of your slop machine, you're going to have to stop making false equivalences, or at least understand how they're false. You can't gain ground on things that are just tangential.
A computer in 1980 was still a computer, not a chess machine. It did general purpose processing where it followed whatever you guided it to. Neural models don't do that though; they're each highly specialized and take a long time to train. And the issue isn't with neural models in general.
The issue is neural models being purported to do things they functionally cannot, because that's not how models work. Computing is complex, code is complex; adding new functionality that operates off fixed inputs alone is hard. And now we're supposed to buy that something that creates word-relationship vector maps can create something new?
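To make "word relationship vector maps" concrete: language models represent words as points in a vector space, and "relatedness" is just geometric closeness, typically measured with cosine similarity. The vectors below are made-up toy values for illustration, not output from any real model:

```python
# Toy sketch of a word-relationship vector map: each word is a vector,
# and similarity between words is the cosine of the angle between them.
# These 3-dimensional vectors are invented for illustration only; real
# embeddings have hundreds or thousands of dimensions.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1.0
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```

The point being made above is that everything the model "knows" is relationships of this kind, learned from existing text, which is why critics question whether recombining them counts as creating something new.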
For code generation, it's the equivalent of copying and pasting from Stack Overflow with a find/replace, or just copying multiple projects together. It isn't something new, it's kitbashing at best, and that's assuming it all works flawlessly.
With art, it's taking creation away from people, and jobs with it. I like how you ignored literally every point raised except the one you could dance around with a tangent. All these CEOs are like "no one likes creating art or music". No, THEY just don't want to spend time creating, nor pay someone who does enjoy it. I love playing with 3D modeling and learning how to make the changes I want consistently. I like learning more about painting when texturing models and taking time to create intentional masks. I like taking time when I'm baking to learn and create; otherwise I could just buy a Duncan Hines box mix and get something that's fine, but not what I can make when I take the time to learn.
And I love learning guitar. I love feeling that slow growth of skill as I find I can play cleaner the more I do. And when I can close my eyes and strum a song, there's a tremendous feeling from making this beautiful instrument sing like that.
Oh my God, that's perfect. It's kitbashing. That's exactly how it feels.
Stockfish can't play Go. The resources you spent making the chess program didn't port over.
In the same way you can use a processor to run a completely different program, you can use a GPU to run a completely different model.
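The hardware-generality point can be sketched in a few lines: very different model architectures all reduce to the same primitive (matrix multiplication), which is exactly what a GPU accelerates. The weights below are arbitrary toy values, purely to show that both "models" run on the one shared operation:

```python
# Sketch of why one piece of hardware runs very different models: a bare
# linear map and a small two-layer network both bottom out in the same
# matmul primitive. All weight values here are made up for illustration.

def matmul(A, B):
    """Plain-Python matrix multiply; on a GPU this is the accelerated op."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu(M):
    """Elementwise non-linearity used between layers."""
    return [[max(0.0, x) for x in row] for row in M]

x = [[1.0, 2.0]]  # one input with two features

# "Model A": a single linear map.
W_a = [[0.5, -1.0], [0.25, 0.75]]
out_a = matmul(x, W_a)

# "Model B": a completely different architecture, same primitive.
W1 = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
W2 = [[1.0], [-1.0], [0.5]]
out_b = matmul(relu(matmul(x, W1)), W2)
```

Swapping the weight matrices swaps the model; the hardware underneath does not change, which is the analogy to a processor running different programs.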
So if current models can't do it, you'd be foolish to bet against future models being able to do it in twenty years.
Show me the chess machine that caused rolling brown outs and polluted the air and water of a whole city.
I'll wait.
Servers have been eating up a significant portion of electricity for years before AI. It's whether we get something useful out of it that matters.
Not even remotely close to this scale. At most you could compare the energy usage to the miners during the crypto craze, but I'm pretty sure even that is a tiny fraction of what's going on right now.
Not the same. The underlying tech of LLMs has massively diminishing returns. You can already see it, and could see it a year ago if you looked, both in computing power and in required data; and we do not have enough data, we literally have not created enough in all of history.
This is not "AI", it's a profoundly wasteful capitalist party trick.
Please get off the slop and re-build your brain.
That's the argument Paul Krugman used to justify his opinion that the internet peaked in 1998.
You still need to wait for AI to crash, for a bunch of research to happen, and for the next wave to come. You can't judge the internet by the dot-com crash; it became much more impactful later on.
No. No I don't. I trust Alan Turing.
NB: Alan Turing famously invented ChatGPT
One of the major contributors to early versions. Then they did the math and figured out it was a dead end. Yes.
Also, one of the other contributors (Weizenbaum, I think?) pointed out that not only was it stupid, it was dangerous: it made people deranged, fanatical devotees impervious to reason, who would discard their entire intellect and education to cult about this shit, in a madness no logic could breach. And that's just from ELIZA.
It seems like you're implying that models will follow Moore's law, but as someone working on "agents", I don't see that happening. There's a limit to how much can be encoded while still producing things that look like coherent responses. Where we would get reliably exponential amounts of training data is another issue. We may get "AI", but it isn't going to be based on LLMs.
You can't predict how the next twenty years of research improves on the current techniques because we haven't done the research.
Is it going to be specialized agents? Because you don't need a lot of data to do one task well. Or maybe it's a lot of data but you keep getting more of it (robot movement? stock market data?)
Twenty years is a very long time, and "good" is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.
There will inevitably be a crash in AI, and people will forget about it. Then some people will work on innovative techniques and make breakthroughs without fanfare.
It's corporate controlled; it's a way to manipulate our perception; it's all appearance, no substance; it's an excuse to hide incompetence behind an algorithm; it's cloud-service oriented; its output is highly unreliable yet hard to argue against for the uninformed. Seems about right.
And it will not be argued with. No appeal, no change of heart. Which is why anyone using it to mod or as customer service needs to be set on fire.