this post was submitted on 13 Jun 2024
105 points (100.0% liked)
Linux
From Wikipedia, the free encyclopedia
Linux is a family of open source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution (or distro for short).
Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy.
Rules
- Posts must be relevant to operating systems running the Linux kernel. GNU/Linux or otherwise.
- No misinformation
- No NSFW content
- No hate speech, bigotry, etc
I won't rehash the arguments around "AI" that others are best placed to make.
My main issue is that "AI" as a term is basically a marketing one, used to convince people that these tools do something they don't, and it's causing real harm. It's redirecting resources and attention onto a very narrow subset of tools, which are replacing other, less intensive tools. These tools have significant impacts (in the middle of an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine learning techniques, but they are being drowned out by a hype train determined to make the general public think that we have, or are near, Data from Star Trek.
Additionally, as others have said, the current state of "AI" has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow, and co-opt data that isn't theirs to build something that contains that data but is claimed as their copyright. Some of this data is intensely personal and sensitive, and the original intent behind sharing it was not to train a model that may, in certain circumstances, spit that data out verbatim.
Lastly, since you use the term Luddite, it's worth actually engaging with what that movement was about. Whilst it's pitched now as a generic anti-technology backlash, it was in fact a movement of people who saw what the priorities and choices embodied in the new technology meant for them: the people who didn't own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?
Personally, I can't wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.
It's a surprisingly good comparison especially when you look at the reactions: frame breaking vs data poisoning.
The problem isn't progress; the problem is that some of us disagree with the idea that what's being touted is actual progress. The things LLMs are actually good at (like language translation) they've been doing for years; the rest is so inexact it can't be trusted.
I can't trust any LLM-generated code because it lies about what it's doing, so I need to verify everything it generates anyway, in which case it's easier to write it myself. I keep trying it, and it looks impressive until it ends up as a way worse version of something I could have already written.
I assume that it's the same way with everything I'm not an expert in. In which case it's worse than useless to me, I can't trust anything it says.
The only thing I can use it for is to tell me things I already know and that basically makes it a toy or a game.
That's not even getting into the security implications of giving shitty software access to all your sensitive data etc.
If you are so keen on correctness, please don't say "LLMs are lying". Lying is a conscious act of deception, and LLMs are not capable of that. That's exactly the problem: they don't think, they just assemble text with probability. If they could lie, they could also produce real answers.
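To make "assemble with probability" concrete, here's a toy sketch of weighted next-word sampling. Everything in it (the words, the counts) is invented purely for illustration; a real LLM learns billions of parameters over subword tokens, not a word lookup table, but the core move is the same: sample a continuation, with no beliefs or intent anywhere in the process.

```python
import random

# Toy bigram "language model": for each word, the possible next words
# and how often each followed it. Counts are made up for illustration.
BIGRAMS = {
    "the": {"code": 5, "answer": 3},
    "code": {"works": 4, "fails": 6},
    "answer": {"is": 7},
    "is": {"wrong": 2, "right": 2},
}

def next_word(word: str, rng: random.Random) -> str:
    # Sample a continuation in proportion to its observed frequency.
    # Nothing here "knows" or "intends" anything, which is why calling
    # a bad sample a "lie" ascribes a mental state the process lacks.
    options = BIGRAMS[word]
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
sentence = ["the"]
while sentence[-1] in BIGRAMS:
    sentence.append(next_word(sentence[-1], rng))
print(" ".join(sentence))
```

Whether the generated sentence ends in "works" or "fails", "right" or "wrong", is decided by the dice, not by any model of the truth.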
I've never heard anyone explicitly say this, but I'm sure a lot of people (i.e. management) think that AI is a replacement for static code. If you have a component with constantly changing requirements, then it can make sense, but don't ask an LLM to perform a process that's done every single day in the exact same way. Chief among my AI concerns is the amount of energy it uses. It feels like we could mostly wean off of carbon-emitting fuels in 50 years, but if energy demand skyrockets we'll be pushing those dates back by decades.
My concern with AI is also its energy usage. There's a reason OpenAI has tons of datacenters, yet people think it doesn't take much because it's "free".
Right, another aspect of the Luddite movement is that they lost. They failed to stop the spread of industrialization and machinery in factories.
Screaming at a train moving at 200 km/h, hoping it will stop.
But that doesn't mean pushback is doomed to fail this time. "It happened once, therefore it follows that it will happen again" is a faulty generalization, not an argument.
Also, it's not just screaming at a train. There's actual litigation right now (and potential litigation) from some big names to rein in the capitalists exploiting the lack of regulation around LLMs. Each suit is not necessarily for a "luddite" purpose, but collectively the results may effectively achieve the same thing.
You're right but realistically it will fail. The voices speaking against it are few and largely marginalised, with no money or power. There will probably be regulations but it will not go away.
Right, but like I said, there are several lawsuits (and threatened lawsuits) right now that might achieve the same goals as those speaking out against how it's currently used.
I don't think anyone here is arguing for LLMs to go away completely; they just want to be compensated fairly for their work (or else to restrict the use of said work).
You misunderstand the Luddite movement. They weren't anti-technology; they were against capitalist exploitation.
The 1810s: The Luddites act against destitution
They probably wouldn't be such a laughing stock if they had succeeded.
All we have are words or violence.
Oh. So modern presentation of the luddite movement is also propaganda?