On Exceptions (pawb.social)

Source (Bluesky)

[-] kartoffelsaft@programming.dev 117 points 4 months ago

I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.

But also:

Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who'll accept you instead. It's disgustingly twitter-brained. It's a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.

Consider someone who has had some small but valued usage of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? "That time you used ChatGPT to recall the word 'verisimilar' makes you an evil person." is what they hear. And at that moment you've cut that person off from ever actually considering your opinion ever again. Even if you're right that's not healthy.

[-] BigDiction@lemmy.world 26 points 4 months ago

I’m what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.

[-] azertyfun@sh.itjust.works 14 points 4 months ago

You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers, people who never gave a shit about IP law suddenly pretending that they care about copyright, the whole water use thing which is closer to myth than fact, or discussions on energy usage in general.

Everyone can pick up on the vibes being off with the mainstream discourse around AI, but many can't properly articulate why and they solve that cognitive dissonance with made-up or comforting bullshit.

This makes me quite uncomfortable because that's the exact same pattern of behavior we see from reactionaries, except that what weirds them out for reasons they can't or won't say explicitly isn't tech bros but immigrants and queer people.

[-] kopasz7@sh.itjust.works 82 points 4 months ago

My issues with gen AI are fundamentally twofold:

  1. Who owns and controls it (billionaires and entrenched corporations)

  2. How it is shoehorned into everything (decision-making processes, human-to-human communication, my coffee machine)

I cannot wait until the check is finally due and the AI bubble pops, folding these digital snake oil sellers' house of cards.

[-] BlameTheAntifa@lemmy.world 22 points 4 months ago* (last edited 4 months ago)

When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to. The problem, as is always the case, is that capitalism immediately turned it into a tool of theft and abuse. The theft of training data, the power requirements, selling it for profit, competing against those whose creations were used for training without permission or attribution, the unreliability and untrustworthiness: so many ethical and technical problems.

I still don’t have a problem with using the corpus of all human knowledge for machine learning, in theory, but we’ve ended up heading in a horrible, dystopian direction that will have no good outcomes. As we hurtle toward corporate controlled AGI with no ethical or regulatory guardrails, we are racing toward a scenario where we will be slavers or extinct, and possibly both.

load more comments (2 replies)
[-] iAmTheTot@sh.itjust.works 12 points 4 months ago

You really take no issue with how they were all trained?

[-] storm 13 points 4 months ago

Not OP but still gonna reply. Not really? The notion that someone can own (and be entitled to control) a portion of culture is absurd. It's very frustrating to see so many people take issue with AI as "theft", as if intellectual property were something that we should support and defend instead of being the actual tool for stealing artists' work ("property is theft" and all that). And obviously data centers are not built to be environmentally sustainable (not an expert, but I assume this could be done if they cared to do so). That said, using AI to do art so humans can work is the absolute peak of stupid fucking ideas.

[-] baahb@lemmy.dbzer0.com 10 points 4 months ago

The way they were trained is the way they were trained.

I don't mean to say that the ethics don't matter, but you are talking as though this isn't already present tense.

The only way to go back is basically a global EMP.

What do you actually propose that is a realistic response?

This is an actual question. To this point, the only advice I've seen come from the anti-AI crowd is "don't use it. It's bad!" And that is simply not practical.

You all sound like the people who think we are actually able to get rid of guns entirely.

[-] iAmTheTot@sh.itjust.works 16 points 4 months ago

I'm not sure your "this is the present" argument holds much water with me. If someone stole my work and made billions off it, I'd want justice whether it was one day or one decade later.

I also don't think "this is the way it is, suck it up" is a good argument in general. Nothing would ever improve if everyone thought like that.

Also, not practical? I don't use genAI and I'm getting along just fine.

[-] queermunist@lemmy.ml 15 points 4 months ago

Okay, you know those gigantic data centers that are being built that are using all our water and electricity?

Stop building them.

Seems easy.

[-] gmtom@lemmy.world 64 points 4 months ago

I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.

This work has already saved thousands of peoples lives.

But good to know you anti-AI people have your one-dimensional, zero-nuance take on the subject and are now doing moral purity tests on it and dick-measuring to see who has the loudest, most extreme hatred for AI.

[-] starman2112@sh.itjust.works 39 points 4 months ago* (last edited 4 months ago)

Nobody has a problem with this, it's generative AI that's demonic

[-] brucethemoose@lemmy.world 32 points 4 months ago* (last edited 4 months ago)

Generative AI is a meaningless buzzword for the same underlying technology, as I kinda ranted on below.

Corporate enshittification is what's demonic. When you say fuck AI, you should really mean "fuck Sam Altman"

[-] monotremata@lemmy.ca 29 points 4 months ago

I mean, not really? Maybe they're both deep learning neural architectures, but one has been trained on an entire internetful of stolen creative content and the other has been trained on ethically sourced medical data. That's a pretty significant difference.

[-] HalfSalesman@lemmy.world 11 points 4 months ago* (last edited 4 months ago)

Generative AI uses the same technology. It learns when trained on a large data set.

[-] brucethemoose@lemmy.world 18 points 4 months ago* (last edited 4 months ago)

All this is being stoked by OpenAI, Anthropic and such.

They want the issue to be polarized and remove any nuance, so it’s simple: use their corporate APIs, or not. Anything else is ”dangerous.”

What they’re really scared of is awareness of locally runnable, ethical, and independent task-specific tools like yours. Those don’t make them any money. Stirring up “fuck AI” does, because that’s a battle they know they can win.

[-] ysjet@lemmy.world 10 points 4 months ago

Those are not GPTs or LLMs. Fuck off with your bullshit trying to conflate the two.

[-] gmtom@lemmy.world 18 points 4 months ago

We actually do use Generative Pre-trained Transformers as the base for a lot of our tech. So yes they are GPTs.

And even if they weren't GPTs, this is a post saying all AI is bad and that there are literally no exceptions.

[-] theunknownmuncher@lemmy.world 38 points 4 months ago* (last edited 4 months ago)

the fact that it is theft

There are LLMs trained using fully open datasets that do not contain proprietary material... (CommonCorpus dataset, OLMo)

the fact that it is environmentally harmful

There are LLMs trained with minimal power (typically the same ones as above, as these projects cannot afford as many resources), and local LLMs use significantly less power than a toaster or microwave...

the fact that it cuts back on critical, active thought

This is a usecase problem. LLMs aren't suitable for critical thinking or decision making tasks, so if it's cutting back on your "critical, active thought" you're just using it wrong anyway...

The OOP genuinely doesn't know what they're talking about and is just reacting to sensationalized rage bait on the internet lmao

[-] csh83669@programming.dev 19 points 4 months ago

Saying it uses less power than a toaster is not saying much. Yes, it uses less power than a thing that literally turns electricity into pure heat… but that’s sort of a requirement for toast. That’s still a LOT of electricity. And it’s not required. People don’t need to burn down a rainforest to summarize a meeting. Just use your earballs.

[-] theunknownmuncher@lemmy.world 9 points 4 months ago* (last edited 4 months ago)

Saying it uses less power than a toaster is not saying much

Yeah but we're talking a fraction of 1%. A toaster uses 800-1500 watts for minutes, local LLM uses <300 watts for seconds. I toast something almost every day. I'd need to prompt a local LLM literally hundreds of times per day for AI to have a higher impact on the environment than my breakfast, only considering the toasting alone. I make probably around a dozen-ish prompts per week on average.
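The comparison above works out as simple back-of-the-envelope arithmetic. The wattages and durations below are the rough figures from this comment, not measurements:

```python
# Rough energy comparison: one toasting vs. one local-LLM prompt.
# All figures are illustrative assumptions taken from the comment above.

def energy_wh(watts, seconds):
    """Energy in watt-hours for a device drawing `watts` for `seconds`."""
    return watts * seconds / 3600

toast = energy_wh(1200, 3 * 60)   # ~3 minutes at 1200 W
prompt = energy_wh(300, 10)       # ~10 seconds at 300 W

print(f"one toast:  {toast:.1f} Wh")               # 60.0 Wh
print(f"one prompt: {prompt:.2f} Wh")              # 0.83 Wh
print(f"prompts per toast: {toast / prompt:.0f}")  # 72
```

Under these assumptions a single toasting buys you on the order of dozens of local prompts, which is the point being made.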

That’s still a LOT of electricity.

That's exactly my point, thanks. All kinds of appliances use loads more power than AI. We run them without thinking twice, and there's no anti-toaster movement on the internet claiming there is no ethical toast and you're an asshole for making toast without exception. If a toaster uses a ton of electricity and is acceptable, while a local LLM uses less than 1% of that, then there is no argument to be made against local LLMs on the basis of electricity use.

Your argument just doesn't hold up and could be applied to literally anything that isn't "required". Toast isn't required, you just want it. People could just stop playing video games to save more electricity, video games aren't required. People could stop using social media to save more electricity, TikTok and YouTube's servers aren't required.

People don’t need to burn down a rainforest to summarize a meeting.

Strawman

[-] PixelatedSaturn@lemmy.world 8 points 4 months ago

That's nothing. People aren't required to eat so much meat, or even eat so much food.

I also don't like this energy argument from the anti-AI side, when everything else in our lives already consumes so much.

[-] hpx9140@fedia.io 14 points 4 months ago

You're implying the edge cases you presented are the majority being used?

[-] theunknownmuncher@lemmy.world 21 points 4 months ago* (last edited 4 months ago)

No, and that's irrelevant. Their post is explicitly not about the majority, but about exceptions/edge cases.

I am responding to what they posted (I even quoted them), showing that the position that "there is no ethical use for generative AI" and that there are no exceptions is provably false.

I didn't think it needed to be said because it's not relevant to this discussion, but: the majority of AI sucks on all fronts. It's bad for intellectual property, it's bad for the environment, it's bad for privacy, it's bad for people's brains, and it's bad at what it's used for.

All of these problems are not inherent to AI itself, and instead are problems with the massive short-term-profit-seeking corporations flush with unimaginable amounts of investor cash (read: unimaginable expectations and promises that they can't meet) that control the majority of AI. Once again capitalism is the real culprit, and fools like the OOP will do these strawman mental gymnastics and spread misinformation to defend capitalism at all costs.

[-] Atlas_@lemmy.world 35 points 4 months ago

Do y'all hate chess engines?

If yes, cool.

If no, I think you hate tech companies more than you hate AI specifically.

[-] princessnorah 27 points 4 months ago* (last edited 4 months ago)

The post is pretty clearly about genAI; I think you're just choosing to ignore that part. There's plenty of really awesome machine learning technology that helps with disabilities, doesn't rip off artists, and isn't environmentally deleterious.

[-] brucethemoose@lemmy.world 13 points 4 months ago* (last edited 4 months ago)

The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.

So is trying to bucket them based on copyright violation: there are very powerful, open dataset, more or less reproducible LLMs trained and runnable on a trivial amount of electricity you can run on your own PC right now.

Same with use cases. One can use embedding models or tiny ResNets to kill. People do, in fact, as with Palantir's face recognition models. At the other extreme, LLMs can be totally task-focused and useless at anything else.

The distinction is corporate/enshittified vs not. Like Reddit vs Lemmy.

[-] starman2112@sh.itjust.works 18 points 4 months ago* (last edited 4 months ago)

The distinction between AI and GenAI is meaningless; they are buzzwords for the same underlying tech.

You know this is a stupid take, right? You know that chatgpt and Stockfish, while both being forms of "artificial intelligence," are wildly incomparable, yeah? This is like saying "the distinction between an ICBM and the Saturn-V is meaningless, because they both use the same underlying tech"

[-] Probius@sopuli.xyz 9 points 4 months ago* (last edited 4 months ago)

That first claim makes no sense and you make no argument to back it up. The distinction is actually quite meaningful; generative AI generates new samples from an existing distribution, be it text, audio, images, or anything else. Other forms of AI solve numerous problems in different ways, such as identifying patterns we can't or inventing novel and more optimal solutions.
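The distinction can be sketched with a toy example (purely illustrative, not from the comment above): a generative model learns a distribution and draws new samples from it, while a discriminative model only maps inputs to labels.

```python
import random
import statistics

data = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]

# "Generative": fit a normal distribution to the data, then sample a
# brand-new point from it -- an output that was never in the dataset.
mu = statistics.mean(data)
sigma = statistics.stdev(data)
new_sample = random.gauss(mu, sigma)

# "Discriminative": map an input to a label against a learned boundary.
def classify(x, threshold=5.0):
    return "high" if x > threshold else "low"

print(f"generated: {new_sample:.2f} -> {classify(new_sample)}")
```

Real systems replace the normal distribution with billion-parameter networks, but the generate-vs-label split is the same.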

[-] theunknownmuncher@lemmy.world 20 points 4 months ago

Yup, as always, none of these problems are inherent to AI itself, they're all problems with capitalism.

[-] Randomgal@lemmy.ca 8 points 4 months ago

I had to check to make sure I was in the right app. Rational discussion on my Lemmy? No way.

But yes. The machine can't take responsibility for shit. You hate the people and what they are doing to you. If AI didn't exist, they'd do it some other way.

[-] Limonene@lemmy.world 30 points 4 months ago

Generative AI and their outputs are derived products of their training data. I mean this ethically, not legally; I'm not a copyright lawyer.

Using the output for personal viewing (advice, science questions, or jacking off to AI porn you requested) is weird but ethical. It's equivalent to pirating a movie to watch at home.

But as soon as you show someone else the output, I consider it theft without attribution. If you generate a meme image, you're failing to attribute the artists whose work trained the AI without permission. If you generate code, that code infringes the numerous open source licenses of the training data, by failing to attribute it.

Even a simple lemmy text post generated by AI is derived from thousands of unattributed novels.

[-] TheGuyTM3@lemmy.ml 19 points 4 months ago* (last edited 4 months ago)

I'm just sick of all this because we've given "AI" too much meaning.

I don't like generative AI tools like LLMs, image generators, voice, video etc. because I see no interest in them; I think they give bad habits, and they are not understood well by their users.

Yesterday again I had to correct my mother because she told me some fun fact she had learnt from ChatGPT (that was wrong), and she refused to listen to me because "ChatGPT does plenty of research on the net so it should know better than you".

About the claim that "it will replace artists and destroy the art industry", I don't believe that (even if I made the choice to never use it), because it will forever be a tool. It's practical if you want a cartoony monkey image for your article (you meanie stupid journalist), but you can't say "make me a piece of art" and then put it in a museum.

Making art myself, I hate gen AI slop from the depths of my heart, but I'm obligated to admit that. (Let's not forget how it trains on copyrighted media, uses a shitton of energy, and gives no credit.)

AI in other fields, like medicine, automatic subtitles, or engineering, is fine by me. It won't give bad habits, it is well understood by its users, and it is truly beneficial, as in being more efficient at saving lives than humans, or simply being helpful to disabled people.

TL;DR: AI in general is a tool. Gen AI is bad as a powerful tool for everyone's use, like it would be bad to give everyone a helicopter (even if it improves mobility). AI is nonetheless a very nice tool that can save lives and help disabled people IF used and understood correctly and fairly.

[-] vivalapivo@lemmy.today 18 points 4 months ago

First of all, intellectual property rights do not protect the author. I'm the author of a few papers and a book, and I do not hold intellectual property rights on any of them; like most authors, I had to give them to the publishing house.

Secondly, your personal carbon footprint is bullshit.

Thirdly, everyone in the picture is an asshole.

[-] ruuster13@lemmy.zip 17 points 4 months ago

AI is a marketing term. Big Tech stole ALL data. All of it. The brazen piracy is a sign they feel untouchable. We should touch them.

[-] dandelion 17 points 4 months ago

I do use AI (mostly like Google), but I don't think it's justified or OK, lol - I'm the problem, and I know it.

[-] PixelatedSaturn@lemmy.world 17 points 4 months ago

I like to read the anti-AI stuff, because ultimately a lot of the criticism is valid. But by god is there a lot of adolescent whining and hyperbole.

[-] kibiz0r@midwest.social 16 points 4 months ago* (last edited 4 months ago)

It’s so surreal when someone posts a meme about That Guy™ doing That Thing™ and then all of a sudden That Guy™ shows up in the comments, doing That Thing™

Like, can I get your autograph? You’re famous, bro!

[-] anarchiddy@lemmy.dbzer0.com 12 points 4 months ago

I sure am glad that we learned our lesson from the marketing campaigns in the 90's that pushed consumers to recycle their plastic single-use products to deflect attention away from the harm caused by their ubiquitous use in manufacturing.

Fuck those AI users for screwing over small creators and burning down the planet though. I see no problem with this framing.

[-] ZMoney@lemmy.world 9 points 4 months ago

So I'll be honest. I use GPT to write Python scripts for my research. I'm not a coder and I don't want to be one, but I do need to model data sometimes and I find it incredibly useful that I can tell it something in English and it can write modeling scripts in Python. It's also a great way to learn some coding basics. So please tell me why this is bad and what I should do instead.
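For what it's worth, the scripts in question tend to look something like this hypothetical sketch: a least-squares line fit in plain Python (the data and names are illustrative, not from the original comment):

```python
# Fit a straight line y = m*x + b to measured data by least squares.
# Data values here are made up for illustration.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    m = sxy / sxx
    return m, mean_y - m * mean_x

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
m, b = fit_line(xs, ys)
print(f"slope={m:.2f}, intercept={b:.2f}")  # slope=1.99, intercept=1.04
```

Short, self-contained glue code like this is exactly the niche where asking an LLM in English and sanity-checking the output works reasonably well.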

[-] khaleer@sopuli.xyz 9 points 4 months ago* (last edited 4 months ago)

I would not want to get close to a bike repaired by someone who is using AI to do it. Like what the fuck xd I am not surprised he is unable to make his code work then xddd

[-] pixxelkick@lemmy.world 8 points 4 months ago* (last edited 4 months ago)

AI saved my pet's life. You won't convince me it's 100% all bad and there's no "right" way to use it.

The way it is trained isn't intellectual theft imo.

It only becomes intellectual theft if it is used to generate something that then competes with and takes away profits from the original creators.

Thus the intellectual theft only kicks in at generation time, but the onus is still on the AI owners for not preventing it

However if I use AI to generate anything that doesn't "compete" with anyone, then "intellectual theft" doesn't matter.

For example, I used it to assist with diagnosing a serious issue my pet was having 2 months ago that was stumping even our vet, and it got the answer right, which surprised our vet when we asked them to check a very esoteric possibility (which they dubiously checked, and they were shocked to find something there).

They asked us how on earth we managed to guess to check that place of all things, how we could have known. As a result, we caught the issue very early when it was easy to treat and saved our pet's life.

It was a gallbladder infection, and her symptoms had like 20 other more likely causes individually.

But when I punched all her symptoms into GPT, every time, it asserted it was likely the gallbladder. It had found some papers on other animals and mammals describing how gallbladder infections can, in rare cases, cause that specific combo of symptoms, and encouraged us to check it out.

If you think "intellectual theft" still applies here, despite it being used to save an animal's life, then you are the asshole. No one "lost" profit or business to this, no one's intellectual property was infringed, and consuming the same amount of power it takes to cook 1 pizza in my oven to save my pet's life is a pretty damn good trade, in my opinion.

So, yes. I think I used AI ethically there. Fight me.

[-] jjjalljs@ttrpg.network 9 points 4 months ago

Regular search could have also surfaced that information

this post was submitted on 03 Aug 2025
492 points (100.0% liked)

Fuck AI

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.