On Exceptions (pawb.social)

Source (Bluesky)

[-] vivalapivo@lemmy.today 17 points 2 days ago

First of all, intellectual property rights do not protect the author. I'm the author of a few papers and a book, and I don't hold the intellectual property rights to any of them - like most authors, I had to sign them over to the publishing house.

Secondly, your personal carbon footprint is bullshit.

Thirdly, everyone in the picture is an asshole.

[-] gmtom@lemmy.world 64 points 3 days ago

I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.

This work has already saved thousands of people's lives.

But good to know you anti-AI people have your one-dimensional, zero-nuance take on the subject and are now doing moral purity tests on it and dick-measuring to see who has the loudest, most extreme hatred for AI.

[-] starman2112@sh.itjust.works 39 points 3 days ago* (last edited 3 days ago)

Nobody has a problem with this, it's generative AI that's demonic

[-] HalfSalesman@lemmy.world 11 points 2 days ago* (last edited 2 days ago)

Generative AI uses the same technology. It learns when trained on a large data set.

[-] brucethemoose@lemmy.world 32 points 3 days ago* (last edited 3 days ago)

Generative AI is a meaningless buzzword for the same underlying technology, as I kinda ranted on below.

Corporate enshittification is what's demonic. When you say fuck AI, you should really mean "fuck Sam Altman"

[-] monotremata@lemmy.ca 29 points 3 days ago

I mean, not really? Maybe they're both deep learning neural architectures, but one has been trained on an entire internetful of stolen creative content and the other has been trained on ethically sourced medical data. That's a pretty significant difference.

[-] AeonFelis@lemmy.world 6 points 2 days ago* (last edited 2 days ago)

Generative AI is a meaningless buzzword for the same underlying technology

What? An AI that can "detect respiratory illnesses in X-rays and MRI scans" is not generative. It does not generate anything. It's a discriminative AI. Sure, the theories behind these technologies have many things in common - but I wouldn't call them "the same underlying technology".

[-] gmtom@lemmy.world 3 points 2 days ago

It is literally the exact same technology. If I wanted to, I could turn our X-ray product into an image generator in less than a day.
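
For what it's worth, the overlap is real at the architecture level: the same feature-extraction backbone can feed either a discriminative head (class scores) or a generative head (pixels). A toy PyTorch sketch, with made-up shapes, nothing to do with our actual product:

```python
# Toy sketch: one shared convolutional backbone, two different heads.
import torch
import torch.nn as nn

backbone = nn.Sequential(              # shared feature extractor
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
)

classifier_head = nn.Linear(16 * 8 * 8, 2)       # discriminative: illness yes/no
generator_head = nn.Linear(16 * 8 * 8, 64 * 64)  # generative: an output image

x = torch.randn(1, 1, 64, 64)                    # stand-in for a scan
feats = backbone(x)
print(classifier_head(feats).shape)              # torch.Size([1, 2])
print(generator_head(feats).reshape(1, 1, 64, 64).shape)  # torch.Size([1, 1, 64, 64])
```

Same training machinery (backprop on a big dataset), different objective and head.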

[-] Corelli_III@midwest.social 5 points 2 days ago

nobody is trashing Visual Machine Learning to assist in medical diagnostics

cool strawman though, i like his little hat

[-] gmtom@lemmy.world 5 points 2 days ago

No, when you literally say "Fuck AI, no exceptions" you are very, very explicitly covering all AI in that statement.

[-] Corelli_III@midwest.social 3 points 2 days ago

what do you think visual machine learning applied to medical diagnostics is exactly

does it count as "ai" if i could teach an 11th grader how to build it, because it's essentially statistically filtering legos

don't lose the thread sportschampion

[-] gmtom@lemmy.world 2 points 1 day ago

Well, most of my colleagues have PhDs or MDs, so good luck teaching an 11th grader to do it.

[-] brucethemoose@lemmy.world 18 points 3 days ago* (last edited 3 days ago)

All this is being stoked by OpenAI, Anthropic and such.

They want the issue to be polarized and stripped of any nuance, so it's simple: use their corporate APIs, or don't. Anything else is "dangerous."

What they're really scared of is awareness of locally runnable, ethical, and independent task-specific tools like yours. That doesn't make them any money. Stirring up "fuck AI" does, because that's a battle they know they can win.

[-] kartoffelsaft@programming.dev 117 points 3 days ago

I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.

But also:

Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who'll accept you instead. It's disgustingly twitter-brained. It's a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.

Consider someone who has had some small but valued usage of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? "That time you used ChatGPT to recall the word 'verisimilar' makes you an evil person." is what they hear. And at that moment you've cut that person off from ever actually considering your opinion ever again. Even if you're right that's not healthy.

[-] BigDiction@lemmy.world 25 points 3 days ago

I'm what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.

[-] ZMoney@lemmy.world 9 points 2 days ago

So I'll be honest. I use GPT to write Python scripts for my research. I'm not a coder and I don't want to be one, but I do need to model data sometimes and I find it incredibly useful that I can tell it something in English and it can write modeling scripts in Python. It's also a great way to learn some coding basics. So please tell me why this is bad and what I should do instead.
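
For a sense of scale, the scripts in question look something like this - a minimal sketch of a typical curve-fitting task (synthetic data and a made-up model, not my actual research):

```python
# Minimal sketch of a GPT-style modeling script: fit an exponential decay
# to noisy measurements and report the fitted parameters.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, k, c):
    """Model: a * exp(-k * t) + c."""
    return a * np.exp(-k * t) + c

# Synthetic stand-in for real measurements.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = decay(t, 2.5, 0.8, 0.3) + rng.normal(0, 0.05, t.size)

# Fit and report each parameter with its 1-sigma uncertainty.
params, cov = curve_fit(decay, t, y, p0=(1.0, 1.0, 0.0))
errors = np.sqrt(np.diag(cov))
for name, value, err in zip("akc", params, errors):
    print(f"{name} = {value:.3f} +/- {err:.3f}")
```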

[-] Tartas1995@discuss.tchncs.de 2 points 2 days ago

I think sometimes it is good to replace words to reevaluate a situation.

Would "I don't want to be one" be a good argument for using ai image generation?

[-] DegenerateSupreme@lemmy.zip 5 points 2 days ago

I'd say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact necessary to power centralized, commercial AI models. Refer to situations like the one in Texas. A person's use of models like ChatGPT, however small, contributes to the demand for this architecture that requires incomprehensible amounts of water, while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what's wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about "innovation" and beating China at another dick-measuring contest.

The other concern is that ChatGPT's ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model's training. As the adage goes, "AI allows wealth to access talent, while preventing talent from accessing wealth." But since a ridiculous amount of data goes into these models, it's an amorphous ethical issue that's understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.

By my measure, this AI bubble will collapse like a dying star in the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally-destructive practices, and eventually we'll see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).

As for what you can do instead, people have been running local Deepseek R1 models since earlier this year, so you could follow a guide to set one up.
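
If you want to try that, here's a minimal sketch using the ollama Python client - assuming Ollama (ollama.com) is installed and you've pulled a distilled R1 model first with something like `ollama pull deepseek-r1:8b` (the tag is indicative; check what's currently available):

```python
# Minimal sketch: query a locally-running DeepSeek-R1 model through Ollama.
# Assumes the Ollama server is running and the model has been pulled.
import ollama

response = ollama.chat(
    model="deepseek-r1:8b",  # indicative tag; substitute whatever you pulled
    messages=[{"role": "user", "content": "Explain overfitting in one paragraph."}],
)
print(response["message"]["content"])
```

Everything runs on your own hardware, so none of your prompts leave your machine.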

[-] burgerpocalyse@lemmy.world 3 points 2 days ago

this post is no man's land

[-] anarchiddy@lemmy.dbzer0.com 12 points 2 days ago

I sure am glad that we learned our lesson from the marketing campaigns in the '90s that pushed consumers to recycle their single-use plastic products to deflect attention away from the harm caused by their ubiquitous use in manufacturing.

Fuck those AI users for screwing over small creators and burning down the planet though. I see no problem with this framing.

[-] khaleer@sopuli.xyz 9 points 2 days ago* (last edited 2 days ago)

I wouldn't want to get close to a bike repaired by someone who is using AI to do it. Like what the fuck xd I am not surprised he is unable to make code work then xddd

[-] kopasz7@sh.itjust.works 82 points 3 days ago

My issues with gen AI are fundamentally twofold:

  1. Who owns and controls it (billionaires and entrenched corporations)

  2. How it is shoehorned into everything (decision making processes, human-to-human communication, my coffee machine)

I cannot wait until the check is finally due and the AI bubble pops, folding these digital snake oil sellers' house of cards.

[-] BradleyUffner@lemmy.world 7 points 2 days ago

The only real exception I can think of would be to train an AI ENTIRELY on your own personally created material. No sources from other people AT ALL. Used purely for personal purposes, not used by or available to the public.

[-] HalfSalesman@lemmy.world 6 points 2 days ago

I use LLMs in a way that reduces the social anxiety that comes with my autism: I give one the details of a strange social interaction that I couldn't parse on my own and ask whether I should worry about it, whether I should make any kind of amends or inquiries, or whether I'm overthinking it and should leave it alone.

I use LLMs to bounce my own ideas off of when I'm not comfortable bouncing them off someone I know IRL.

I use LLMs to role-play. (all kinds)

I use LLMs to find things that I can't find via conventional research methods.

And you know what, my perspective on using it for "productive/generative" purposes is nuanced. I get why artists and writers are upset; however, there is nothing magical about humans and their artistic abilities, and in terms of material economic impact, automation of various kinds has screwed working people in the past, generally with a lot less pushback.

I do think that generated images and writing are pretty bland and nearly worthless right now without a ton of human work anyway. Like, sure, I could generate a video of a cat dancing on a moving bus while a nuclear bomb goes off in the background or whatever wacky shit with a simple prompt, but what exactly am I even going to do with that?

Highly directed AI content that includes a lot of human work tends to actually be pretty amazing IMO.

And even though all the outrage pertains to intellectual work, this technology is likely going to result in a lot of blue-collar work being automated via "embodied" neural-network AIs. In fact, it may be that it was needed for this kind of automation to really take off at all. It's not just white-collar work. We aren't just automating slop content and corporate-purposed art. The day is coming when stuff like laundry, factory/warehouse work, and kitchen work is also all done by robots.

[-] Atlas_@lemmy.world 35 points 3 days ago

Do y'all hate chess engines?

If yes, cool.

If no, I think you hate tech companies more than you hate AI specifically.

[-] princessnorah 27 points 3 days ago* (last edited 3 days ago)

The post is pretty clearly about genAI; I think you're just choosing to ignore that part. There's plenty of really awesome machine learning technology that helps with disabilities, doesn't rip off artists, and isn't environmentally deleterious.

[-] Hadriscus@jlai.lu 1 points 1 day ago

Honestly I have nothing to add

[-] jsomae@lemmy.ml 5 points 2 days ago

I feel this way about people who eat meat.

[-] Evotech@lemmy.world 7 points 2 days ago

Sorry but you can deny and hate all you want, it’s not going anywhere

[-] bramkaandorp@lemmy.world 7 points 2 days ago

Neither is climate change, but we should still combat it where possible.

Funny, that. Fighting against AI could be seen as fighting against climate change, considering the large carbon footprint it has.

[-] TheGuyTM3@lemmy.ml 19 points 3 days ago* (last edited 3 days ago)

I'm just sick of all this because we have given "AI" too much meaning.

I don't like generative AI tools like LLMs, image generators, voice, video, etc., because I see no interest in them; I think they give people bad habits, and they are not well understood by their users.

Yesterday, again, I had to correct my mother because she told me some fun fact she had learned from ChatGPT (that was wrong), and she refused to listen to me because "ChatGPT does plenty of research on the net, so it should know better than you".

As for the claim that "it will replace artists and destroy the art industry", I don't believe it (even if I've made the choice to never use it), because it will forever be a tool. It's practical if you want a cartoony monkey image for your article (you meanie stupid journalist), but you can't say "make me a piece of art" and then put the result in a museum.

Making art myself, I hate gen AI slop from the depths of my heart, but I'm obliged to admit all that. (Let's not forget how it trains on copyrighted media, uses a shitton of energy, and gives no credit.)

AI in other fields - like medicine, automatic subtitles, and engineering - is fine by me. It doesn't create bad habits, it is well understood by its users, and it is truly beneficial, whether by saving lives more efficiently than humans can or simply by helping disabled people.

TL;DR: AI in general is a tool. Gen AI is bad as a powerful tool for everyone's use, just as it would be bad to give everyone a helicopter (even if it improves mobility). AI is nonetheless a very nice tool that can save lives and help disabled people IF used and understood correctly and fairly.

[-] stabby_cicada@slrpnk.net 7 points 2 days ago

AI in other fields - like medicine, automatic subtitles, and engineering - is fine by me. It doesn't create bad habits, it is well understood by its users, and it is truly beneficial, whether by saving lives more efficiently than humans can or simply by helping disabled people.

I think the generative AI tech bros have deliberately contributed to a lot of confusion by calling all machine learning algorithms "AI".

I mean, you have some software which both works and is socially beneficial, like translation and speech recognition software.

You have some software that works, and is incredibly dangerous because it works, like facial recognition and all the horrible ways authoritarian governments can exploit it.

And then you have some software that "works" to produce socially detrimental bullshit, like generative AI.

All three of these categories use machine learning algorithms, trained on data sets to recognize and produce patterns. But they aren't the same in any other meaningful sense. Calling them all "AI" does nothing but confuse the issue.

[-] ruuster13@lemmy.zip 17 points 3 days ago

AI is a marketing term. Big Tech stole ALL data. All of it. The brazen piracy is a sign they feel untouchable. We should touch them.

[-] theunknownmuncher@lemmy.world 38 points 3 days ago* (last edited 3 days ago)

the fact that it is theft

There are LLMs trained using fully open datasets that do not contain proprietary material... (CommonCorpus dataset, OLMo)

the fact that it is environmentally harmful

There are LLMs trained with minimal power (typically the same ones as above, since these projects cannot afford as many resources), and local LLMs use significantly less power than a toaster or microwave...

the fact that it cuts back on critical, active thought

This is a use-case problem. LLMs aren't suitable for critical thinking or decision-making tasks, so if one is cutting back on your "critical, active thought" you're just using it wrong anyway...

The OOP genuinely doesn't know what they're talking about and is just reacting to sensationalized rage bait on the internet lmao
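
If you want to poke at one of those open models yourself, here's a minimal sketch with Hugging Face transformers - the model id is indicative of the OLMo family; check the allenai org on the Hub for current releases:

```python
# Sketch: run a small open-data model (OLMo) locally with transformers.
# Model id is indicative -- see https://huggingface.co/allenai for releases.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-1B-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open training data means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A 1B-parameter model like that runs on an ordinary laptop, which is the point about power draw.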

[-] Limonene@lemmy.world 30 points 3 days ago

Generative AI models and their outputs are derived products of their training data. I mean this ethically, not legally; I'm not a copyright lawyer.

Using the output for personal viewing (advice, science questions, or jacking off to AI porn you requested) is weird but ethical. It's equivalent to pirating a movie to watch at home.

But as soon as you show someone else the output, I consider it theft without attribution. If you generate a meme image, you're failing to attribute the artists whose work was used, without permission, to train the AI. If you generate code, that code infringes the numerous open-source licenses of the training data by failing to attribute it.

Even a simple lemmy text post generated by AI is derived from thousands of unattributed novels.

this post was submitted on 03 Aug 2025
488 points (100.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
