50 comments
[-] aramis87@fedia.io 158 points 1 week ago

The biggest problem with AI is that they're illegally harvesting everything they can possibly get their hands on to feed it, they're forcing it into places where people have explicitly said they don't want it, and they're sucking up massive amounts of energy and water to create it, undoing everyone else's progress in reducing energy use, and raising prices for everyone else at the same time.

Oh, and it also hallucinates.

[-] pennomi@lemmy.world 46 points 1 week ago

Eh I’m fine with the illegal harvesting of data. It forces the courts to revisit the question of what copyright really is and hopefully erodes the stranglehold that copyright has on modern society.

Let the companies fight each other over whether it’s okay to pirate every video on YouTube. I’m waiting.

[-] catloaf@lemm.ee 69 points 1 week ago

So far, the result seems to be "it's okay when they do it"

[-] Electricblush@lemmy.world 31 points 1 week ago* (last edited 1 week ago)

I would agree with you if the same companies challenging copyright (which protects the intellectual and creative work of "normies") were not also aggressively wielding copyright against the very people they are stealing from.

With the amount of corporate power tightly integrated with governmental bodies in the US (and now with DOGE dismantling oversight), I fear that whatever comes out of this is that humans own nothing and corporations own everything. The death of free, independent thought and creativity.

Everything you do, say, and create is instantly marketable and sellable by the major corporations, and you get nothing in return.

The world needs something a lot more drastic than a copyright reform at this point.

[-] naught@sh.itjust.works 16 points 1 week ago

AI scrapers illegally harvesting data are destroying smaller and open source projects. Copyright law is not the only victim

https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/

[-] riskable@programming.dev 24 points 1 week ago* (last edited 1 week ago)

They're not illegally harvesting anything. Copyright law is all about distribution. As much as everyone loves to think that copying something without permission breaks the law, the truth is that it doesn't. It's only when you distribute said copy that you're breaking the law (aka violating copyright).

All those old-school notices (e.g. "FBI Warning") are 100% bullshit. Same for the warning the NFL spits out before games. You absolutely can record it! You just can't share it (or show it to more than a handful of people, but that's a different set of laws, regarding broadcasting).

I download AI (image generation) models all the time. They range in size from 2GB to 12GB. You cannot fit the petabytes of data they used to train the model into that space. No compression algorithm is that good.

The same is true for LLMs, RVC (voice conversion) models, and similar models/checkpoints. I mean, think about it: if AI models were illegally distributing millions of copyrighted works to end users, they'd have to be including it all in those files somehow.
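The scale mismatch in this argument can be checked with quick arithmetic (the corpus and checkpoint sizes below are illustrative assumptions, not measured figures):

```python
# Back-of-the-envelope check: can a model file "contain" its training set?
# All figures are illustrative assumptions for scale, not measurements.

PETABYTE = 1024**5
GIGABYTE = 1024**3

training_data_bytes = 2 * PETABYTE  # assumed size of a scraped training corpus
model_file_bytes = 12 * GIGABYTE    # upper end of the checkpoint sizes above

ratio = training_data_bytes / model_file_bytes
print(f"The corpus is ~{ratio:,.0f}x larger than the checkpoint")

# Even the best general-purpose compressors manage roughly 2-10x,
# nowhere near the ~175,000x this would require.
```

Whatever the exact corpus size, the gap is several orders of magnitude beyond any lossless compression.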

Instead of thinking of an AI model like a collection of copyrighted works think of it more like a rough sketch of a mashup of copyrighted works. Like if you asked a person to make a Godzilla-themed My Little Pony and what you got was that person's interpretation of what Godzilla combined with MLP would look like. Every artist would draw it differently. Every author would describe it differently. Every voice actor would voice it differently.

Those differences are the equivalent of the random seed provided to AI models. If you throw something at a random number generator enough times you could--in theory--get the works of Shakespeare. Especially if you ask it to write something just like Shakespeare. However, that doesn't mean the AI model literally copied his works. It's just making its best guess (it's literally guessing! That's how it works!).

[-] natecox@programming.dev 19 points 1 week ago

The problem with being like… super pedantic about definitions, is that you often miss the forest for the trees.

Illegal or not, it seems pretty obvious to me that people saying "illegal" in this thread and others probably mean "unethical"… which is pretty clearly true.

[-] riskable@programming.dev 11 points 1 week ago* (last edited 1 week ago)

I wasn't being pedantic. It's a very fucking important distinction.

If you want to say "unethical," you say that. Law is a concept orthogonal to ethics, as anyone who's studied the history of racism and sexism would understand.

Furthermore, it's not clear that what Meta did actually was unethical. Ethics is all about how human behavior impacts other humans (or other animals). If a behavior has a direct negative impact, that's considered unethical. If it has no impact or a positive impact, that's ethical behavior.

What impact did OpenAI, Meta, et al have when they downloaded these copyrighted works? They were not read by humans--they were read by machines.

From an ethics standpoint that behavior is moot. It's the ethical equivalent of trying to measure the environmental impact of a bit traveling across a wire. You can go deep down the rabbit hole and calculate the damage caused by mining copper and laying cables, but that's largely a waste of time, because it loses sight of the actual claim: that copying a billion books/images/whatever into a machine somehow negatively impacts humans.

It is not the copying of this information that matters. It's the impact of the technologies they're creating with it!

That's why I think it's very important to point out that copyright violation isn't the problem in these threads. It's a path that leads nowhere.

[-] selokichtli@lemmy.ml 6 points 1 week ago

Just so you know, still pedantic.

[-] natecox@programming.dev 1 points 6 days ago

The irony of choosing the most pedantic way of saying that they’re not pedantic is pretty amusing though.

[-] Gerudo@lemm.ee 8 points 1 week ago

The issue I see is that they are using the copyrighted data, then making money off that data.

[-] wewbull@feddit.uk 13 points 1 week ago

Oh, and it also hallucinates.

Oh, and people believe the hallucinations.

[-] Sl00k@programming.dev 8 points 1 week ago

I see the "AI is using up massive amounts of water" claim proclaimed everywhere lately; however, I do not understand it. Do you have a source?

My understanding is that this probably stems from people misunderstanding data-center cooling systems. Most of these systems are closed-loop, so the water is reused. It makes no sense to "burn off" water for cooling.

[-] lime@feddit.nu 11 points 1 week ago* (last edited 1 week ago)

data centers are mainly air-cooled, and two innovations contribute to the water waste.

the first one was "free cooling", where instead of using a heat exchanger loop you just blow (filtered) outside air directly over the servers and out again, meaning you don't have to "get rid" of waste heat, you just blow it right out.

the second one was increasing the moisture content of the air on the way in with what is basically giant carburettors in the air stream. the wetter the air, the more heat it can take from the servers.

so basically we now have data centers designed like cloud machines.

Edit: Also, apparently the water they use becomes contaminated and they use mainly potable water. here's a paper on it
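The evaporative design described above can be put into rough numbers using the latent heat of vaporization of water; the facility size below is an assumed figure for illustration, not a measured data-center load:

```python
# Rough estimate of evaporative-cooling water use for a data center.
# Assumed scenario: a 20 MW facility rejecting all its heat via evaporation.
LATENT_HEAT_J_PER_KG = 2.26e6  # latent heat of vaporization of water (J/kg)

def water_use_liters_per_hour(it_load_watts: float) -> float:
    """Litres of water evaporated per hour to reject the given heat load."""
    kg_per_second = it_load_watts / LATENT_HEAT_J_PER_KG
    return kg_per_second * 3600  # 1 kg of water is ~1 litre

print(f"{water_use_liters_per_hour(20e6):,.0f} L/h")  # ~31,858 L/h at 20 MW
```

Real facilities mix evaporative and dry cooling, so actual figures vary widely, but this shows why evaporative designs consume rather than recirculate water.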

[-] Sturgist@lemmy.ca 7 points 1 week ago

Oh, and it also hallucinates.

This is arguably a feature, depending on how you use it. I'm absolutely not an AI acolyte; it's highly problematic at every step: resource usage, training on illegally obtained information. That wouldn't necessarily be an issue if people who aren't tech broligarchs weren't routinely getting their lives destroyed for the same thing, and if the people creating the material used for training weren't being fucked over... just capitalism things, I guess. Attempts by capitalists to cut workers out of the cost/profit equation.

If you're using AI to make music, images or video... you're depending on those hallucinations.
I run a Stable Diffusion model on my laptop. It's kinda neat. I don't make things for a profit, and now that I've played with it a bit I'll likely delete it soon. I think there's room for people to locally host their own models, preferably trained with legally acquired data, to be used as a tool to assist with the creative process. The current monetisation model for AI is fuckin criminal....

[-] atrielienz@lemmy.world 4 points 1 week ago

Tell that to the man who was accused by Gen AI of having murdered his children.

[-] Sturgist@lemmy.ca 10 points 1 week ago

Ok? If you read what I said, you'll see that I'm not talking about using ChatGPT as an information source. I strongly believe that using LLMs as a search tool is incredibly stupid, precisely because they are so confident when relaying inaccurate or completely fictional information.
What I was trying to say, and I get that I may not have communicated that very well, was that Generative Machine Learning Algorithms might find a niche as creative process assistant tools. Not as a way to search for publicly available information on your neighbour or boss or partner. Not as a way to search for case law while researching the defence of your client in a lawsuit. And it should never be relied on to give accurate information about what colour the sky is, or the best ways to make a custard using gasoline.

Does that clarify things a bit? Or do you want to carry on using an LLM in a way that has been shown to be unreliable, at best, as some sort of gotcha...when I wasn't talking about that as a viable use case?

[-] index@sh.itjust.works 5 points 1 week ago

We spend energy on the most useless shit, so why are people suddenly using it as an argument against AI? Have you ever seen someone complain about Pixar wasting energy to render their movies? Or about 3D studios rendering TV ads?

[-] kibiz0r@midwest.social 31 points 1 week ago

Idk if it’s the biggest problem, but it’s probably top three.

Other problems could include:

  • Power usage
  • Adding noise to our communication channels
  • AGI fears if you buy that (I don’t personally)
[-] pennomi@lemmy.world 17 points 1 week ago

Dead Internet theory has never been a bigger threat. I believe that’s the number one danger - endless quantities of advertising and spam shoved down our throats from every possible direction.

[-] Fingolfinz@lemmy.world 5 points 1 week ago

We’re pretty close to it; most videos on YouTube and most websites that exist are purely there so some advertiser can pay that person for a review or recommendation.

[-] MyOpinion@lemm.ee 29 points 1 week ago

The problem with AI is that it pirates everyone’s work, then repackages it as its own and enriches people who did not create the copyrighted work.

[-] lobut@lemmy.ca 20 points 1 week ago

I mean, it's our work; the result should belong to the people.

[-] piecat@lemmy.world 7 points 1 week ago

This is where "universal basic income" comes into play

[-] Aux@feddit.uk 4 points 1 week ago

That's what all artists have done since the dawn of ages.

[-] ElPussyKangaroo@lemmy.world 28 points 1 week ago

Truer words have never been said.

[-] Grimy@lemmy.world 18 points 1 week ago

AI has a vibrant open source scene and is definitely not owned by a few people.

A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It's a shame to see so many actually cheering them on.

[-] RadicalEagle@lemmy.world 16 points 1 week ago

I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, but there is no system in place to make sure that that “value” (I’m not convinced commercial AI has done anything valuable) created by AI is redistributed to the workers that it has displaced.

[-] protist@mander.xyz 13 points 1 week ago

Welcome to every technological advancement ever applied to the workforce

[-] WrenFeathers@lemmy.world 14 points 1 week ago

The biggest problem with AI is the damage it’s doing to human culture.

[-] PostiveNoise@kbin.melroy.org 13 points 1 week ago

Either the article editing was horrible, or Eno is wildly uninformed about the world. Creation of AIs is NOT the same as social media. You can't blame a hammer for some evil person using it to hit someone in the head, and there is more to 'hammers' than just assaulting people.

[-] andros_rex@lemmy.world 6 points 1 week ago* (last edited 1 week ago)

Eno does strike me as the kind of person who could use AI effectively as a tool for making music. I don’t think he’s team “just generate music with a single prompt and dump it onto YouTube” (AI has ruined study lo fi channels) - the stuff at the end about distortion is what he’s interested in experimenting with.

There is a possibility for something interesting and cool there (I think about how Chuck Pearson’s eccojams is just like short loops of random songs repeated in different ways, but it’s an absolutely revolutionary album) even if in effect all that’s going to happen is music execs thinking they can replace songwriters and musicians with “hey siri, generate a pop song with a catchy chorus” while talentless hacks inundate YouTube and bandcamp with shit.

[-] TheMightyCat@lemm.ee 12 points 1 week ago

No?

Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.

Training an AI requires very strong hardware; however, this is not an impossible hurdle, as the models on Hugging Face show.
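The memory math behind "anyone can run an AI" is straightforward; here is a rough sketch, where the parameter count and bit width are illustrative assumptions:

```python
def model_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate weight memory (GiB) for an LLM checkpoint.
    Ignores activation and KV-cache overhead."""
    return num_params * bits_per_weight / 8 / 1024**3

# A 7B-parameter model quantized to 4 bits fits in roughly 3.3 GiB of RAM,
# which is why small open models run on very modest hardware.
print(f"{model_memory_gb(7e9, 4):.1f} GiB")
```

Halving the bit width halves the footprint, which is the whole point of the quantized checkpoints distributed on Hugging Face.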

[-] nalinna@lemmy.world 7 points 1 week ago

But the people with the money for the hardware are the ones training it to put more money in their pockets. That's mostly what it's being trained to do: make rich people richer.

[-] riskable@programming.dev 6 points 1 week ago

This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.

I've downloaded several academic models myself, and all commercial models and AI tools are built on that public research.

I run AI models locally on my PC and you can too.

[-] TheMightyCat@lemm.ee 5 points 1 week ago

But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used every day to make rich people richer.

Why attack the technology if it's the rich people you are against, and not the technology itself?

[-] CodeInvasion@sh.itjust.works 6 points 1 week ago

Yah, I'm an AI researcher, and with the weights released for DeepSeek anybody can run an enterprise-level AI assistant. To run the full model natively, it does require $100k in GPUs, but if one had that hardware it could easily be fine-tuned with something like LoRA for almost any application. Then that model can be distilled and quantized to run on gaming GPUs.

It's really not that big of a barrier. Yes, $100k in hardware is, but from a non-profit entity perspective that is peanuts.
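The LoRA approach mentioned above can be sketched in a few lines: freeze the pretrained weight matrix and learn only a low-rank update. The dimensions and scaling below are illustrative, not any particular model's actual sizes:

```python
import numpy as np

# Minimal sketch of LoRA: instead of updating a frozen weight matrix W
# (d x k), learn a low-rank update B @ A of rank r.
rng = np.random.default_rng(0)
d, k, r = 1024, 1024, 8                  # illustrative dimensions

W = rng.standard_normal((d, k))          # frozen pretrained weights
A = rng.standard_normal((r, k)) * 0.01   # trainable, r x k
B = np.zeros((d, r))                     # trainable, zero-initialized

def lora_forward(x, alpha=16):
    # Effective weight is W + (alpha/r) * B @ A, never materialized directly.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

trainable, total = A.size + B.size, W.size
print(f"trainable params: {trainable:,} of {total:,} "
      f"({100 * trainable / total:.3f}%)")
```

Only the two small matrices are trained, which is why fine-tuning with LoRA is so much cheaper than the original pretraining run.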

Also, adding a vision encoder for images to DeepSeek would not be theoretically difficult, for the same reason. In fact, I'm working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it's the same first-layer vision encoder and that the textual chain-of-thought tokens are read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove it, I would be very interested to know!)

[-] Grandwolf319@sh.itjust.works 10 points 1 week ago

The biggest problem with AI is that it’s the brute-force solution to complex problems.

Instead of trying to figure out the most power-efficient algorithm for artificial analysis, they just threw more data and power at it.

Beyond how often it’s wrong, by definition it won’t ever be as accurate or efficient as actual thinking.

It’s the solution you come up with the night before the project is due, because you know it will technically pass and you’ll get a C.

[-] beto@lemmy.studio 8 points 1 week ago

And yet, he released his latest album exclusively on Apple Music.

[-] canajac@lemmy.ca 7 points 1 week ago

AI will become one of the most important technologies humankind has ever invented. Apply it to healthcare, science, and finance, and the world will become a better place, especially in healthcare. Hey, artists and writers: you cannot stop intellectual evolution. AI is here to stay. All we need is a proven way to differentiate real art from AI art: an invisible watermark that can be scanned to reveal its true raison d'être. Sorry for going off topic, but I agree that AI should be more open to verification for using copyrighted material. Don't expect compensation, though.

[-] Ledericas@lemm.ee 1 points 6 days ago

None of it is useful to those fields right now.

[-] jjjalljs@ttrpg.network 1 points 6 days ago

Apply it to healthcare, science, finances, and the world will become a better place, especially in healthcare.

That's all kind of moot if we continue down the capitalist hellscape express. What good is an AI that can diagnose cancer if most people can't afford access? What good is AI writing novels if our homes are destroyed by climate change induced disasters?

Those problems are mostly political, and AI isn't going to fix them. The people that probably could be replaced with AI, the shitty "leaders" and such, are not going to voluntarily step down.

[-] iAvicenna@lemmy.world 6 points 1 week ago

like most of money

[-] HANN@sh.itjust.works 4 points 1 week ago

Ollama and Stable Diffusion are free, open-source software. Nobody is forcing anybody to use ChatGPT.

[-] afk_strats@lemmy.world 1 points 6 days ago

Ollama is FOSS; SD has a proprietary but permissive, source-available license, which is not what most people would associate with "open source."

this post was submitted on 23 Mar 2025
1216 points (100.0% liked)

Technology
