334

Website operators are being asked to feed LLM crawlers poisoned data by a project called Poison Fountain.

The project page links to URLs which provide a practically endless stream of poisoned training data. They have determined that this approach is very effective at ultimately sabotaging the quality and accuracy of AI which has been trained on it.

Small quantities of poisoned training data can significantly damage a language model.

The page also gives suggestions on how to put the provided resources to use.

you are viewing a single comment's thread
view the rest of the comments
[-] Disillusionist@piefed.world 36 points 2 months ago

AI companies could start, I don't know- maybe asking for permission to scrape a website's data for training? Or maybe try behaving more ethically in general? Perhaps then they might not risk people poisoning the data that they clearly didn't agree to being used for training?

[-] Lembot_0006@programming.dev 7 points 2 months ago

Why should they ask permission to read freely provided data? Nobody's asking for any permission, but LLM trainers somehow should? And what do you want from them from an ethical standpoint?

[-] GunnarGrop@lemmy.ml 17 points 2 months ago

Much of it might be freely available data, but there's a huge difference between you accessing a website for data and an LLM doing the same thing. We've had bots scraping websites since the 90's, it's not a new thing. And since scraping bots have existed we've developed a standard on the web to deal with it, called "robots.txt". A text file telling bots what they are allowed to do on websites and how they should behave.

LLM's are notorious for disrespecting this, leading to situations where small companies and organisations will have their websites scraped so thoroughly and frequently that they can't even stay online anymore, as well as skyrocketing their operational costs. In the last few years we've had to develop ways just to protect ourselves against this. See the "Anubis" project.

Hence, it's much more important that LLM's follow the rules than you and me doing so on an individual level.

It's the difference between you killing a couple of bees in your home versus an industry specialising in exterminating bees at scale. The efficiency is a big factor.

[-] Disillusionist@piefed.world 12 points 2 months ago

Is the only imaginable system for AI to exist one in which every website operator, or musician, artist, writer, etc has no say in how their data is used? Is it possible to have a more consensual arrangement?

As far as the question about ethics, there is a lot of ground to cover on that. A lot of it is being discussed. I'll basically reiterate what I said that pertains to data rights. I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source, and claiming the whole of human experience for its own training purposes. I find that unethical.

[-] Lembot_0006@programming.dev 2 points 2 months ago
[-] Disillusionist@piefed.world 8 points 2 months ago* (last edited 2 months ago)
[-] Lembot_0006@programming.dev 2 points 2 months ago

The guy is talking about consulting as I understand. Yes, LLM is great for reading the documentation. That's the purpose of LLM. Now people can use those libraries without spending ages reading through docs. That's progress. I see it as a way to write more open source because it became simpler and less tedious.

[-] Disillusionist@piefed.world 9 points 2 months ago

He's jumping ship because it's destroying his ability to eke out a living. The problem isn't a small one, what's happening to him isn't a limited case.

[-] Lembot_0006@programming.dev 2 points 2 months ago

So? Is he more important than those specialists who now can write code without hiring a consultant?

[-] ExLisper@lemmy.curiana.net 6 points 2 months ago

Yes, they should because they generate way more traffic. Why do you think people are trying to protect websites from AI crawlers? Because they want to keep public data secret?

Also, everyone knows AI companies used copyrighted materials and private data without permission. If you think they only used public data you're uninformed or lying on their behalf.

[-] Lembot_0006@programming.dev 4 points 2 months ago

I personally consider the current copyright laws completely messed up, so I see no problem in using any data technically available for processing.

[-] ExLisper@lemmy.curiana.net 8 points 2 months ago

Ok, so you think it's ok for big companies to break the laws you don't like, cool. I'm sure those big companies will not sue you when you infringe on some of their laws you don't like.

And I like the way you just ignored the two other issues I mentioned. Are you fine with AI bots slowing sites like Codeberg to a crawl? Are you fine with AI companies using personal data without consent?

[-] Lembot_0006@programming.dev 3 points 2 months ago

I'm fine with companies using any freely available data.

[-] ExLisper@lemmy.curiana.net 6 points 2 months ago

I'm also fine with them using data they can get for free like, I don't know, weather data they collect themselves?

Data hosted by private individuals and open source projects is not free. Someone has to pay for hosting and AI companies sucking data with army of bots is elevating the cost of hosting beyond the means of those people/projects. They are shifting the costs of providing the "free" data on the community while keeping all the profits.

Private data used without consent is also not free. It's valuable, protected data and AI companies are simply stealing it. Do you consider stolen things free?

I see your attitude is "they don't hurt me personally and I don't care what they do to other people". It's either ignorant or straight antisocial. Also a bit bootlickish.

[-] Lembot_0006@programming.dev 2 points 2 months ago

Data is available therefore it is... well, available. You don't want to pay to host it? Don't then. LLM companies don't hack your servers. They read only the data that you have provided volunterely.

[-] ExLisper@lemmy.curiana.net 3 points 2 months ago

Still ignorant, antisocial and a little big bootlickish.

[-] BaroqueInMind@piefed.social 5 points 2 months ago

As someone who self-hosts a LLM and trains it on web data regularly to improve my model, I get where your frustration is coming from.

But engaging in discourse here where people already have a heavy bias against machine-learning language models is a fruitless effort. No one here is going to provide you catharsis with a genuine conversation that isnt rhetoric.

Just put the keyboard down and walk away.

[-] Rekall_Incorporated@piefed.social 7 points 2 months ago

I don't have a bias against LLMs, I use them regularly albeit either for casual things (movie recommendation) or an automation tool in work areas where I can somewhat easily validate the output or the specific task is low impact.

I am just curious, do you respect robots.txt?

[-] FaceDeer@fedia.io 4 points 2 months ago

I think it's worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.

Also, engaging in Internet debate is never to convince the person you're actually talking to. That almost never happens. The point of debate is to present convincing arguments for the less-committed casual readers who are lurking rather than participating directly.

[-] Disillusionist@piefed.world 2 points 2 months ago

I agree with you that there can be value in "showing people that views outside of their likeminded bubble[s] exist". And you can't change everyone's mind, but I think it's a bit cynical to assume you can't change anyone's mind.

[-] Disillusionist@piefed.world 2 points 2 months ago

I can't speak for everyone, but I'm absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don't know everything. It's one of the reasons I post, for discussion. It's really unproductive to make blanket statements that try to end discussion before it starts.

[-] DSTGU@sopuli.xyz 3 points 2 months ago* (last edited 2 months ago)

For the same reason copyright and licences exist. You may be able to interact with something - because that's what the license allows you - but still not be able to use it. Companies have faced million dollar fines for using code not subscribed to a license which allows them to do that. You may face trial if you distribute content (e.g. movies or music) you are only allowed to watch. The key here is that unless you are explicitly permitted to use something further it is considered illegal and punishable. Why would it be any different for AI training?

this post was submitted on 13 Jan 2026
334 points (100.0% liked)

Technology

83566 readers
2584 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS