Imagine how much power is wasted on this unfortunate necessity.
Now imagine how much power will be wasted circumventing it.
Fucking clown world we live in
I have no idea why the makers of LLM crawlers think it's a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than "well, we just don't want you to do that". They're usually more like "why would you even do that?"
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not a bazillion random old page revisions from ages ago is that Wikipedia said "please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)". Again: why would anyone index those?
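For what it's worth, honoring those rules takes a few lines of stdlib Python, so there's really no excuse. A rough sketch (the specific Wikipedia paths and expected answers are my assumption about what its robots.txt currently says):

```python
# A well-behaved crawler checks robots.txt before fetching anything.
# Stdlib-only sketch using urllib.robotparser.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://en.wikipedia.org/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# Canonical article URL: should be allowed.
print(rp.can_fetch("MyCrawler", "https://en.wikipedia.org/wiki/Maze"))
# Old-revision/history endpoint: should be disallowed.
print(rp.can_fetch("MyCrawler",
                   "https://en.wikipedia.org/w/index.php?title=Maze&action=history"))
```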
They want everything. Does it exist, but it's not in their dataset? Then they want it.
They want their AI to answer any question you could possibly ask it. Filtering out what is and isn't useful doesn't achieve that.
That's just BattleBots with a different name.
You're not wrong.
Ok, I now need a screensaver that I can tie to a cloudflare instance that visualizes the generated "maze" and a bot's attempts to get out.
Surprised at the level of negativity here. Having had my sites repeatedly DDoSed offline by ClaudeBot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.
This is some fucking stupid situation: we finally got somewhat faster internet, and now these bots messing with each other are hogging the bandwidth.
Nothing can be improved while capitalism or authority exist; all improvement will be seized and used to oppress.
Especially since the solution I cooked up for my site works just fine and took a lot less work. It's simply to identify the incoming requests from these damn bots -- which is not difficult, since they ignore all directives and sanity and slam your site with 200+ requests per second, which makes 'em easy to spot -- and simply IP ban them (rough sketch below). This is considerably simpler, and doesn't require an entire nuclear-plant-powered AI to combat the opposition's nuclear-plant-powered AI.
In fact, anybody who doesn't exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP ranges for some reason. These days Amazon is the worst offender; I guess their Rufus AI or whatever the fuck it is pesters other retail sites to "learn" about products rather than sticking to its own domain.
Fuck 'em. Route those motherfuckers right to /dev/null.
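For anyone who wants to roll their own, a minimal sketch of the idea in Python (the threshold and window are assumptions matching the 200 req/s figure above; in production you'd hand banned IPs to iptables/nftables rather than keep them in memory):

```python
import time
from collections import defaultdict, deque

# Assumed numbers, matching the figures above: no human browses at 200 req/s.
BAN_THRESHOLD = 200    # max requests allowed per window
WINDOW_SECONDS = 1.0   # sliding window size

recent = defaultdict(deque)  # ip -> timestamps of recent requests
banned = set()

def should_block(ip: str) -> bool:
    """Return True if this request should go straight to /dev/null."""
    if ip in banned:
        return True
    now = time.monotonic()
    hits = recent[ip]
    hits.append(now)
    # Evict timestamps that have fallen out of the sliding window.
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()
    if len(hits) > BAN_THRESHOLD:
        banned.add(ip)   # here you'd shell out to your firewall instead
        del recent[ip]
        return True
    return False
```

Wire should_block() into whatever middleware fronts your server and drop anything that trips it.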
This is getting ridiculous. Can someone please ban AI? Or at least regulate it somehow?
The problem is, how? I can set it up on my own computer using open source models and some of my own code. It’s really rough to regulate that.
I guess this is what the first iteration of the Blackwall looks like.
Gotta say "AI Labyrinth" sounds almost as cool.
So the web is a corporate war zone now and you can choose feudal protection or being attacked from all sides. What a time to be alive.
There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you'll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren't a bot... and so will everyone else. You'll likely be forced to deal with whatever AI bots are forced upon you while within the walls, but better the enemy you know, I guess?
I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.
Not exactly how I expected the AI wars to go, but I guess since we're in a cyberpunk world, we take what we get
Next step is an AI that detects AI labyrinth.
It gets trained on labyrinths generated by another AI.
So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn't get lost.
It's gonna be AI all the way down.
"I used the AI to destroy the AI"
And consumed the power output of a medium-sized country to do it.
Yeah, great job! 👍
We truly are getting dumber as a species. We're facing climate change, but running some of the most power-hungry processors in the world to spit out cooking recipes and homework answers for millions of people. All to better collect their data to sell them products that will distract them from the climate disaster our corporations have caused. It would be really fun to watch if it weren't so sad.
So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great! 👍
The energy cost of inference is overstated. Small models, or “sparse” models like DeepSeek, are not expensive to run. Training is a one-time cost that still pales in comparison to, like, making aluminum.
Doubly so once inference goes more on-device.
Basically, only Altman and his tech-bro acolytes want AI to be cost-prohibitive so he can have a monopoly. Also, he’s full of shit, and everyone in the industry knows it.
AI as it’s implemented has plenty of enshittification, but the energy cost is kind of a red herring.
And soon, the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get some real work done. Sigh.
Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey, who the fuck cares as long as the line keeps going up for these leeches.
Considering how many false positives Cloudflare serves, I see nothing but misery coming from this.
In terms of Lemmy instances: if your instance is behind Cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting AI scraping.
You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.
So we're burning fossil fuels and destroying the planet so bots can try to deceive one another on the Internet in pursuit of our personal data. I feel like dystopian cyberpunk predictions didn't fully understand how fucking stupid we are...
Will this further fuck up the inaccurate nature of AI results? While I'm rooting against shitty AI usage, the general population is still trusting it and making results worse will, most likely, make people believe even more wrong stuff.
The article says it's not poisoning the AI data, only providing valid facts. The scraper still gets content, just not the content it was aiming for.
Edit: From the article: "It is important to us that we don’t generate inaccurate content that contributes to the spread of misinformation on the Internet, so the content we generate is real and related to scientific facts, just not relevant or proprietary to the site being crawled."
So they rewrote Nepenthes (or Iocaine, Spigot, Django-llm-poison, Quixotic, Konterfai, Caddy-defender, plus inevitably some Rust versions)
Edit: but with ✨AI✨ and apparently only true facts (toy sketch of the general trick below).
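For anyone curious, the core trick behind all of these tarpits is tiny. What follows is a toy guess at the general shape (stdlib only; the word list, port, and page text are made up, and the real tools generate far more plausible content and rate-limit the responses):

```python
# Toy link-maze tarpit: every page is generated on the fly and links only
# to more generated pages, so a crawler that ignores robots.txt can wander
# forever. Real tools (Nepenthes, Iocaine, ...) are much fancier.
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["maze", "minotaur", "thread", "ariadne", "daedalus", "labyrinth"]

class MazeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rng = random.Random(self.path)  # same path -> same page, so it looks static
        links = "".join(
            f'<p><a href="/{rng.choice(WORDS)}/{rng.randrange(10**6)}">'
            f"{rng.choice(WORDS)} {rng.randrange(10**6)}</a></p>"
            for _ in range(10)
        )
        body = f"<html><body><h1>{self.path}</h1>{links}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), MazeHandler).serve_forever()
```

Point a path that robots.txt disallows at something like this, and only the rule-breakers ever find the maze.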