686
submitted 6 days ago* (last edited 5 days ago) by Pro@programming.dev to c/Technology@programming.dev

Comments

Source.

[-] Probius@sopuli.xyz 216 points 6 days ago

This type of large-scale crawling should be considered a DDoS and the people behind it should be charged with cyber crimes and sent to prison.

[-] eah@programming.dev 19 points 5 days ago

Applying the Computer Fraud and Abuse Act to corporations? Sign me up! Hey, they're also people, aren't they?

[-] caseyweederman@lemmy.ca 9 points 5 days ago

Put the entire datacenter buildings into prison

[-] isolatedscotch@discuss.tchncs.de 28 points 6 days ago

Good luck with that! Not only is it a company doing it, which means no individual will go to prison, but it's a Chinese company with no regard for any laws that might get passed.

[-] humanspiral@lemmy.ca 15 points 6 days ago

The people determining US legislation have said, "how can we achieve Skynet if our tech-trillionaire company sponsors can't evade copyright or content licensing?" But they also say, "if we don't spend every penny you have on achieving US-controlled Skynet, then China wins."

Speculating that a "Huawei network can solve this" doesn't mean all the bots are Chinese, but it does suggest that China has a lot of AI research going on, that Huawei GPUs/NPUs are getting used, and that they are successfully solving this particular "I am not a robot" challenge.

It's really hard to call an "amateur coding challenge" competition website a national security threat, but if you hype Huawei enough, then surely the US will give up on AI like it gave up on solar, and maybe EVs. "If we don't adopt Luddite politics and all become Amish, then China wins" is a "promising" new loser perspective on media manipulation.

[-] folken@lemmy.world 38 points 5 days ago* (last edited 5 days ago)

When you realize that you live in a cyberpunk novel. The AI is cracking the ICE. https://cyberpunk.fandom.com/wiki/Black_ICE

[-] Regrettable_incident@lemmy.world 15 points 5 days ago

I love seeing how much influence William Gibson had on cyberpunk.

[-] ThePyroPython@lemmy.world 16 points 5 days ago

It's not intentional, but the chap ended up writing works that defined both the Cyberpunk (Neuromancer) and Steampunk (The Difference Engine) genres.

Can't deny that influence.

[-] Gullible@sh.itjust.works 103 points 6 days ago

I really feel like scrapers should have been outlawed or actioned at some point.

[-] floofloof@lemmy.ca 81 points 6 days ago

But they bring profits to tech billionaires. No action will be taken.

[-] BodilessGaze@sh.itjust.works 13 points 6 days ago

No, the reason no action will be taken is because Huawei is a Chinese company. I work for a major US company that's dealing with the same problem, and the problematic scrapers are usually from China. US companies like OpenAI rarely cause serious problems because they know we can sue them if they do. There's nothing we can do legally about Chinese scrapers.

[-] programmer_belch@lemmy.dbzer0.com 39 points 6 days ago

I use a tool that downloads a website to check for new chapters of series every day, then creates an RSS feed with the contents. Would this be considered a harmful scraper?

The problem with AI scrapers and bots is their scale: thousands of requests to webpages that the site's server cannot handle, resulting in slow traffic.
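For contrast, here's a minimal sketch of what such a low-impact daily checker could look like, assuming a cron-run Python script using requests with conditional GETs; the URL, cache file, and contact address are placeholders:

```python
import json
import pathlib

import requests

CACHE = pathlib.Path("etag_cache.json")      # placeholder cache file
URL = "https://example.com/series/chapters"  # placeholder page to watch

def fetch_if_changed():
    """Fetch the page at most once per run, using a conditional GET
    so an unchanged page costs the server almost nothing."""
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    headers = {"User-Agent": "chapter-checker/1.0 (contact: me@example.com)"}
    if "etag" in cache:
        headers["If-None-Match"] = cache["etag"]

    resp = requests.get(URL, headers=headers, timeout=30)
    if resp.status_code == 304:      # not modified: nothing to do
        return None
    resp.raise_for_status()

    if "ETag" in resp.headers:
        CACHE.write_text(json.dumps({"etag": resp.headers["ETag"]}))
    return resp.text                 # hand off to the RSS feed generator

if __name__ == "__main__":
    html = fetch_if_changed()
    if html:
        print("page changed, rebuild the feed")
```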

[-] S7rauss@discuss.tchncs.de 31 points 6 days ago

Does your tool respect the site’s robots.txt?

[-] who@feddit.org 18 points 6 days ago* (last edited 6 days ago)

Unfortunately, robots.txt cannot express rate limits, so it would be an overly blunt instrument for things like GP describes. HTTP 429 would be a better fit.
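For anyone curious, 429 responses are usually paired with a Retry-After header telling clients when to come back. A minimal per-IP limiter sketch in Flask (Flask, the window, and the limit are illustrative assumptions, not anything Codeberg actually runs):

```python
import time
from collections import defaultdict

from flask import Flask, request, Response

app = Flask(__name__)

WINDOW = 60          # seconds
LIMIT = 30           # requests per window per IP (arbitrary example values)
hits = defaultdict(list)

@app.before_request
def rate_limit():
    now = time.time()
    bucket = hits[request.remote_addr]
    # drop entries older than the window
    bucket[:] = [t for t in bucket if now - t < WINDOW]
    if len(bucket) >= LIMIT:
        retry_after = int(WINDOW - (now - bucket[0])) + 1
        return Response("slow down\n", status=429,
                        headers={"Retry-After": str(retry_after)})
    bucket.append(now)

@app.route("/")
def index():
    return "ok\n"
```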

[-] Redjard@lemmy.dbzer0.com 9 points 6 days ago

Crawl-delay is just that, a simple directive to add to robots.txt to set the maximum crawl frequency. It used to be widely followed by all but the worst crawlers ...
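For what it's worth, Python's standard urllib.robotparser already reads Crawl-delay, so honouring it costs a crawler almost nothing. A minimal sketch, with the site, paths, and user agent as placeholders:

```python
import time
import urllib.robotparser
import urllib.request

BASE = "https://example.org"          # placeholder site
AGENT = "mybot"                       # placeholder user agent string

rp = urllib.robotparser.RobotFileParser(BASE + "/robots.txt")
rp.read()

delay = rp.crawl_delay(AGENT) or 10   # fall back to a conservative default

for path in ["/page1", "/page2", "/page3"]:
    if not rp.can_fetch(AGENT, BASE + path):
        continue                      # robots.txt disallows this path
    with urllib.request.urlopen(BASE + path) as resp:
        resp.read()
    time.sleep(delay)                 # honour Crawl-delay between requests
```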

[-] gressen@lemmy.zip 80 points 6 days ago

Write TOS that state that crawlers automatically accept a service fee and then send invoices to every crawler owner.

[-] BodilessGaze@sh.itjust.works 41 points 6 days ago

Huawei is Chinese. There's literally zero chance a European company like Codeberg is going to successfully collect from a company in China over a TOS violation.

[-] sp3ctr4l@lemmy.dbzer0.com 25 points 5 days ago

Do we all want the fucking Blackwall from Cyberpunk 2077?

Fucking NetWatch?

Because this is how we end up with them.

....excuse me, I need to go buy a digital pack of cigarettes for the angry voice in my head.

[-] 0_o7@lemmy.dbzer0.com 35 points 6 days ago

I blocked almost all the big hosting players, plus China, Russia and Vietnam, and now they're bombarding my site with residential IP addresses from all over the world. They must be using compromised smart home devices or phones with malware.

Soon everything on the internet will be behind a wall.

[-] irelephant@programming.dev 12 points 6 days ago

This isn't sustainable for the AI companies; when the bubble pops, it will stop.

[-] aev_software@programming.dev 20 points 5 days ago

In the meantime, sites are getting DDoSed by scrapers. One way to stop your site from getting scraped is for it to be inaccessible... which is exactly what the scrapers are causing.

Normally I would assume DDoSing is done to take a site offline. But AI scrapers require the opposite: they need their targets online and willing. One would think they'd be a bit more careful about the damage they cause.

But they aren't, because capitalism.

[-] Blackmist@feddit.uk 25 points 5 days ago

Business idea: AWS, but hosted entirely within the computing power of AI web crawlers.

[-] Kissaki@feddit.org 9 points 5 days ago

Reminds me of the "store data inside slow network requests for the in-transit duration". It was a fun article to read.

[-] filcuk@lemmy.zip 1 points 3 days ago

I'm just reading the Ender's Game series; it's very on-point.

Spoiler: We'll wake up one day only to realise an entity living 'in the wires' is the only thing keeping the internet alive.

[-] Blackmist@feddit.uk 1 points 3 days ago

As long as NetWatch keeps them behind the Blackwall, we're all good.

[-] cecilkorik@lemmy.ca 64 points 6 days ago

Begun, the information wars have.

[-] steal_your_face@lemmy.ml 10 points 6 days ago

The wars were fought and lost a while ago tbh

[-] tal@lemmy.today 20 points 5 days ago

If someone just wants to download code from Codeberg for training, it seems like it'd be way more efficient to just clone the git repositories or even just download tarballs of the most-recent releases for software hosted on Codeberg than to even touch the Web UI at all.

I mean, maybe you need the Web UI to get a list of git repos, but I'd think that that'd be about it.
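A minimal sketch of that approach, assuming Codeberg's Forgejo API exposes the usual Gitea-style /repos/search endpoint; the endpoint, paging values, and the shallow-clone choice are assumptions, not what the scrapers actually do:

```python
import subprocess

import requests

API = "https://codeberg.org/api/v1/repos/search"   # assumed Forgejo/Gitea search endpoint

def clone_page(page=1, limit=10):
    """Fetch one page of public repos and shallow-clone them."""
    resp = requests.get(API, params={"page": page, "limit": limit}, timeout=30)
    resp.raise_for_status()
    for repo in resp.json().get("data", []):
        clone_url = repo["clone_url"]
        # a shallow clone transfers one packfile instead of thousands of HTML pages
        subprocess.run(["git", "clone", "--depth", "1", clone_url], check=False)

if __name__ == "__main__":
    clone_page()
```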

[-] witten@lemmy.world 26 points 5 days ago

Then they'd have to bother understanding the content and downloading it as appropriate. And you'd think if anyone could understand and parse websites in real time to make download decisions, it'd be giant AI companies. But ironically they're only interested in hoovering up everything as plain web pages to feed into their raw training data.

[-] Natanael@infosec.pub 16 points 5 days ago

The same morons scrape Wikipedia instead of downloading the archive files, which can trivially be rendered as web pages locally.
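For reference, those archives are single HTTP downloads from dumps.wikimedia.org; a minimal sketch, assuming the current "latest" filename pattern still applies:

```python
import shutil
import urllib.request

# Full article text of English Wikipedia as one compressed XML dump.
DUMP = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"

with urllib.request.urlopen(DUMP) as resp, open("enwiki.xml.bz2", "wb") as out:
    shutil.copyfileobj(resp, out)   # one large download instead of millions of page hits
```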

[-] cadekat@pawb.social 39 points 6 days ago

Huh, why does Anubis use SHA256? It's been optimized to all hell and back.

Ah, they're looking into it: https://github.com/TecharoHQ/anubis/issues/94
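For context, an Anubis-style challenge boils down to finding a nonce whose SHA-256 hash clears a difficulty target, which is exactly the workload commodity hardware has been optimized for. A rough sketch of the generic scheme (the challenge string and difficulty are made up, not Anubis's actual protocol):

```python
import hashlib
import itertools

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that sha256(challenge + nonce) starts with
    `difficulty_bits` zero bits -- the generic proof-of-work scheme."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

print(solve(b"example-challenge", 20))   # ~2**20 hashes on average; trivial at GPU/ASIC hash rates
```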

[-] chicken@lemmy.dbzer0.com 32 points 6 days ago

Seems like such a massive waste of bandwidth since it's the same work being repeated by many different actors to piece together the same dataset bit by bit.

[-] MonkderVierte@lemmy.zip 18 points 5 days ago* (last edited 5 days ago)

I just thought that a client-side proof-of-work (or even just a delay) bound to the IP might push AI companies to behave instead (because single-visit-per-IP crawlers get too expensive/slow, and normal abusive crawlers you can just block). But they already have mind-blowing computing and money resources and only want your data.

But what if there were a simple-to-use, integrated solution and every single webpage used this approach?

[-] witten@lemmy.world 12 points 5 days ago

Believe me, these AI corporations have way too many IPs to make this feasible. I've tried per-IP rate limiting. It doesn't work on these crawlers.

[-] rozodru@lemmy.world 15 points 5 days ago

I run my own Gitea instance on my own server, and within the past week or so I've noticed it getting absolutely nailed. One repo in particular, a Wayland WM I built, just keeps getting hammered over and over by IPs in China.

[-] ZILtoid1991@lemmy.world 11 points 5 days ago

> Just keeps getting hammered over and over by IPs in China.

Simple solution: Block Chinese IPs!

[-] witten@lemmy.world 6 points 5 days ago

Are you using Anubis?

[-] metacolon 12 points 5 days ago

Are those blocklists publicly available somewhere?

[-] Taldan@lemmy.world 11 points 5 days ago

I would hope not. Kinda pointless if they become public

[-] daniskarma@lemmy.dbzer0.com 28 points 5 days ago

On the contrary. Open, community-based blocklists can be very effective. Everyone can contribute to them and asphyxiate people with malicious intent.

If you're thinking something like "if the blocklist is public, then malicious agents simply won't use those IPs", I don't think that makes a lot of sense, as the malicious agents will know their IPs are blocked as soon as they try to use them anyway.

[-] pedz@lemmy.ca 9 points 5 days ago

Just to give an example of public lists that are working: I have an IRC server and it's getting bombarded with spam bots. It's horrible around the Super Bowl for some reason, but it just continues year-round.

So I added a few public anti-spam lists like DroneBL to the config, and the vast majority of the bots are automatically G-lined/banned.
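For the curious, DNS blocklists like DroneBL are queried by reversing an IP's octets under the list's zone; a minimal sketch (dnsbl.dronebl.org is DroneBL's public zone, and the address below is the conventional DNSBL test entry):

```python
import socket

def is_listed(ip: str, zone: str = "dnsbl.dronebl.org") -> bool:
    """Return True if `ip` appears in the DNS blocklist `zone`."""
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any A record means the IP is listed
        return True
    except socket.gaierror:           # NXDOMAIN: not listed
        return False

print(is_listed("127.0.0.2"))  # most DNSBLs list this address as a self-test
```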

[-] ryanvade@lemmy.world 23 points 6 days ago

It's being investigated at least; hopefully a solution can be found. This will probably end up as a constantly escalating battle with the AI companies. https://github.com/TecharoHQ/anubis/issues/978

[-] LiveLM@lemmy.zip 25 points 6 days ago

Uuughhh, I knew it'd always be a cat-and-mouse game. I sincerely hope the Anubis devs figure out how to fuck up the AI crawlers again.

this post was submitted on 17 Aug 2025
686 points (100.0% liked)
