686 points
submitted 1 week ago* (last edited 6 days ago) by Pro@programming.dev to c/Technology@programming.dev

Comments

Source.

[-] Natanael@infosec.pub 5 points 6 days ago

If they had the slightest bit of survival instinct they'd share an archive.org / Google-style scraper and web-cache infrastructure and pull from those caches, so everything would be scraped only once and re-crawled only occasionally.
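The shared-cache idea above can be sketched in a few lines. This is a minimal illustration, not any real scraper's code: the class name, the in-memory store, and the `fetcher` callback are all assumptions, standing in for real HTTP fetching and a shared backing store.

```python
import time

class SharedScrapeCache:
    """Sketch of the shared web cache the comment describes: each URL is
    fetched once and re-fetched only after a TTL expires, so many consumers
    pull from one cache instead of each re-scraping the origin site."""

    def __init__(self, fetcher, ttl_seconds=86400):
        self.fetcher = fetcher      # in practice, a real HTTP GET
        self.ttl = ttl_seconds      # how long a cached copy stays fresh
        self.store = {}             # url -> (fetched_at, content)

    def get(self, url):
        entry = self.store.get(url)
        if entry is not None:
            fetched_at, content = entry
            if time.time() - fetched_at < self.ttl:
                return content      # cache hit: the origin is not touched
        content = self.fetcher(url)  # miss or stale: fetch exactly once
        self.store[url] = (time.time(), content)
        return content

# Usage: a counting stand-in fetcher shows the origin is hit only once
# even though five consumers ask for the same page.
calls = {"n": 0}
def fake_fetch(url):
    calls["n"] += 1
    return f"<html>{url}</html>"

cache = SharedScrapeCache(fake_fetch)
for _ in range(5):
    cache.get("https://example.org/")
print(calls["n"])  # the origin was fetched once, not five times
```

With a persistent, shared store in place of the dict, every scraper pulling through the same cache would hit each site once per TTL window instead of hammering it independently.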

Instead they're building maximally dumb (as in literally counterproductive and self-harming) scrapers that don't know what they're interacting with.

At what point will people start to track down and sabotage AI datacenters IRL?

this post was submitted on 17 Aug 2025
686 points (100.0% liked)

Technology


Share interesting Technology news and links.

Rules:

  1. No paywalled sites at all.
  2. News articles must be recent: no older than 2 weeks (14 days).
  3. No videos.
  4. Post only direct links.

To encourage more original sources and keep this space as commercial-free as possible, the following websites are blacklisted:

More sites will be added to the blacklist as needed.

Encouraged:

founded 3 months ago