submitted 1 year ago by usernotfound@lemmy.ml to c/lemmy@lemmy.ml

(attempt to cross-post from /c/programming)

Idea: Scrape posts from a subreddit as they're being made and "archive" them on a Lemmy instance, making it very clear the content is rehosted and linking back to the original. It would probably have to be a "closed" Lemmy instance dedicated to this purpose. The tool would run for multiple subreddits, letting Lemmy users stay updated on, and discuss, any content that would otherwise get left behind.
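The core of the mirroring step could be a small transform from a scraped Reddit post to the body of the mirrored Lemmy post, with the attribution and backlink stated up front. A minimal sketch (the function name and the formatting are my own illustration, not part of any existing tool):

```python
def format_mirror_body(title: str, author: str, permalink: str, body: str) -> str:
    """Build the body text for a mirrored Lemmy post.

    The rehosted nature is declared in the first line, with a link
    back to the original thread, so nothing is passed off as ours.
    """
    original_url = f"https://old.reddit.com{permalink}"
    header = (
        f"*Archived copy of a post by u/{author}. "
        f"[Original thread]({original_url}).*"
    )
    return f"{header}\n\n---\n\n{body}"

# Example:
post = format_mirror_body(
    title="Show r/programming: my tool",
    author="someuser",
    permalink="/r/programming/comments/abc123/show_rprogramming_my_tool/",
    body="Hello world.",
)
```

The actual submission to the Lemmy instance would go through Lemmy's HTTP API on top of this, which is out of scope for the sketch.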

Thoughts? It's probably iffy copyright-wise, but I think I can square my conscience with it.

[-] Barbarian@sh.itjust.works 7 points 1 year ago

Just be aware that it might not work. Reddit implemented rate limits on page loads to combat the inevitable web scraping as they turn off the API. Test out how fast you can pull pages before putting in any real coding time.
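One way to probe the limit before investing real coding time is a timed fetch loop that backs off when Reddit starts returning HTTP 429s. A minimal sketch of the backoff schedule (the base delay, cap, and retry count are illustrative, not Reddit's documented limits):

```python
def backoff_delays(base: float = 2.0, cap: float = 60.0, retries: int = 6):
    """Yield an exponential backoff schedule for retrying after 429s.

    The wait doubles on each attempt and is capped so a long
    throttling window doesn't grow the delay without bound.
    """
    delay = base
    for _ in range(retries):
        yield delay
        delay = min(delay * 2, cap)

# Sketch of the probe loop (network calls elided; `requests` and a
# page URL are assumed):
#
# for delay in backoff_delays():
#     resp = requests.get(url, headers={"User-Agent": "archive-probe/0.1"})
#     if resp.status_code != 429:
#         break          # page loaded; record the timestamp
#     time.sleep(delay)  # throttled; wait and retry

print(list(backoff_delays()))  # [2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```

Logging how many pages succeed per minute under this loop gives a realistic ceiling for the archiver's throughput.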

[-] borari@sh.itjust.works 5 points 1 year ago

> Reddit implemented rate limits on page loads to combat the inevitable web scraping

This whole time I was wondering how the API changes made any sense when anyone disgruntled about it could just turn to scraping, putting drastically more load on Reddit's infrastructure. It makes me feel a bit better that they aren't that clueless.

this post was submitted on 08 Jun 2023
37 points (100.0% liked)
