36 points | submitted 20 Aug 2023 by GnuLinuxDude@lemmy.ml to c/meta@lemmy.ml

Some context about this here: https://arstechnica.com/information-technology/2023/08/openai-details-how-to-keep-chatgpt-from-gobbling-up-website-data/

The robots.txt would be updated with this entry:

User-agent: GPTBot
Disallow: /

Obviously this is meaningless against non-OpenAI scrapers, or anyone who just doesn't give a shit.
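
For what it's worth, a compliant crawler is expected to check the file itself before fetching anything. A minimal sketch using Python's standard-library robot parser, assuming lemmy.ml serves the entry above (the post URL is illustrative):

from urllib.robotparser import RobotFileParser

# Fetch and parse the instance's robots.txt.
rp = RobotFileParser("https://lemmy.ml/robots.txt")
rp.read()

# With "User-agent: GPTBot / Disallow: /" in place, this is False for any path.
print(rp.can_fetch("GPTBot", "https://lemmy.ml/post/12345"))  # illustrative URL

Nothing beyond the crawler's own good behavior makes that check happen, which is the caveat above.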

[-] Hubi@feddit.de 3 points 1 year ago

Wouldn't they theoretically be able to set up their own instance, federate with all the larger ones, and scrape the data that way? Not sure if blocking them via the robots.txt file is an effective barrier if they really want the data.
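
Federation makes this plausible in principle: Lemmy speaks ActivityPub, so post data is served as JSON to any peer that asks for it, and robots.txt never enters into it. A minimal sketch, assuming Lemmy's usual outbox layout (the endpoint and community here are illustrative):

import json
import urllib.request

# Request a community's ActivityPub outbox as JSON rather than HTML.
req = urllib.request.Request(
    "https://lemmy.ml/c/meta/outbox",  # assumed Lemmy outbox path
    headers={"Accept": "application/activity+json"},
)
with urllib.request.urlopen(req) as resp:
    outbox = json.load(resp)

# Each item wraps a post; a federated instance gets this content pushed to it anyway.
print(outbox.get("totalItems"))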

[-] dreadedsemi@lemmy.world 12 points 1 year ago* (last edited 1 year ago)

Robots.txt is more of an honor system. If they respect it, they won't pull that trick.

[-] NightAuthor@beehaw.org 5 points 1 year ago

Robots.txt is just a notice anyway. Your scraper could just ignore it; no workaround necessary.
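
Right; there's no enforcement in the protocol itself. The "ignore it" path is just a plain HTTP GET that never consults robots.txt (URL again illustrative):

import urllib.request

# Fetch a page directly, never having looked at robots.txt.
req = urllib.request.Request(
    "https://lemmy.ml/post/12345",  # illustrative URL
    headers={"User-Agent": "my-scraper/0.1"},
)
html = urllib.request.urlopen(req).read()
print(len(html))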
