516 points · submitted 08 Oct 2023 by L4s@lemmy.world to c/technology@lemmy.world

BBC will block ChatGPT AI from scraping its content::ChatGPT will be blocked by the BBC from scraping content in a move to protect copyrighted material.
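The mechanism behind a block like this is typically a robots.txt rule. A minimal sketch, assuming the publisher targets OpenAI's documented GPTBot crawler; whether the BBC's actual file looks exactly like this is an assumption:

```
# Minimal robots.txt sketch: deny OpenAI's crawler site-wide.
# GPTBot is the user-agent OpenAI documents for its web crawler.
User-agent: GPTBot
Disallow: /
```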

vidarh@lemmy.stad.social 3 points 1 year ago* (last edited 1 year ago)

It won’t really matter, because there will continue to be other sources.

Taken to an extreme: there are indications OpenAI's market cap is already higher than Thomson Reuters' ($80bn-$90bn vs. <$60bn), and it will go far higher. Getty, also mentioned in this context, has a market cap of "only" $2.4bn. In other words: if enough important sources of content start blocking OpenAI, it will start buying access, up to and including, if necessary, buying the original content creators outright.

As it is, while the BBC clearly isn't, some of these other content providers are just playing hard to get, hoping for a big enough cash offer, either for a license or for a buyout.

The cat is out of the bag, whatever people think about it, and sources that block themselves off from AI entirely (to the point of being unwilling to sell licenses or sell themselves) will just lose influence accordingly.

This also presumes OpenAI remains the only contender, which is clearly not the case in the long run. Alternative models are on the rise, and while most are still not good enough, they are good enough to make it equally clear it's just a matter of time before anyone can fine-tune their own models on their own scraped data (for now, "anyone" means anyone sufficiently rich, but that cost threshold is dropping rapidly).
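For concreteness, a minimal sketch of the kind of fine-tuning pipeline that implies, assuming the Hugging Face transformers and datasets libraries; the base model ("gpt2") and corpus path ("scraped_corpus.txt") are illustrative placeholders, not anything any particular company actually uses:

```python
# Sketch: fine-tune a small causal LM on a scraped text corpus.
# Model name and file path below are hypothetical placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the scraped corpus as one line-per-example text dataset.
dataset = load_dataset("text", data_files={"train": "scraped_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal LM) training labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```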

In other words, it may make them feel better, but in the long run it’s a meaningless move.

EDIT: What a weird thing to downvote without replying to. I've taken no stance on whether the BBC's decision is morally right or not, just argued that it's unlikely to have any effect. You can dislike that it won't have any effect, but thinking it will is naive.

realharo@lemm.ee 2 points 1 year ago* (last edited 1 year ago)

> It won’t really matter, because there will continue to be other sources.

Other sources that will likely also block the scrapers.

It doesn't matter if only BBC does it. It matters if everyone does it.

What incentive do the news sites have to want to be scraped? With Google, they at least get search traffic. OpenAI offers them absolutely nothing.

vidarh@lemmy.stad.social 1 points 1 year ago

Other sources that are public domain, or "cheap enough" for OpenAI to simply buy. Hence my point that OpenAI is already worth enough that it could make a takeover offer for Thomson Reuters.

utopiah@lemmy.world 2 points 1 year ago

If only the BBC does it then sure, it's pointless. If the BBC does it and you and I consider it, it might change things a bit. If we do and others do too, including large websites, or author guilds start legal actions in the US, then it changes things radically, to the point of rendering OpenAI's LLMs basically useless or practically unusable. IMHO this isn't an action against LLMs in general, e.g. not against researchers from public institutions building datasets and publishing research results, but rather against OpenAI, the for-profit company with an exclusivity deal with the for-profit behemoth Microsoft, which is a champion of entrenchment.

vidarh@lemmy.stad.social 1 points 1 year ago

The thing is, realistically it won't make a difference at all, because there are vast amounts of public domain data that remain untapped. The main "problematic" need OpenAI has is for new content that represents up-to-date language and up-to-date facts, and my point with the share price of Thomson Reuters is to illustrate that OpenAI is already large enough that it can afford to outright buy some of the largest channels of up-to-the-minute content in the world.

As for authors, it might wipe a few works by a few famous authors from the dataset, but those contribute very little to the quality of an LLM, because the LLM can't easily judge quality during training unless you intentionally reinforce specific works. Several million books are published every year. Most of them make <$100 in royalties for their authors (an average book sells ~200 copies). Want to bet how cheap it'd be to buy a fully licensed set of a few million books? You don't need bestsellers; you need many books that are merely good enough to drag the overall quality of the total dataset up.
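To put a rough number on that bet, a back-of-envelope sketch; the per-book fee is a made-up assumption, deliberately set well above the <$100 most books earn in royalties:

```python
# Back-of-envelope: cost of licensing "a few million" books outright.
# Both figures are illustrative assumptions, not real negotiated prices.
books = 3_000_000      # "a few million" books
fee_per_book = 500     # hypothetical flat license fee, far above typical royalties
total = books * fee_per_book
print(f"Total: ${total / 1e9:.1f}bn")  # -> Total: $1.5bn, vs. an $80-90bn market cap
```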

The irony is that the largest beneficiaries of content sources taking a strict view of LLMs will be OpenAI, Google, Meta, and the few others large enough to simply buy datasets, or buy the companies that own them, because this creates a moat against everyone who can't afford licensed datasets.

The biggest problem won't be for OpenAI, but for people trying to build open models on the cheap.
