I don't see how this is "shoved in." Wales identified a situation where Wikipedia's existing non-AI process doesn't work well and then realized that adding AI assistance could improve it.
Adding AI assistance to a review process only ever worsens it, because instead of having to review one thing, the reviewer now has to review two, one of which is almost certainly hallucinated in ways that are hard to pin down. And in exchange the reviewer is paid far less and sees their entire line of work threatened.
I don't see how this fits into the actual case being discussed here.
The situation currently is that a newbie editor whose article is deleted gets a bare "your article was deleted" message. The proposal is to have an AI flesh that out with a "possibly for the following reasons:" explanation. How is that worse?
All that stuff about paying less and threatening the worker class is irrelevant here. This is Wikipedia; its editors and administrators are all unpaid volunteers.
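To make the proposal concrete, here's a minimal sketch of what that step might look like, in Python. Everything here is my own illustration: the `build_deletion_notice` helper, the prompt wording, and the model name are assumptions, not anything Wikipedia has announced; the OpenAI client just stands in for whatever LLM backend would actually be used.

```python
# Hypothetical sketch: turning a bare "your article was deleted" notice
# into an AI-drafted explanation. Function name, prompt, and model
# choice are illustrative assumptions, not Wikipedia's actual design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def build_deletion_notice(article_title: str, deletion_log_reason: str) -> str:
    """Draft a newbie-friendly explanation of why an article was deleted.

    The output is a draft only; per the thread's point, a human reviewer
    would still need to check it for hallucinated policy claims.
    """
    prompt = (
        f"A new Wikipedia editor's article '{article_title}' was deleted. "
        f"The deletion log gives this reason: '{deletion_log_reason}'. "
        "Write a short, friendly note explaining the likely reasons and "
        "what the editor could do differently. Do not invent policies or "
        "facts not implied by the log reason."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any instruction-tuned model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # A7 is a real speedy-deletion criterion (no indication of importance).
    print(build_deletion_notice(
        "Acme Widgets LLC",
        "A7: no credible indication of importance",
    ))
```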
Neither did Wales. Hence, the next part of the article:
It doesn't mean the original process isn't problematic, or can't be helpfully augmented with some kind of LLM-generated supplement. But this is like a poster child for a troublesome AI implementation: where a general-purpose LLM needs an understanding of context it isn't given (but that the reader assumes it has), where hallucinations have knock-on effects, and where even the founder of Wikipedia seemingly missed such errors.
Don't mistake me for being blanket anti-AI; clearly it's a tool Wikipedia can use. But the scope has to be narrow and the problem specific.
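As one illustration of what "narrow scope" could mean in practice: rather than letting a model free-associate about policy, you could constrain it to select from Wikipedia's actual enumerated speedy-deletion criteria, so the text shown to the editor is always vetted policy language. Again a hypothetical sketch of mine, not anything from the article:

```python
# Hypothetical sketch of a narrowly scoped version: the model may only
# select from a fixed list of real speedy-deletion criteria, so a
# hallucinated "policy" can't leak into the notice. The criterion texts
# are abridged; the selection logic is the point.
from openai import OpenAI

# A few real criteria from Wikipedia's speedy-deletion policy (abridged).
DELETION_CRITERIA = {
    "A1": "No meaningful context to identify the subject",
    "A7": "No credible indication of importance (people, companies, etc.)",
    "G11": "Unambiguous advertising or promotion",
    "G12": "Unambiguous copyright infringement",
}

client = OpenAI()


def classify_deletion(article_text: str) -> str:
    """Ask the model to pick the single most likely criterion code.

    Any output outside the allowed codes is rejected, which bounds the
    damage a hallucination can do.
    """
    options = "\n".join(f"{code}: {desc}" for code, desc in DELETION_CRITERIA.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{
            "role": "user",
            "content": (
                "Which one of these deletion criteria most likely applies "
                f"to the article below? Reply with the code only.\n{options}\n\n"
                f"Article:\n{article_text}"
            ),
        }],
    )
    code = response.choices[0].message.content.strip()
    if code not in DELETION_CRITERIA:
        raise ValueError(f"Model returned an unknown criterion: {code!r}")
    # The notice shown to the editor is built from the vetted criterion
    # text, not from free-form model output.
    return f"Your article may have been deleted under {code}: {DELETION_CRITERIA[code]}."
```

The design point is that the model's free-form output never reaches the user; it only picks among known-good options, which is roughly what "narrow scope, specific problem" buys you.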