Google’s Search AI Says Slavery Was Good, Actually
(futurism.com)
There needs to be an information campaign or something... The average person doesn't realize these things say what they think you want to hear. People are buying into the hype and think these things are magic knowledge machines that can tell you secrets you never imagined.
I mean, I get that the people working on LLMs want them to be magic knowledge machines, but it's really putting the cart before the horse to let people assume they already are, and the little warnings at the bottom of the page about possible inaccuracies are inadequate.
A friend once read me this beautiful thing ChatGPT wrote about an idyllic world. The prompt had been something like, “write about a world where all power structures are reversed.”
And while some of the stuff in there made sense, not all of it did. Like, “in schools, students are in charge and give lessons to the teachers” or something like that.
But she was acting like ChatGPT was this wise thing that had delivered a beautiful way for society to work.
I had to explain that, no, ChatGPT simply gave the person who wrote the prompt what they asked for. It’s not a commentary on the value of that answer at all; it’s merely the answer. If you had asked ChatGPT to write about a world where all power structures were doubled, it would give you that instead.
I mean, on the ChatGPT site there's literally a disclaimer along the bottom saying it's able to say things that aren't true...
You seem to have missed the bottom-line disclaimer of the person you're replying to, which is an excellent case-in-point for how ineffective they are.
Unfortunately, people are stupid and don’t pay attention to disclaimers.
And, I might be wrong, but didn’t they only add those recently, after folks started complaining and it started making the news?
I feel like I remember them being there since January of this year, which is when I started playing with ChatGPT, but I could be mistaken.