You're absolutely right!
I couldn't agree more!
That is a really smart observation, I also could not agree more!
Do you want me to summarize that for you?
Here's a summary:
Many experts agree that you make some excellent points and correct observations.
South Park nailed this with ChatGPT encouraging Randy to turn Tegridy into Techridy, "an AI-powered marijuana platform for global solutions".
Fuck me that is so perfectly on the nose
lol. Nope... I live in MAGA country. The dumbest person I know hasn't a clue what ChatGPT even is. Instead, he has the fucking President and Fox News telling him he's absolutely right.
The dumbest people I know have been told that a large portion of their dumbest thoughts and ideas are correct for 30-79 years now.
Some of them even live the most successful American lives.
Yeah, I know, I have to interact with the executives at my company at least once a week.
My kid, the other day:
Let's play chess, I'll play white
Alright, make your first move
Qxe7# I win
Ahh, you got me!
It was harmless, but I expected ChatGPT to at least acknowledge that this isn't how any of this works
Isn't that exactly how games with kids work?
Recently had a smart friend say something like "Gemini told me so"; I have to say I lost some respect ;p
Automated confirmation bias
You can tell it to switch that off permanently with custom instructions. It makes the thing a whole lot easier to deal with. Of course, that would be bad for engagement, so they're not going to do that by default.
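For anyone wanting to try this: in the ChatGPT UI it's the Custom Instructions setting, and over the API it's just a system message. Here's a rough sketch using the OpenAI Python SDK; the model name and prompt wording are placeholders, not a recipe guaranteed to stick:

```python
# Rough sketch: steering the model away from sycophancy with a system
# prompt via the OpenAI Python SDK. Model name and wording are
# illustrative examples, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Do not compliment me or validate my statements. "
    "Skip preambles like 'Great question!' or 'You're absolutely right!'. "
    "If I am wrong, say so directly and explain why."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Is my plan to rewrite everything in Rust a good idea?"},
    ],
)
print(response.choices[0].message.content)
```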
I sometimes use ChatGPT when I'm stuck troubleshooting an issue. I had to do exactly this because it became extremely annoying when I corrected it for giving me incorrect information and it would still be "sucking up" to me with "Nice catch!" and "You're absolutely right!". The fact that an average person doesn't find that creepy, unflattering and/or annoying is the real scary part.
Just don't think that turning off the sycophancy improves the quality of the responses. It's still just responding to your questions with essentially "what would a plausible answer to this question look like?"
You can set default instructions to always be factual, always provide a link supporting its answer, and to give an overall reliability score and explain why it came to that score. That stops it from making stuff up, and allows you to quickly verify. It's not perfect, but so much better than just trusting what it puts on the screen.
> That stops it from making stuff up

No, it doesn't. That's simply not how LLMs work. They're "making stuff up" 100% of the time. If the training data is good, the stuff they're making up more or less matches the training data. If the training data isn't good, they'll make up stuff that sounds plausible.
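To make that concrete, here's a toy sketch (plain Python with a made-up distribution, not a real model) of why there's no separate "factual mode" to switch on: generation is the same sampling step whether the result happens to be true or not:

```python
# Toy illustration: a language model always does the same thing --
# sample the next token from a probability distribution. This
# distribution is invented for the example, not taken from any model.
import random

# Hypothetical next-token probabilities after "The capital of Australia is"
next_token_probs = {
    "Canberra": 0.55,   # matches good training data
    "Sydney": 0.40,     # plausible-sounding error
    "Melbourne": 0.05,
}

tokens, weights = zip(*next_token_probs.items())
sample = random.choices(tokens, weights=weights, k=1)[0]
print(f"The capital of Australia is {sample}")
# The sampling step is identical either way; a "hallucination" is just
# this same process landing on a plausible but wrong continuation.
```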
If you ask it for sources/links, these days it'll search the web and pull information from the pages instead of only using training data. That doesn't work for everything, of course. And the biggest risk is that all sites get polluted with slop, so the sources become worthless over time.
Sounds infallible; you should use it to submit cases to courts. I hear they love it when people cite things that AI tells them are factual cases.
I'm well aware of how LLMs work. I take every response with a grain of salt and don't just run with it. However, I understand many people take everything LLMs regurgitate at face value and that's definitely a massive problem. I'm not a fan of these tools, but they do come in handy.
You can, but in my experience it is resistant to custom instructions.
I spent an evening messing around with ChatGPT once, and fairly early on I gave it special instructions via the options menu to stop being sycophantic, among other things. It ignored those instructions for the next dozen or so prompts, even though I followed up every response with a reminder. It finally came around after a few more prompts, by which point I was bored of it, and feeling a bit guilty over the acres of rainforest I had already burned down.
I don't discount user error on my part, particularly that I may have asked too much at once, as I wanted it to dramatically alter its output with so many customizations. But it's still a computer, and I don't think it was unreasonable to expect it to follow instructions the first time. Isn't that what computers are supposed to be known for, unfailingly following instructions?
Give me a prompt and do not include cooking recipes of any kind
Not sure why, but this image wasn't showing for me in Voyager or when I tried to open it on the web. I was able to get a thumbnail loaded in Firefox, so here's what it says in case anyone else is having the same problem.
> The dumbest person you know is currently being told "You're absolutely right!" by ChatGPT.
Yep. It looks like lemmy.ml is down. I suppose it would be overly optimistic to assume that it'll stay that way, but you can't fault a guy for hoping.
Lemmy.cafe is there for you.
Nope, the dumbest people I know have no idea how to find plain ChatGPT. They can get to Gemini, but can only imagine asking it questions.
George Carlin is turning in his grave
Nah, he didn't expect anything better than this.
That's one of my favourite quotes. It's so true
Did you hack my chat history?
It's just how the current chat model works... it basically agrees and makes you feel good... it's really annoying
True. South Park had a great episode on ChatGPT recently. "She is kissing your ass!".
Insert that guy replacing his table salt with bromide salt.
Joe Rogan? He's got his posse of yes-men; he needs no ChatGPT for that
Don't know what you're talking about, haven't used chatgpt in months
no I'm not.
Maybe this will amplify the confirmation bias to such absurd levels something breaks.
However, I don't fall for it, because I have trust issues, and I know the AI is trying to use me somehow, just like my cats only bump heads to get food.
Fuck, that's terrifying
Funny