If you paste plaintext passwords into ChatGPT, the problem is not ChatGPT; the problem is you.
Well tbf ChatGPT also shouldn't remember and then leak those passwords lol.
Did you read the article? It didn't. Someone received someone else's chat history appended to one of their own chats. No prompting, just appeared overnight.
Well, that's even worse.
........ That shouldn't be happening, regardless of chat content
Well, yeah, but the point is, ChatGPT didn't "remember and then leak" anything, the web service exposed people's chat history.
How? How should it be implemented? It's just an LLM. It has no true intelligence.
If it's not trained on user data it cannot leak it
ChatGPT doesn't leak passwords. Chat history is leaking, and one of those histories happens to contain a plaintext password. What's up with the current trend of saying AI did this and that when the AI really didn't?
People are far too willing to believe AI can do anything. How would the AI even have the passwords?
gots to get dem clicks
Fear mongering. Remember all the people raging and freaking out about Disney's "AI generated background actors"? Just plain bad CG.
FUD for clicks
AT headlines aren't usually so clickbait-y, but capitalism grows like weeds. With every last news article, we've all GOT to ask: who does this serve? Who paid for this irresponsible headline to be run? Whose income is it meant to harm?
Every newsroom boss, like every judge, needs to pay for healthcare (at best) or, at worst, for whatever will give them access to some billionaire's climate survival bunker. This IS late stage surveillance capitalism. Every decision now is based on that.
That's funny, all I see is ********
you can go hunter2 my hunter2-ing hunter2.
haha, does that look funny to you?
I put on my robe and wizard hat.
Back in the RuneScape days people would do dumb password scams. My buddy was introducing me to the game. We were sitting in his parents' garage and he was playing and showing me his high lvl guy. Anyway, he walks around the trading area and someone says something like “omg you can’t type your password backwards *****”. In total disbelief he tries it out. Instantly freaks out, logs out to reset his password, and fails due to the password already having been changed.
That's golden. With all my hatred towards scammers, there's a little niche for scams that make people feel smart before undressing them that I can't bring myself to judge.
So what actually happened seems to be this.
- a user was exposed to another user's conversation.
that's a big oof and really shouldn't happen
- the conversations that were exposed contained sensitive user information
irresponsible user error; everyone and their mom should know better by now
Why is it that whenever a corporation loses or otherwise leaks sensitive user data that was their responsibility to keep private, all of Lemmy comes out to comment about how it's the users who are idiots?
Except it's never just about that. Every comment has to make it known that they would never allow that to happen to them because they're super smart. It's honestly one of the most self-righteous, tone deaf takes I see on here.
I don't support calling people idiots, but here's the thing: we can't control whether corporations leak our data or not, but we can control whether we share our password with ChatGPT or not.
Because that's what the last several reported "breaches" have been. There have been a lot of accounts compromised in unrelated breaches because the users reused the same passwords across multiple accounts.
In this case, ChatGPT clearly tells you not to give it any sensitive information, so giving it sensitive information is on the user.
They weren't there when I used ChatGPT just last night (I'm a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren't from me (and I don't think they're from the same user either).
This sounds more like a huge fuckup with the site, not the AI itself.
Edit: A depressing amount of people commenting here obviously didn't read the article...
Every time
It also literally says to not input sensitive data...
This is one of the first things I flagged regarding LLMs, and later on they added the warning. But if people don't care and are still gonna feed the machine everything regardless, then that's a human problem.
Hello can you help me, my password is such and such and I can't seem to login.
People literally do this though. I work in IT, and people have literally said this exact thing out loud, with other people around who can clearly hear what we're saying.
I'm like.... I don't want your password. I never want your password. I barely know what my password is. I use a password manager.
IT should never need your password. Your boss and work shouldn't need it. I can log in as you without it most of the time. I don't, because I couldn't give any less of a fuck what the hell you're doing, but I can if I need to....
If your IT person knows what they're doing, most of the time for routine stuff, you shouldn't really see them working, things just get fixed.
Gah.
And Google is bringing AI to private text messages. It will read all of your previous messages. On iOS? Better hope nothing important was said to anyone with an Android phone (not that I trust Apple either).
The implications are terrifying. Nudes, private conversations, passwords, identifying information like your home address, etc. There are a lot of scary scenarios. I also predict that Bard becomes a closet racist real fast.
We need strict data privacy laws with teeth. Otherwise corporations will just keep rolling out poorly tested, unsecured, software without a second thought.
AI can do some cool stuff, but the leaks, misinformation, fraud, etc., scare the shit out of me. With a Congress aged ~60 years old on average, I'm not counting on them to regulate or even understand any of this.
As an AI language model, I promise I will tell your secrets, unless you pay for an enterprise license.
Not directly related, but you can disable chat history per-device in ChatGPT settings - that will also stop OpenAI from training on your inputs, at least that's what they say.
How does it get the password to begin with?
Shit in, shit out!
Who knew everyone had the same password as me? I always thought I was the only 'hunter2' out there!
Use local and open source models if you care about privacy.
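For example, here's a minimal sketch of running an open-weights model entirely on your own machine with the Hugging Face transformers library (the model name below is just an example; use whichever open model you trust). Once the weights are downloaded, nothing you type leaves your computer:

```python
# Minimal local-inference sketch: the model runs on your own hardware,
# so prompts are never sent to a remote service.
from transformers import pipeline

# Example model only; swap in any open chat model you trust.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompt = "Why should I never paste real credentials into a chatbot?"
result = generator(prompt, max_new_tokens=80, do_sample=False)

print(result[0]["generated_text"])
```

(The first run still downloads the weights from the Hugging Face hub; after that, inference works offline.)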
I think people who use local and open source models would probably already know not to feed passwords to ChatGPT.
12345? That is what an idiot would use for the password to his luggage!
Why the fuck would you give any AI your password???? People are so goddamn stupid