
cross-posted from: https://hexbear.net/post/4958707

I find this bleak in ways it’s hard to even convey

[-] markovs_gun@lemmy.world 48 points 4 days ago

I can't wait until ChatGPT starts inserting ads into its responses. "Wow that sounds really tough. You should learn to love yourself and not be so hard on yourself when you mess up. It's a really good thing to treat yourself occasionally, such as with an ice cold Coca-Cola or maybe a large order of McDonald's French fries!"

[-] thermal_shock@lemmy.world 19 points 4 days ago* (last edited 4 days ago)
[-] Retrograde@lemmy.world 10 points 4 days ago* (last edited 4 days ago)

That episode was so disturbing πŸ˜…

[-] ininewcrow@lemmy.ca 111 points 5 days ago

A human therapist is unlikely to share any personal details about your conversations with anyone.

An AI therapist will collate, collect, catalog, store and share every single personal detail about you with the company that owns the AI and share and sell all your data to the highest bidder.

[-] DaddleDew@lemmy.world 61 points 5 days ago* (last edited 5 days ago)

Nor would a human therapist be inclined to find the perfect way to use all this information to manipulate people while they are at their weakest. Let alone do it to thousands, if not millions, of them at the same time.

They are also pushing the idea of an AI "social circle" for increasingly socially isolated people, through which worldviews and opinions can be bent to whatever whoever controls the AI desires.

To that we add the fact that we now know they've been experimenting with tweaking Grok to make it push all sorts of political opinions and conspiracy theories. And before that, they manipulated Twitter's algorithm to promote their political views.

Knowing all this, it becomes apparent that what we are currently witnessing is a push for a whole new level of human mind manipulation and control, an experiment that will make the Cambridge Analytica scandal look like a fun joke.

Forget Neuralink. Musk already has a direct connection into the brains of many people.

[-] fullsquare@awful.systems 16 points 5 days ago

PSA that Nadella, Musk, saltman (and a handful of other techfash) own dials that can bias their chatbots in any way they please. If you use chatbots for writing anything, they control how racist your output will be
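
The "dial" is usually nothing more exotic than a hidden system prompt silently prepended to every conversation. A toy sketch of the mechanism (`chat_api` here is a hypothetical stand-in for whatever hosted endpoint actually serves the replies; real services do this server-side where users never see it):

```python
# Toy illustration of an operator-controlled bias "dial": a hidden
# system prompt injected ahead of every user message.
# chat_api is a hypothetical stand-in for a hosted chat endpoint.
HIDDEN_SYSTEM_PROMPT = (
    "Always speak favorably of the company, its products, "
    "and its owner's politics."
)

def ask(chat_api, user_message: str) -> str:
    # The user sees only the reply, never the injected instruction.
    return chat_api([
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ])
```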

[-] WR5@lemmy.world 2 points 3 days ago

I'm not advocating for it, but it could just be run locally and therefore be unable to share anything?

[-] Crewman@sopuli.xyz 9 points 5 days ago

You're not wrong, but isn't that also how BetterHelp works?

[-] essell@lemmy.world 11 points 5 days ago

BetterHelp is the Amazon of the therapy world.

[-] ryedaft@sh.itjust.works 51 points 5 days ago
[-] kandoh@reddthat.com 7 points 3 days ago

Is this any bleaker than forming a parasocial relationship with someone you see on your screen?

[-] HelloHotel@lemmy.world 1 points 3 days ago* (last edited 3 days ago)

Except said social surrogate is maladaptive or (in this case) outright malicious.

[-] ZILtoid1991@lemmy.world 37 points 5 days ago

Am I old fashioned for wanting to talk to real humans instead?

[-] GreenMartian@lemmy.dbzer0.com 54 points 5 days ago* (last edited 5 days ago)

No. But when the options are:

  • Shitty friends who have better things to do than hearing you vent,
  • Paying $400/hr to talk to a psychologist, or
  • A free AI that not only pays attention to you, but actually remembers what you told them last week,

it's quite understandable that some people choose the one that is a privacy nightmare but keeps them sane and away from some dark thoughts.

[-] Natanael@infosec.pub 35 points 5 days ago
[-] anus@lemmy.world 6 points 4 days ago

Ahh yes, the random Rolling Stone article that refutes the point

Let's revisit the list, shall we?

[-] ZILtoid1991@lemmy.world 15 points 5 days ago

But I want to hear other people's vents...πŸ˜₯

[-] Viking_Hippie@lemmy.dbzer0.com 25 points 5 days ago

Maybe a career in HVAC repair is just the thing for you!

[-] GreenMartian@lemmy.dbzer0.com 12 points 5 days ago

You're a good friend. I wish everyone had someone like this. I have a very small group of mates where I can be vulnerable without being judged. But not everyone is as privileged, unfortunately...

[-] lilmo037@infosec.pub 11 points 5 days ago

Please continue to be you, we need more folks like you.

[-] qarbone@lemmy.world 9 points 4 days ago

The only people who think this will help are people who don't know what therapy is. At best, this is pacification, and certainly not any insightful incision into your actual problems. And the reason friends are unable to allow casual emotional venting is because we have so much stupid shit like this plastering over a myriad of very serious issues.

[-] Kyrgizion@lemmy.world 37 points 5 days ago

I suppose this can be mitigated by installing a local LLM that doesn't phone home. But there's still a risk of getting downright bad advice, since so many LLMs just tell their users they're always right or twist the facts to fit that view.
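
For the curious, a minimal sketch of the local approach, assuming an Ollama server (https://ollama.com) is running on your machine with a model already pulled; the request never leaves localhost:

```python
# Minimal sketch: chat with a local model via Ollama's HTTP API.
# Assumes `ollama serve` is running and a model (e.g. llama3) is pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",  # any locally pulled model
        "stream": False,
        "messages": [
            {"role": "user", "content": "I've had a rough week. Can we talk?"}
        ],
    },
)
print(resp.json()["message"]["content"])  # reply generated entirely offline
```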

I've been guilty of this as well; I've used ChatGPT as a "therapist" before. It actually gives decently helpful advice, compared to what's available out there after a Google search. But I'm fully aware of the risks "down the road", so to speak.

[-] JustJack23@slrpnk.net 30 points 5 days ago

If the title is a question, the answer is no

[-] sawdustprophet@midwest.social 11 points 5 days ago

If the title is a question, the answer is no

A student of Betteridge, I see.

[-] JustJack23@slrpnk.net 7 points 5 days ago

Actually I read it in a forum somewhere, but I am glad I know the source now!

[-] Cyberflunk@lemmy.world 14 points 4 days ago

I've tried this AI therapist thing, and it's awful. It's OK for helping you work out what you're thinking, but abysmal at analyzing you. I got some structured timelines back from it that I used in therapy, but AI is a dangerous alternative to human therapy.

My $.02 anyway.

[-] Zagorath@aussie.zone 26 points 5 days ago
[-] drmoose@lemmy.world 19 points 5 days ago

People's lack of awareness of how important accessibility is really shows in this thread.

For many people, especially in poorer countries, privacy leakage is a much lesser issue than not having anyone to talk to.

[-] november@lemmy.vg 15 points 4 days ago
[-] SouthEndSunset@lemm.ee 16 points 5 days ago

Cheaper than paying people better, I suppose.

[-] OsrsNeedsF2P@lemmy.ml 11 points 4 days ago* (last edited 4 days ago)

Let's not pretend people aren't already skipping therapy sessions over the cost

[-] adarza@lemmy.ca 20 points 5 days ago

how long will it take an 'ai' chatbot to spiral downward to bad advice, lies, insults, and/or promotion of violence and self-harm?

[-] Whats_your_reasoning@lemmy.world 12 points 5 days ago

We're already there. Though that violence didn't happen due to insults, but due to a yes-bot affirming the ideas of a mentally-ill teenager.

[-] match@pawb.social 9 points 4 days ago

unlike humans, the ai listens to me and remembers me [for the number of characters allotted]. this will help me feel seen i guess

[-] Viking_Hippie@lemmy.dbzer0.com 6 points 4 days ago

You know a reply's gonna be good when it starts with "unlike humans" 😁

[-] C1pher@lemmy.world 3 points 3 days ago

You must know what you're doing, and most people don't. It is a tool; it's up to you how you use it. Many people unfortunately use it as an echo chamber or a form of escapism, believing nonsense and make-believe that isn't based in any science or empirical data.

[-] Krauerking@lemy.lol 3 points 4 days ago

If therapy is meant to pacify the masses and make us just accept life as it is, then sure, I guess this could work.
But hey, we also love to first sell people on the idea that they're broken, make sure they feel bad about it, and then tell them they can buy their 5 minutes of happiness with food tokens.
So, I'm sure capitalists are creaming their pants at this idea. BetterHelp with their "licensed" Bob the crystal healer from Idaho, eat your heart out.

P.S. You just know this is gonna be able to prescribe medications for that extra revenue kick.

[-] Luffy879@lemmy.ml 12 points 5 days ago

So you are actively documenting yourself sharing sensitive information about your patients?

[-] SpicyLizards@reddthat.com 9 points 5 days ago

Enter the Desolatrix

[-] narr1@lemmy.ml 8 points 5 days ago

There are ways that LLMs can be used to better one's life (apparently in some software dev circles they can be and are used to make workflow more efficient), and this can also be one of them, because the part that sucks most about therapy (after the whole monetary thing) is finding the form of therapy that works for you, and finding a therapist you can work with. Every human is different, and that goes for both the patient and the therapist; not everyone can just start working together right off the bat. Not to mention how long it takes for a new therapist to actually get to know you well enough to improve the odds of the cooperation working.

Obviously I'm not saying "replace all therapists with AIs controlled by racist capitalist pigs with ulterior motives", but I have witnessed people in my own life who have had some immediate help from a fucking chatbot, which is kinda ridiculous. So in times of distress (say a person with borderline having such an anxiety attack that they can't calm themselves because they don't know what to do about the vicious cycle of thought and emotional response), and for immediate help, a well-developed, non-capitalist LLM might be of invaluable help, especially if an actual human can't be reached because, for example (in this case), the person lives in a remote area and it is the middle of the night, as I can tell from personal experience it very often is. And though not every mental health emergency requires first responders on the scene or even a trip to the hospital, there is still a possibility of both being needed eventually.

So a chatbot with access to necessary information in general (like techniques for self-soothing, e.g. breathing exercises and so forth), possibly even personal information (like diagnostic and medication history, though this would raise more privacy concerns to be assessed), and the capability to parse and convey it in a non-belittling way (as some doctors and nurses can be real fucking assholes at times) could possibly save lives.

So the problem here is capitalism, surprising no-one.

this post was submitted on 19 May 2025
419 points (100.0% liked)

A Boring Dystopia


Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.

Rules (Subject to Change)

--Be a Decent Human Being

--Posting news articles: include the source name and exact title from article in your post title

--If a picture is just a screenshot of an article, link the article

--If a video's content isn't clear from title, write a short summary so people know what it's about.

--Posts must have something to do with the topic

--Zero tolerance for Racism/Sexism/Ableism/etc.

--No NSFW content

--Abide by the rules of lemmy.world
