That's not haywire. We already know AI makes stuff up and gets stuff wrong all the time. Putting it in an important position doesn't make it any less likely to make mistakes - this was inevitable.
LLMs should never be used for therapy.
That's what you get for trusting an AI more than a professional.
It's prohibitively expensive to get proper therapy, and that's if your therapist has an opening in the next six months.
So it is better to use an AI therapist that suggests suicide?
If phrased like that, obviously not, but that's not how those things are marketed. The average person might just stumble upon AI "therapy" while googling normal therapy options, and with the way the media just repeats AI bro talking points, that wouldn't necessarily raise red flags for most "normal" people. Blame the scammer, not the scammed.
Dr. Sbaitso never asked me to commit atrocities.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"