Chatbots Make Terrible Doctors, New Study Finds
(www.404media.co)
It is insane to me how anyone can trust LLMs when their information is incorrect 90% of the time.
I don't think it's their information per se, so much as how the LLMs tend to use said information.
LLMs are generally tuned to be expressive and lively. Part of that involves "random" (i.e., roll-the-dice) output sampled from the model's predictions over its inputs plus training data. (I'm skipping over the technical details for simplicity.)
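To make the "roll the dice" part concrete, here's a minimal sketch of temperature sampling, the common knob behind that lively randomness. The logits values and function name are made up for illustration; real models produce logits over tens of thousands of tokens, but the mechanism is the same.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw model scores (logits).

    Higher temperature flattens the distribution (more varied, "dicier"
    output); temperature near 0 approaches greedy argmax (deterministic).
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]  # softmax probabilities
    # Sample an index proportionally to its probability.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At a very low temperature the highest-scoring token wins almost every time; chat-tuned models typically run warmer than that, which is exactly the expressive-but-sometimes-wrong trade-off described above.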
That's what the masses have shown they want: friendly, confident-sounding chatbots that give plausible answers that are mostly right, sometimes.
But for certain domains (like med) that shit gets people killed.
TL;DR: they're made for chitchat engagement, not high fidelity expert systems. You have to pay $$$$ to access those.