I think average consumers are easily hitting that with current models... in part because the collapse of search engine functionality and of basic computer skills means the tech gets used for extremely basic, common requests, ones common enough that the answer shows up in the training data a thousand times over.
Like, it might get 80% of answers correct because 85% of the questions it gets asked nowadays could have been answered by whatever the top Google result was six years ago, and that's already in the training data. Think "why is the sky blue?"
It is only "super users" that routinely ask it for rare or complex information synthesis (y'know, the key selling point of an LLM as an info source over a search engine) that force it up against the wall of "make shit up" more than 20% of the time.