They do not think or feel or experience. They are math.
Oh, I think you misunderstand what hallucinations mean in this context.
AIs (LLMs) are trained on a very, very large dataset. That's what LLM stands for: Large Language Model.
Despite how large the training data is, you can ask it things outside the training set and it will answer just as confidently as it does about things inside its dataset.
Since those answers didn't come from anywhere in the training data, they're considered hallucinations.
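If you want to see the idea in miniature, here's a toy sketch in plain Python (a made-up two-word "language model", nothing like a real LLM; everything in it is just for illustration). The sampler always returns some next word, whether or not it ever saw the context in training:

    import random
    from collections import Counter, defaultdict

    # Toy bigram "language model": count which word follows which
    # in a tiny training corpus, then sample the next word.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    vocab = sorted(set(corpus))

    def next_word(prev, alpha=0.1):
        # Smoothing gives every word a nonzero probability, so the model
        # happily "answers" even for a context it has never seen.
        c = counts[prev]
        weights = [c[w] + alpha for w in vocab]
        return random.choices(vocab, weights=weights)[0]

    print(next_word("cat"))    # seen in training: almost always "sat"
    print(next_word("zebra"))  # never seen in training: still answers anyway

Roughly the same thing happens, at enormous scale, inside a real LLM: the math always produces a next-token distribution, and nothing in it flags "I never learned this."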
Yes they do.