Hallucination
Hallucination occurs when an AI model produces output that sounds plausible but is factually wrong or fabricated, often because the model has no ground truth to check itself against.
In Simple Terms
Think of it as a very confident person who sometimes fills in gaps with plausible-sounding guesses.
Detailed Explanation
Language models generate text by predicting the next token from statistical patterns in their training data; they optimize for plausibility, not truth. As a result, they can invent names, dates, citations, and even entire sources. Techniques such as retrieval-augmented generation (RAG), grounding on trusted documents, and careful prompting reduce hallucination but do not eliminate it.

When it matters: in legal, medical, or financial contexts, where an invented fact can have real consequences.

Common mistakes: trusting long outputs without verification, or assuming that newer models never hallucinate.
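As a minimal sketch of the grounding idea, the snippet below assembles a prompt that restricts the model to retrieved passages and gives it an explicit way to decline. The function names and the prompt wording are illustrative assumptions, and call_model is a hypothetical placeholder for whatever LLM client you actually use; this is one common pattern, not a complete RAG pipeline (it omits retrieval itself).

```python
# Minimal sketch: ground a question in retrieved passages so the model
# answers from supplied text instead of guessing. Names here are
# illustrative, not a specific library's API.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that instructs the model to answer only from
    the numbered sources, and to say so when they don't contain the answer."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources by number. If the sources do not contain the answer, "
        "reply exactly: 'Not found in sources.'\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def call_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in your actual LLM client call here.
    raise NotImplementedError

if __name__ == "__main__":
    passages = ["The Eiffel Tower opened in 1889 for the World's Fair in Paris."]
    print(build_grounded_prompt("When did the Eiffel Tower open?", passages))
```

The explicit refusal instruction is the key design choice: giving the model a sanctioned way to say "I don't know" reduces the pressure to fill gaps with plausible-sounding guesses, though it does not guarantee the model will follow it.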
Related Terms
Artificial Intelligence
The simulation of human intelligence processes by machines, especially computer systems.
Machine Learning
A subset of AI that enables systems to learn and improve from experience without being explicitly programmed.
Neural Network
A neural network is a computing model inspired by biological neurons: layers of connected nodes that process inputs with learned weights and nonlinear functions. They are the building blocks of modern deep learning.