/huh-LOO-sih-NAY-shun/
When an AI generates information that sounds plausible but is factually incorrect — invented citations, made-up statistics, or confidently wrong claims.
AI hallucination occurs when a language model generates text that sounds authoritative and plausible but is fabricated. The model doesn't 'know' it's lying; it's producing the most statistically likely next tokens, and sometimes those tokens form convincing falsehoods. Some studies estimate that chatbots hallucinate in roughly 27% of responses, with factual errors appearing in as many as 46% of generated texts.
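To make the mechanism concrete, here is a toy sketch in Python with invented probabilities; it is not a real model, just an illustration of greedy next-token selection, where the most likely continuation wins regardless of whether it's true.

```python
# Toy illustration with made-up probabilities, not a real model.
# Imagine the model is completing: "The capital of Australia is ..."
next_token_probs = {
    "Sydney": 0.52,    # plausible-sounding but wrong
    "Canberra": 0.44,  # correct, but slightly less likely here
    "Melbourne": 0.04,
}

# Greedy decoding: emit the highest-probability token. Nothing in this
# step checks facts, so a fluent falsehood wins whenever it happens to
# be the statistically likelier continuation.
best = max(next_token_probs, key=next_token_probs.get)
print(best)  # "Sydney", stated with full confidence
```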
Hallucinations are especially dangerous because they're delivered with confidence. The AI won't hedge or say 'I'm not sure'; it will cite a paper that doesn't exist, complete with a DOI that looks real. This is why the word matters: when you can name the failure mode, you can guard against it.
The antidotes are chain-of-thought prompting, which forces the AI to show its reasoning; retrieval-augmented generation, which grounds answers in real documents; and explicit instructions like 'If you're not sure, say so.'
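As a rough sketch of how those antidotes combine in practice, here is a hypothetical Python helper. The function llm_complete is a stand-in for whatever model API you use, and the prompt wording is illustrative, not canonical.

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for a real model call (API client, local model, etc.)."""
    raise NotImplementedError  # hypothetical; wire up your own client


def ask_grounded(question: str, retrieved_docs: list[str]) -> str:
    """Combine the three antidotes in a single prompt."""
    # Retrieval-augmented generation: paste real source text into the
    # prompt so the answer is grounded in documents, not free recall.
    context = "\n\n".join(retrieved_docs)
    prompt = (
        "Answer using ONLY the sources below.\n"                    # grounding
        "Reason step by step and cite a source for each claim.\n"   # chain of thought
        "If the sources don't contain the answer, say 'I'm not sure.'\n\n"  # explicit out
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```

The key design choice is giving the model an explicit, acceptable way to fail; without one, the statistically likely move is to answer anyway.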
When reviewing AI output for accuracy, building guardrails into AI systems, or explaining why AI isn't always trustworthy.
You can't trust AI output you can't verify. Hallucination awareness is the line between AI-assisted and AI-misled.
The AI is hallucinating: confidently describing things that aren't there, much like a feverish person seeing visions.