Prompt Craft · Foundation · Beginner

hallucination

/huh-LOO-sih-NAY-shun/

When an AI generates information that sounds plausible but is factually incorrect — invented citations, made-up statistics, or confidently wrong claims.


AI hallucination is when a language model generates text that sounds authoritative and plausible but is fabricated. The model doesn't 'know' it's lying — it's producing the most statistically likely next tokens, and sometimes those tokens form convincing falsehoods. Some studies estimate that chatbots hallucinate in roughly 27% of responses, and that 46% of responses contain at least one factual error.

Hallucinations are especially dangerous because they're confident. The AI won't hedge or say 'I'm not sure' — it'll cite a paper that doesn't exist with a DOI that looks real. This is why the word matters: when you can name the failure mode, you can guard against it.

The antidotes: chain-of-thought prompting (forces the model to show its reasoning), retrieval-augmented generation (grounds answers in real documents), and explicit instructions like 'If you're not sure, say so.'
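The retrieval-augmented approach can be sketched as simple prompt assembly — a minimal, hypothetical helper (the model call itself is out of scope) that grounds the question in supplied documents and tells the model to admit uncertainty rather than guess:

```python
def build_grounded_prompt(question, documents):
    """Assemble a retrieval-style prompt: answer only from the supplied
    sources, cite them, and say 'I don't know' when they don't cover it.
    (Hypothetical helper for illustration, not a specific library's API.)"""
    sources = "\n\n".join(
        f"[Source {i}]\n{doc}" for i, doc in enumerate(documents, start=1)
    )
    return (
        "Answer using ONLY the sources below. "
        "Cite the source number for each claim. "
        "If the sources do not contain the answer, reply 'I don't know.'\n\n"
        f"{sources}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "When was the library first released?",
    ["The library was first released in 2019.",
     "Version 2.0 added async support."],
)
print(prompt)
```

The anti-hallucination work here is entirely in the instructions: restricting the model to the provided sources and giving it an explicit escape hatch ('I don't know') removes the pressure to invent an answer.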

When to Use It

When reviewing AI output for accuracy, building guardrails into AI systems, or explaining why AI isn't always trustworthy.

Try This Prompt

Flag anything you're less than 90% confident about. I'd rather have gaps than hallucinations.

Why It Matters

You can't trust AI output you can't verify. Hallucination awareness is the line between AI-assisted and AI-misled.

Memory Trick

The AI is hallucinating — seeing things that aren't there, just like a person with a fever.

Example Prompts

Review this AI-generated content for hallucinations — verify every claim and citation
If you don't have enough information to answer accurately, say 'I don't know' rather than guessing
Ground your response in the documents I provided. Don't add information that isn't in the source material.
List your confidence level (high/medium/low) next to each claim
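The last prompt pairs naturally with a post-processing step: if the model tags each claim with a confidence level, you can route anything below 'high' to human review. A minimal sketch, assuming the model followed the tagging format '(confidence: high/medium/low)' at the end of each line:

```python
import re

def flag_low_confidence(answer):
    """Return the lines of a model answer whose confidence tag is below
    'high', so a human can verify those claims first. Assumes each claim
    ends with a tag like '(confidence: low)' as the prompt requested."""
    flagged = []
    for line in answer.splitlines():
        match = re.search(r"\(confidence:\s*(high|medium|low)\)\s*$", line, re.I)
        if match and match.group(1).lower() != "high":
            flagged.append(line)
    return flagged

answer = (
    "The paper was published in 2021 (confidence: low)\n"
    "The method uses gradient descent (confidence: high)"
)
print(flag_low_confidence(answer))
```

This doesn't verify anything by itself — it just turns the model's self-reported uncertainty into a review queue, which is the practical point of asking for confidence tags in the first place.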

Common Misuses

  • Calling any AI mistake a 'hallucination' — if the AI misunderstands your prompt, that's a prompt problem, not hallucination
  • Using 'hallucination' to dismiss AI entirely — it's a manageable risk, not a fatal flaw
  • Thinking more data eliminates hallucination — even well-trained models hallucinate
