AlignTrust
AI Fundamentals

AI Hallucination

When an AI model generates confident, plausible-sounding output that is factually incorrect, fabricated, or not grounded in reality.

What Is an AI Hallucination?

An AI hallucination occurs when a language model generates output that is factually incorrect, invented, or not grounded in the data it was trained on — while presenting it with the same confidence as accurate information. The term comes from the psychological concept of perceiving something that isn't there.

LLMs don't retrieve facts from a database; they predict what text should come next based on patterns in training data. When faced with a question outside their knowledge, they may generate a plausible-sounding but false answer rather than saying "I don't know."
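The prediction mechanism above can be caricatured with a toy lookup table. Everything here is invented for illustration — real LLMs use neural networks over huge vocabularies, not hand-written dictionaries — but the key property carries over: the model always emits the most plausible-looking continuation and has no built-in way to abstain.

```python
# Toy sketch of next-token prediction (illustrative only).
# All prompts, continuations, and probabilities are made up.
completions = {
    "The capital of France is": {"Paris": 0.95, "Lyon": 0.05},
    # For a fictional country the model has still learned that
    # "The capital of X is" tends to be followed by a city-like name,
    # so it confidently produces one anyway:
    "The capital of Wakanda is": {"Birnin Zana": 0.6, "Zamunda City": 0.4},
}

def complete(prompt):
    probs = completions[prompt]
    # Pick the highest-probability continuation. Note there is no
    # "I don't know" entry -- the model cannot signal ignorance,
    # only rank candidate tokens by plausibility.
    return max(probs, key=probs.get)
```

Asked about the real country, the sketch returns the right answer; asked about the fictional one, it returns an equally confident fabrication. The two outputs are indistinguishable from the model's point of view.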

Examples of Hallucinations

  • Citing academic papers, legal cases, or news articles that don't exist
  • Inventing biographical details about real people
  • Generating code that looks correct but contains subtle errors
  • Fabricating statistics, dates, or technical specifications
  • Making up names of products, laws, or organisations

In 2023, two US lawyers submitted a court brief containing AI-generated citations to cases that did not exist — a real-world consequence of hallucination.

Why Hallucinations Happen

LLMs learn to generate statistically plausible text, not to retrieve verified facts. Without access to a ground-truth knowledge base, the model fills gaps with confident-sounding content. Hallucinations are more common:

  • On topics underrepresented in training data
  • When asked for specific details (dates, numbers, names)
  • When prompted to generate authoritative-sounding content

The Security and Business Risks

Incorrect decisions: Hallucinated information in business reports, legal documents, or technical analyses can lead to poor decisions.

Fabricated code: AI-generated code that appears functional but contains bugs, security vulnerabilities, or incorrect logic.

Compliance exposure: Inaccurate AI-generated regulatory or legal guidance followed without expert review.

Reputational damage: Incorrect AI-generated content published externally.

Reducing Hallucination Risk

  1. Verify facts: Check AI-generated claims against authoritative sources before relying on them
  2. Use retrieval-augmented generation (RAG): Ground AI responses in verified documents
  3. Request citations: Ask the AI to cite sources, then verify those sources exist
  4. Implement human review: For high-stakes content, require expert review before use
  5. Use domain-specific fine-tuned models: Models adapted to a specific domain hallucinate less often within it
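Step 2, retrieval-augmented generation, can be sketched in a few lines. The document store, keyword retriever, and prompt template below are illustrative placeholders — a production system would use embedding-based search over a vector database and pass the prompt to a real model API — but the structure is the same: retrieve verified text first, then instruct the model to answer only from it.

```python
# Hedged RAG sketch. The policy snippets are invented examples of a
# "verified document" store; real systems index far larger corpora.
VERIFIED_DOCS = [
    "Policy 12.3: Refunds must be issued within 14 days of a request.",
    "Policy 12.4: Refunds over $500 require manager approval.",
]

def retrieve(question, docs, top_k=2):
    """Naive keyword-overlap retrieval (real systems use embedding
    similarity). Returns the top_k documents sharing words with the
    question, best match first."""
    scored = [
        (sum(word in doc.lower() for word in question.lower().split()), doc)
        for doc in docs
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(question, docs):
    """Assemble a prompt that restricts the model to retrieved context,
    with an explicit instruction to abstain rather than guess."""
    context = "\n".join(retrieve(question, docs))
    return (
        "Answer using only the context below. If the answer is not in "
        f"the context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )
```

Grounding narrows the model's job from "recall a fact" to "read a passage", which is far less prone to fabrication — though the abstention instruction still needs testing, since models do not always obey it.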