
Hallucination

In one line: When an AI confidently states something false. The biggest reliability issue with current LLMs.

Hallucination is when an LLM produces output that is confidently stated but factually wrong: citing a paper that doesn't exist, inventing a function that isn't in a programming library, or fabricating a historical fact.

Why it happens: LLMs are trained to predict plausible next tokens, not to retrieve true facts. They have no internal knowledge database to cross-check against; they pattern-match from their training data.
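The pattern-matching point can be made concrete with a toy next-word predictor. This is a deliberately tiny stand-in (a bigram counter, nothing like a real LLM's architecture), but it shows the core failure mode: the most *plausible* continuation wins, and nothing checks whether it is *true*.

```python
# Toy illustration (not a real LLM): a "model" that emits the most
# frequent next word seen in its training text. It optimizes for
# plausibility; there is no fact-checking step anywhere.
from collections import Counter, defaultdict

training_text = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris . "
)

# Build a bigram table: which word tends to follow which.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most plausible continuation.
    return follows[word].most_common(1)[0][0]

# Ask it to complete "the capital of atlantis is ..." -- it confidently
# continues the familiar pattern even though Atlantis has no capital.
print("the capital of atlantis is", predict_next("is"))
```

The model answers fluently because "is" was usually followed by "paris" in training, which is exactly the hallucination mechanism in miniature: fluency without grounding.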

Mitigations:

  • Use Perplexity or other web-search-grounded models for factual queries.
  • Use RAG (retrieval-augmented generation) to ground answers in your own documents.
  • Always verify citations and key facts before relying on them.
  • Prefer newer models (such as Claude Sonnet 4), which tend to hallucinate less than older ones.
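
The RAG idea above can be sketched in a few lines. Everything here is illustrative: the keyword retrieval, the stopword list, and the refusal string are stand-ins for real components (production systems use embedding similarity and an LLM to synthesize the final answer).

```python
# Minimal RAG-style sketch: answer only from retrieved documents and
# refuse when nothing relevant is found.
import string

docs = [
    "Paris is the capital of France.",
    "Madrid is the capital of Spain.",
]

# Tiny stopword list tailored to this toy example.
STOPWORDS = {"what", "is", "the", "of", "a", "capital"}

def tokens(text):
    # Lowercase, strip punctuation, drop stopwords.
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split()) - STOPWORDS

def retrieve(question, documents):
    # Naive retrieval: score each document by content-word overlap.
    q = tokens(question)
    score, doc = max((len(q & tokens(d)), d) for d in documents)
    return doc if score > 0 else None

def grounded_answer(question):
    source = retrieve(question, docs)
    if source is None:
        return "I don't know."  # refusing beats hallucinating
    return source  # grounded in an actual document

print(grounded_answer("What is the capital of France?"))
print(grounded_answer("What is the capital of Atlantis?"))
```

The refusal branch is the key design choice: when retrieval finds nothing relevant, a grounded system says "I don't know" instead of improvising an answer.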

See it in action — ask any AI about hallucination on AskAI.free.

Try it free →