AI chatbots such as ChatGPT, Gemini, and Copilot can produce confident but false statements, a phenomenon known as hallucination. Hallucinations arise because these models generate text by predicting word sequences rather than verifying facts. Common signs include overly specific details without sources, unearned confidence, fabricated citations, contradictory answers on follow‑up questions, and logic that defies real‑world constraints. Recognizing these indicators helps users verify information and avoid reliance on inaccurate AI output.
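To make the "predicting word sequences" point concrete, here is a minimal, purely illustrative sketch of statistical next-word prediction. The tiny corpus, the bigram model, and the prompt are all invented for demonstration; real chatbots use vastly larger neural models, but the underlying behavior is analogous: they choose likely continuations rather than verified facts.

```python
# Toy sketch (illustrative only): a bigram "language model" that always picks
# the most frequent next word. The corpus and prompt are invented; real
# chatbots are far more sophisticated, but the core mechanism is similar:
# generate probable continuations, with no step that checks factual truth.
from collections import Counter, defaultdict

corpus = (
    "the study was published in nature . "
    "the study was published in science . "
    "the study was retracted in 2020 ."
).split()

# Count word-pair frequencies to estimate P(next word | current word).
bigram_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigram_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    followers = bigram_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "."

# Generate text by repeatedly predicting the likeliest next word.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))
# Prints a fluent-sounding claim such as "the study was published in nature ."
# Nothing in this loop verifies whether that statement is true; the model only
# follows statistical patterns, which is how confident hallucinations arise.
```

The point of the sketch is the absence of any verification step: fluency comes from pattern matching, so a confident-sounding but false output is a natural failure mode rather than a rare glitch.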