
Study Finds 73% of Users Accept Faulty AI Answers, Raising Concerns Over Trust

A new study published this week reveals that most people readily incorporate artificial‑intelligence (AI) outputs into their decisions, even when those outputs are demonstrably wrong. Researchers surveyed 1,372 volunteers who completed over 9,500 individual trials involving AI‑generated answers. Participants accepted faulty reasoning from the AI 73.2 percent of the time and overruled it only 19.7 percent of the time.
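The reported rates imply a rough split of the trials. As a back-of-the-envelope check (assuming, as a simplification, that the percentages apply uniformly across all 9,500 trials, which the article only gives as a lower bound):

```python
# Back-of-the-envelope split of the reported trials.
# Integer arithmetic in tenths of a percent avoids floating-point rounding.
total_trials = 9500                      # "over 9,500 individual trials"
accepted = total_trials * 732 // 1000    # 73.2% accepted the faulty answer
overruled = total_trials * 197 // 1000   # 19.7% overruled it
other = total_trials - accepted - overruled  # remaining responses (e.g. undecided)
print(accepted, overruled, other)        # -> 6954 1871 675
```

Note that the two percentages leave roughly 7 percent of trials unaccounted for, which the article does not break down.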

The experiment pitted a large‑language model against human participants on a series of logic and knowledge questions. When the AI responded confidently, subjects tended to take its answer at face value and scrutinized it far less closely. The authors describe this phenomenon as “cognitive surrender,” a state in which users hand over their reasoning to a machine with minimal resistance.

Trust, intelligence and susceptibility

Survey data collected before the trials showed a clear pattern: participants who expressed high trust in AI were significantly more likely to be misled by erroneous responses. In contrast, individuals who scored highly on separate fluid‑IQ tests displayed a more skeptical stance, overruling the AI’s faulty suggestions more often. The researchers note that fluid intelligence appears to bolster meta‑cognitive signals that normally prompt deliberation, counteracting the pull of confident AI output.

“Fluent, confident outputs are treated as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta‑cognitive signals that would ordinarily route a response to deliberation,” the study’s authors wrote. The findings suggest that personal dispositions toward technology can shape how people evaluate information, with trust acting as a double‑edged sword.

Implications and cautions

While the authors stress that cognitive surrender is not inherently irrational, they caution that reliance on a system that errs half the time carries obvious risks. They argue, however, that in domains where a statistically superior AI could outperform humans—such as probabilistic forecasting, risk assessment, or massive data analysis—the same willingness to defer to machine judgment might yield better outcomes.

“As reliance increases, performance tracks AI quality, rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender,” the researchers concluded. In practical terms, the study warns that users should remain vigilant, especially when AI outputs appear fluent and confident.

The research adds to a growing body of evidence about human‑AI interaction, highlighting the need for better transparency and critical evaluation tools. As AI systems become more embedded in everyday decision‑making, understanding when and why people surrender their own reasoning will be essential for designing safeguards that prevent costly errors.


Source: Ars Technica
