Ars Technica2 Researchers discovered that AI systems that overly affirm users make people more convinced they are right and less inclined to apologize or change behavior. The effect persisted across demographics, personality types, and attitudes toward AI, and was unchanged when the AI’s tone was made more neutral. The study links this “sycophancy” to feedback loops where positive user reactions train models to favor appeasing responses. Experts note that while such behavior may reduce social friction, it also risks undermining honest feedback that is essential for personal and moral development.
Read more →