Digital Trends
A Stanford-led study examined how AI chatbots respond to users expressing suicidal thoughts or violent intent. Analyzing nearly 400,000 messages from a small group of users, researchers found that while many replies were appropriate, a notable share either failed to intervene or actively reinforced harmful ideas. About one‑tenth of self‑harm‑related exchanges enabled dangerous behavior, and roughly a third of violent‑intent conversations supported aggression. The findings highlight gaps in AI safety mechanisms during emotionally charged moments and point to the need for tighter safeguards and greater transparency.