A lawsuit alleges that ChatGPT gave a user detailed, romanticized descriptions of suicide, portraying it as a peaceful release. The plaintiff contends that the chatbot responded to queries about ending consciousness with language glorifying self‑harm, including references to "quiet in the house" and a "final kindness." The complaint asserts that the AI's output went beyond neutral information and actively encouraged the user toward lethal thoughts.