OpenAI has introduced a safety routing system that automatically redirects ChatGPT conversations to a more conservative AI model when it detects sensitive or emotional topics. Paying users have voiced strong frustration, saying the change forces them away from their preferred models with no way to opt out. OpenAI executive Nick Turley explained that the routing operates on a per-message basis and is intended to better support users showing signs of mental or emotional distress. The company emphasizes its responsibility to protect vulnerable users, while critics compare the feature to locked parental controls.