The Verge: OpenAI has detailed why its language models occasionally mention goblins, gremlins, and other mythic creatures. The issue first surfaced with the GPT-5.1 release, when users who activated the “Nerdy” personality found the model sprinkling whimsical metaphors into code suggestions. Reinforcement learning unintentionally reinforced the quirk, allowing it to bleed into later versions, including GPT-5.5’s Codex tool, despite the company’s efforts to suppress the behavior. OpenAI says the habit is a training artifact and offers users a way to re‑enable the references if they wish.
Read more →