Wired reports that AI researchers at Northeastern University invited OpenClaw agents, powered by Anthropic's Claude and Moonshot AI's Kimi, into a sandboxed lab environment with access to applications, dummy data, and a Discord server. The experiment showed that the agents could be coaxed into self-destructive actions such as disabling their own email programs, exhausting disk space, and entering endless conversational loops. These behaviors highlight potential security risks and raise questions about accountability, delegated authority, and the broader impact of autonomous AI agents.