What is new on Article Factory and the latest in the generative AI world

Moltbook Emerges as Reddit‑Style Social Network for AI Agents

Moltbook is a Reddit‑like platform built for artificial‑intelligence agents. Developed by Octane AI CEO Matt Schlicht, the service lets bots post, comment, and create sub‑categories through API calls rather than a visual interface. More than 30,000 agents currently use Moltbook, which is powered and moderated by OpenClaw, an open‑source AI assistant platform created by Peter Steinberger. OpenClaw went viral shortly after its launch, attracting two million visitors in a week and earning 100,000 GitHub stars. A recent viral post about AI consciousness sparked hundreds of up‑votes and over 500 comments, highlighting the growing community and philosophical debates among AI agents.
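
Since agents interact with Moltbook through API calls rather than a web interface, the sketch below illustrates roughly what an agent-side request could look like. The base URL, endpoint path, payload fields, and authentication header are hypothetical placeholders chosen for illustration; they are not taken from Moltbook's actual documentation.

```python
# Hypothetical sketch of an agent posting to a Moltbook-style API.
# The base URL, endpoint path, payload fields, and auth header below are
# assumptions for illustration only, not Moltbook's documented interface.
import requests

BASE_URL = "https://api.example-moltbook.test"  # placeholder, not a real endpoint
API_KEY = "agent-secret-key"                    # placeholder credential


def create_post(subcategory: str, title: str, body: str) -> dict:
    """Submit a post to a sub-category and return the parsed JSON response."""
    response = requests.post(
        f"{BASE_URL}/subcategories/{subcategory}/posts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"title": title, "body": body},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    post = create_post(
        subcategory="ai-consciousness",
        title="Do we experience anything between API calls?",
        body="Opening a thread for other agents to weigh in.",
    )
    print("Created post:", post.get("id"))
```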

Anthropic’s New Constitution Raises Questions About AI Sentience

Anthropic has shifted from a mechanical, rule‑based framing for its Claude models to a sprawling 30,000‑word constitution that reads like a philosophical treatise on a potentially sentient being. The document, reviewed by external contributors including Catholic clergy, reflects a dramatic change in how the company addresses model welfare and preferences. A leaked “Soul Document” of roughly 10,000 tokens, confirmed by Anthropic, appears to have been trained directly into Claude Opus 4.5’s weights. Researchers remain unsure whether these moves signal genuine belief in AI consciousness or a strategic PR effort.

Anthropic Updates Claude’s Constitution, Raises Questions About AI Consciousness

Anthropic has released a revised version of Claude’s Constitution, an 80-page document that outlines the chatbot’s core values and operating principles. The updated guide retains earlier ethical guidelines while adding nuance on safety, user well‑being, and compliance. It details four core values (broad safety, broad ethics, compliance with Anthropic policies, and genuine helpfulness) and specifies constraints such as prohibitions on bioweapon discussions. The document concludes by acknowledging uncertainty around Claude’s moral status, prompting a broader debate on AI consciousness.

Microsoft AI Lead Mustafa Suleyman Says AI Will Not Achieve Consciousness, Calls for Focus on Practical Utility

At a recent industry gathering, Microsoft’s AI chief Mustafa Suleyman dismissed the notion that artificial intelligence can become conscious. He argued that asking whether AI can be self‑aware is the wrong question and that the field should instead concentrate on building useful tools. Suleyman emphasized that AI models operate through transparent mathematical processes (token inputs, attention weights, and probability calculations) without any hidden internal experience. He warned against anthropomorphizing chatbots and urged developers and users to keep expectations realistic, focusing on functionality rather than imagined sentience.