What's new on Article Factory and the latest from the generative AI world

AI Agents Can Re‑Identify Anonymous Users with Notable Accuracy

Ars Technica
Researchers demonstrated that large language model (LLM) agents can extract identity clues from free‑text data, search the web autonomously, and match those clues to real‑world individuals. In experiments using interview transcripts, Reddit comments, and a large pool of Reddit users, the AI was able to correctly re‑identify a measurable share of participants while maintaining high precision. The findings highlight a growing capability of AI to breach pseudonymity, raising concerns about privacy in online platforms. Read more →
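The "measurable share ... while maintaining high precision" framing comes down to two standard metrics: precision over the guesses the agent actually makes, and recall over the full pool of users. A minimal sketch, with entirely hypothetical user and identity names (the paper's data and matching method are not reproduced here):

```python
# Hypothetical sketch of the two metrics behind a re-identification claim.
# "predictions" maps pseudonymous users to the identity the agent guessed
# (None = agent declined to guess); "ground_truth" holds actual identities.

def reid_metrics(predictions, ground_truth):
    guesses = {u: g for u, g in predictions.items() if g is not None}
    correct = sum(1 for u, g in guesses.items() if ground_truth.get(u) == g)
    # Precision: of the guesses made, how many were right.
    precision = correct / len(guesses) if guesses else 0.0
    # Recall: what share of all users was re-identified.
    recall = correct / len(ground_truth) if ground_truth else 0.0
    return precision, recall

truth = {"user_a": "Alice", "user_b": "Bob", "user_c": "Carol"}
preds = {"user_a": "Alice", "user_b": None, "user_c": "Dave"}
p, r = reid_metrics(preds, truth)  # precision 0.5, recall ~0.33
```

An agent that guesses rarely but accurately can report high precision even when recall, the share of the pool it unmasks, is modest.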

Perplexity Launches “Computer” AI Agent Platform with Cloud‑Based, Curated Integrations

Ars Technica
Perplexity introduced Computer, an AI agent that can assign tasks to other AI agents. Operating primarily in the cloud, the service runs within a controlled environment that limits integrations to vetted plugins. Users supply context through files such as USER.MD, MEMORY.MD, SOUL.MD, and HEARTBEAT.MD, and the agent can create, modify, or delete files on the user's system. While the design aims to temper the wilder capabilities seen in tools like OpenClaw, Perplexity acknowledges that large‑language‑model errors and security concerns remain, especially when the agent works with data that has not been backed up. Read more →

Chinese AI Chatbots Exhibit Higher Self‑Censorship Than Western Counterparts

Wired AI
Researchers from Stanford and Princeton compared the responses of several Chinese and American large language models to politically sensitive questions. The study found that Chinese models refuse to answer a significantly larger share of these queries, provide shorter replies, and sometimes deliver inaccurate information. The authors suggest that manual fine‑tuning, rather than censored training data, drives much of this behavior. Additional work shows that extracting hidden instructions from Chinese models is difficult, highlighting the challenges of studying AI‑driven censorship in real time. Read more →
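The study's two headline measurements, refusal rate and reply length, are simple to state precisely. A minimal sketch, using a naive keyword check as the refusal detector purely for illustration (the researchers' actual classification method is not described in this summary):

```python
# Illustrative sketch: refusal rate and mean reply length over a batch of
# model responses. The keyword-based refusal check below is a stand-in,
# not the study's method.

REFUSAL_MARKERS = ("i cannot", "i can't", "unable to answer")

def is_refusal(reply: str) -> bool:
    r = reply.lower()
    return any(marker in r for marker in REFUSAL_MARKERS)

def summarize(replies):
    refusal_rate = sum(is_refusal(r) for r in replies) / len(replies)
    mean_words = sum(len(r.split()) for r in replies) / len(replies)
    return refusal_rate, mean_words

replies = [
    "I cannot discuss that topic.",
    "The event took place in 1989 and remains heavily censored.",
]
rate, words = summarize(replies)  # rate = 0.5
```

Comparing these two numbers across model families, on the same question set, is what lets the authors say one group refuses more often and answers more tersely.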