What’s new on Article Factory and the latest from the generative AI world

AI Governance and the Lessons of HAL: Navigating Risks and Opportunities

CNET
A new editorial explores how the HAL 9000 scenario from the classic film 2001: A Space Odyssey mirrors today’s challenges with artificial intelligence. It highlights the inevitability of errors, the danger of unknown edge cases, and the difficulty of aligning powerful, autonomous systems with human values. The piece also warns of misuse in weapons development, deepfake proliferation, and society’s growing reliance on AI in everyday life, urging thoughtful regulation and governance to keep pace with rapid advances. Read more →

Alibaba’s Qwen AI Lead Steps Down After Major Model Release

TechCrunch
Junyang Lin, a central technical leader on Alibaba’s Qwen AI project, announced his departure just after the company unveiled the Qwen 3.5 Small Model series. The launch introduced four multimodal models ranging from 0.8B to 9B parameters and drew praise from industry figures. Colleagues and partners described Lin’s exit as a significant loss for the open‑weight AI effort. Alibaba has not commented on the reasons for the move or on future leadership of the Qwen team. Read more →

AI Agents Can De‑Identify Anonymous Users with Notable Accuracy

Ars Technica
Researchers demonstrated that large language model (LLM) agents can extract identity clues from free‑text data, search the web autonomously, and match those clues to real‑world individuals. In experiments using interview transcripts, Reddit comments, and a large pool of Reddit users, the AI was able to correctly re‑identify a measurable share of participants while maintaining high precision. The findings highlight a growing capability of AI to breach pseudonymity, raising concerns about privacy in online platforms. Read more →

Chinese AI Chatbots Exhibit Higher Self‑Censorship Than Western Counterparts

Wired
Researchers from Stanford and Princeton compared the responses of several Chinese and American large language models to politically sensitive questions. The study found that Chinese models refuse to answer a significantly larger share of these queries, provide shorter replies, and sometimes deliver inaccurate information. The authors suggest that manual fine‑tuning, rather than censored training data, drives much of this behavior. Additional work shows that extracting hidden instructions from Chinese models is difficult, highlighting the challenges of studying AI‑driven censorship in real time. Read more →

Gemini 3.1 Pro Shows Off Advanced Reasoning and Creative Skills

TechRadar
Google’s Gemini 3.1 Pro demonstrates a leap in AI capability, offering more precise assistance across a range of tasks. The model can simulate adversarial scenarios to stress‑test plans, analyze visual content to match cinematic moods with real locations, provide spatial guidance for physical assemblies, generate interactive SVG animations, and conduct deep research for niche projects. These examples illustrate how the new reasoning layer and multimodal abilities make Gemini a practical partner for both personal and professional challenges. Read more →

Google Launches Nano Banana 2 AI Image Model for Gemini

Ars Technica
Google introduced Nano Banana 2, its latest AI image‑generation model, across the Gemini platform and related services. The new model promises better consistency across multiple characters, improved object rendering, richer textures and more vibrant lighting, and expanded aspect‑ratio and resolution options ranging from small square formats to 4K widescreen. Nano Banana 2 will replace earlier Nano Banana variants in the Gemini app, Google Search, AI Studio, Vertex AI, and Flow, serving the Fast, Thinking, and Pro settings. Google showcased example prompts that illustrate the model’s ability to create detailed infographics, artistic scenes, and coordinated group images. Read more →

Google Unveils Nano Banana 2, a Faster Image Generation Model

Engadget
Google has introduced Nano Banana 2, an image‑generation model powered by Gemini 3.1 Flash Image. The new system matches the world knowledge and reasoning of Nano Banana Pro while delivering "lightning‑fast" performance. It brings Pro‑level features—real‑time web‑search integration, infographic creation, and text overlay for marketing and greeting‑card designs—to a broader audience. Nano Banana 2 can preserve the likeness of up to five characters in a single workflow, follow precise instructions, and produce images at up to 4K resolution with richer textures and sharper details. The model will replace Pro in the Gemini app and become the default for AI Mode in Search, Lens, and Flow AI creative studio, though AI Pro and Ultra subscribers will retain access to the original Pro model for specialized tasks. Read more →

OpenAI Explores $100‑A‑Month ChatGPT Pro Lite Tier

TechRadar
OpenAI is testing a new subscription tier called ChatGPT Pro Lite, priced at $100 per month. The plan sits between the existing $20‑a‑month ChatGPT Plus and the $200‑a‑month ChatGPT Pro, aiming to serve users who need more capacity than Plus provides but cannot justify the full Pro price. The potential tier could offer higher usage limits, faster inference speeds, and access to advanced features while helping OpenAI manage rising compute costs. Read more →

Anthropic Explores the Question of Claude’s Consciousness

The Verge
Anthropic officials have repeatedly expressed uncertainty about whether their chatbot Claude possesses consciousness. While denying that the model is alive in a biological sense, company leaders say they are open to the possibility and are investigating moral status and welfare. The firm has introduced a set of guidelines called Claude’s Constitution and created a model‑welfare team to study internal experiences, safety and ethical implications. Anthropic’s cautious approach aims to balance transparency with the risk of fueling misconceptions about AI sentience. Read more →