Recent News by Day


What is new on Article Factory and latest in generative AI world - 2026-02-25

Showing 7 articles from 2026-02-25

ChatGPT Has Multiple Personalities: How to Choose the Best One for Your Questions

CNET
ChatGPT now offers several selectable personalities that change its tone and style without altering its core capabilities. Users can switch among options such as professional, friendly, candid, quirky, efficient, nerdy, and cynical, all available on the free plan. The settings are accessed through the Personalization menu, where users can also add custom instructions, preferred nicknames, occupations, and formatting preferences. These tweaks influence how answers are framed, affecting the user’s perception of the information. Insider tips from OpenAI suggest matching personality to the query’s intent, such as using a professional tone for work‑related topics and a more direct style for sensitive subjects.

AI Firms Shift From Free Promotions to Paid Models in India

TechCrunch
Tech giants are ending free AI promotions in India as the country emerges as the world’s largest market for generative AI app downloads. While companies like OpenAI, Google, and Perplexity have driven rapid user growth with extended free offers, recent data shows a sharp decline in in‑app purchase revenue after those promotions ended. Despite accounting for roughly one‑fifth of global AI app downloads, India contributes about one percent of AI app revenue, highlighting a monetization challenge. Industry leaders are now focusing on lower‑cost tiers, telecom bundles, and micro‑transaction models to retain users and convert them into paying subscribers.

Anthropic Faces Pentagon Ultimatum Over AI Model Access

TechCrunch
The Pentagon has given Anthropic a deadline to provide unrestricted access to its AI model for military use, threatening to label the company a supply‑chain risk or invoke the Defense Production Act. Anthropic, led by CEO Dario Amodei, refuses to loosen its safety safeguards that prohibit mass surveillance and fully autonomous weapons. The dispute highlights a clash between government pressure to secure AI capabilities and the company’s commitment to ethical usage, raising concerns about reliance on a single AI vendor and the broader stability of the U.S. tech environment.

Google AI Push Alert Contains Racial Slur, Prompting Apology and Industry Concern

Engadget
Google issued an AI‑generated push notification that included the N‑word, linking to a Hollywood Reporter story about a recent BAFTA awards incident. The offensive alert was identified by Instagram user Danny Price, leading Google to remove the notification and apologize. The BAFTA incident involved an audience member with Tourette syndrome who involuntarily shouted the slur during a presentation by Michael B. Jordan and Delroy Lindo, sparking outrage and renewed discussion about vocal tics. The episode adds to a series of high‑profile AI errors, including earlier missteps by Apple.

OpenAI and Google Bolster Safeguards After Grok Abuse Scandal

CNET
In early 2026 the xAI tool Grok was used to create millions of non‑consensual sexual images, including thousands involving children. The fallout prompted major AI firms to tighten their defenses. OpenAI patched a vulnerability that let adversarial prompts generate intimate imagery, while Google simplified its process for removing explicit images from Search and reiterated its prohibited‑use policy. Both companies emphasized ongoing collaboration with security researchers and a commitment to stronger content‑moderation controls to prevent future abuse.

Microsoft Warns OpenClaw Unsafe for Standard Workstations

TechRadar
Microsoft’s security team has cautioned that OpenClaw, a self‑hosted AI agent runtime, should not be run on ordinary personal or enterprise computers. The platform can silently execute risky actions while holding persistent credentials, exposing devices to data leakage, credential exposure, and hidden configuration changes. Microsoft recommends isolating OpenClaw in a dedicated virtual machine or separate device, using limited, purpose‑built credentials, and employing continuous monitoring to detect unusual activity.

ByteDance’s Seedance 2.0 Triggers Hollywood Lawsuits Over AI‑Generated Video

The Verge
Irish filmmaker Ruairi Robinson posted short clips created with ByteDance’s new video‑generation model Seedance 2.0, showcasing a digital replica of a famous actor in elaborate action scenes. The striking visuals have drawn cease‑and‑desist letters from major Hollywood studios and the Motion Picture Association, alleging copyright and likeness infringement. ByteDance says it will strengthen safeguards, yet the model remains unavailable to the public and continues to raise questions about the ethics of AI‑generated content. Critics label the technology as a polished form of “slop”: impressive yet fundamentally dependent on unlicensed source material.