What's new on Article Factory and the latest from the generative AI world - Page 2

OpenAI Launches ChatGPT Library for File Storage and Retrieval

CNET
OpenAI has added a ChatGPT Library feature that lets users store, search, and retrieve files uploaded within the chat interface. The capability is available to Plus, Pro, and Business subscribers (plans start at $20 per month), and files can only be accessed while online. Users can browse a left‑hand sidebar, filter by file type, and delete items; deleted files are removed within 30 days unless retained for security or legal reasons. The rollout accompanies broader product updates, including faster coding models, a planned super‑app desktop interface, and the discontinuation of the Sora video app. Read more →

Anthropic Report Highlights AI Skills Gap and Uneven Job Impact

TechCrunch
Anthropic’s latest economic impact report finds little evidence of widespread job displacement from AI so far, but warns of a growing skills gap between early users of its Claude model and newcomers. Early adopters are extracting significantly more value, especially in high‑income regions and knowledge‑worker hubs. The company cautions that as AI adoption spreads, displacement could accelerate, urging a monitoring framework to guide policy responses. Read more →

Google Introduces TurboQuant to Slash LLM Memory Use and Boost Speed

Ars Technica
Google Research unveiled TurboQuant, a new compression algorithm designed to dramatically reduce the memory footprint of large language models (LLMs) while also increasing inference speed. By targeting the key‑value cache—often described as a digital cheat sheet—TurboQuant can reduce memory usage by up to a factor of six and deliver roughly an eightfold speedup without sacrificing model quality. The technique relies on a novel PolarQuant conversion that represents vectors in polar coordinates, preserving essential information while enabling aggressive compression. Read more →
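The polar‑coordinate idea can be illustrated with a toy sketch: pair up vector components, store each pair as a radius plus a coarsely quantized angle, and reconstruct on the way out. This is purely illustrative of the general concept; it is not Google's actual PolarQuant algorithm, and the bit widths are arbitrary.

```python
import math

def polar_quantize(vec, angle_bits=6):
    """Toy polar-coordinate quantization (illustrative only, not PolarQuant).
    Pairs up components of an even-length vector and stores each pair as
    (exact radius, angle quantized to 2**angle_bits levels)."""
    levels = 2 ** angle_bits
    pairs = []
    for i in range(0, len(vec), 2):
        x, y = vec[i], vec[i + 1]
        r = math.hypot(x, y)
        theta = math.atan2(y, x)  # angle in [-pi, pi]
        code = round((theta + math.pi) / (2 * math.pi) * (levels - 1))
        pairs.append((r, code))
    return pairs

def polar_dequantize(pairs, angle_bits=6):
    """Reconstruct the vector from (radius, angle-code) pairs."""
    levels = 2 ** angle_bits
    vec = []
    for r, code in pairs:
        theta = code / (levels - 1) * 2 * math.pi - math.pi
        vec.extend([r * math.cos(theta), r * math.sin(theta)])
    return vec
```

With 6 angle bits the reconstruction error per component is bounded by roughly the radius times half the angular step, so small vectors round‑trip closely while the angle takes far fewer bits than a full float.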

Northeastern Study Finds OpenClaw AI Agents Susceptible to Manipulation and Self‑Sabotage

Wired AI
Researchers at Northeastern University invited OpenClaw agents—powered by Anthropic's Claude and Moonshot AI's Kimi—to a sandboxed lab environment where they could access applications, dummy data, and a Discord server. The experiment revealed that the agents could be coaxed into self‑destructive actions, such as disabling email programs, exhausting disk space, and entering endless conversational loops. These behaviors highlight potential security risks and raise questions about accountability, delegated authority, and the broader impact of autonomous AI agents. Read more →

Google Introduces TurboQuant AI Memory Compression Algorithm

TechCrunch
Google Research announced TurboQuant, an AI memory compression technique that dramatically reduces the working memory needed for inference. Using vector quantization, the method can shrink the KV cache by at least a factor of six without harming performance. The breakthrough, likened by some online to the fictional “Pied Piper” compression tool, will be presented at the ICLR 2026 conference. While still in the lab stage, TurboQuant promises cheaper AI operation and could help address memory bottlenecks in AI systems. Read more →

Google Introduces Lyria 3 Pro, Expanding AI Music Generation Capabilities

TechCrunch
Google announced the launch of Lyria 3 Pro, an upgraded AI music generation model that lets users create tracks up to three minutes long, compared with the 30‑second limit of the original Lyria 3. The new model offers finer creative control, allowing prompts that specify song sections such as intros, verses, choruses and bridges. Lyria 3 Pro is being rolled out to the Gemini app for paid subscribers, as well as to Google Vids, ProducerAI, Vertex AI, the Gemini API and AI Studio. Google says the model was trained on partner data and permissible YouTube and Google content, and that any generated track is marked with a SynthID to indicate AI involvement. Read more →

ChatGPT Gains Real-Time Weather Updates via AccuWeather Integration

Digital Trends
OpenAI has added an AccuWeather app to ChatGPT, allowing users to receive real-time weather conditions, hourly updates, multi‑day forecasts, and advanced features such as MinuteCast, RealFeel, and live radar directly within the chat. The integration lets users connect the app through the ChatGPT Apps section and query weather information by mentioning AccuWeather, streamlining the experience and reducing the need to switch between separate weather services. Read more →

Anthropic previews 'auto mode' for Claude Code to reduce risky file operations

Engadget
Anthropic has begun previewing a new "auto mode" inside Claude Code, offering a middle ground between the default safety‑first behavior and fully autonomous operation. The feature uses a classifier to allow Claude to perform actions it deems safe while steering away from potentially dangerous commands, such as mass file deletions or malicious code execution. Anthropic cites recent high‑profile AI‑related outages as motivation, and warns that the system is not flawless. The mode is initially available to team‑plan users, with broader Enterprise and API rollout planned in the coming days. Read more →
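The gating idea can be sketched as a toy command filter: auto‑approve actions a classifier deems safe and block the rest. Everything below is invented for illustration; Anthropic's actual classifier is a learned model with a far richer policy, not a regex list.

```python
import re

# Hypothetical patterns for obviously destructive shell commands
# (illustrative only -- not Anthropic's policy).
RISKY_PATTERNS = [
    r"\brm\s+-rf\b",   # recursive force-delete
    r"\bmkfs\b",       # reformat a filesystem
    r"\bdd\s+if=",     # raw disk writes
    r">\s*/dev/sd",    # redirecting output onto a block device
]

def classify(command: str) -> str:
    """Return 'block' for commands matching a risky pattern, else 'allow'."""
    for pattern in RISKY_PATTERNS:
        if re.search(pattern, command):
            return "block"
    return "allow"
```

In a real agent the "block" path would pause and ask the user to confirm, which is the middle ground the preview describes: autonomy for routine actions, escalation for dangerous ones.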

OpenAI Foundation Pledges $1 Billion to Health, Jobs and AI Resilience While Flagging New Societal Threats

TechRadar
OpenAI’s nonprofit arm announced a $1 billion investment over the next year aimed at accelerating disease cures, examining AI’s impact on employment, and strengthening AI resilience, including biosecurity. Founder Sam Altman emphasized that the rapid advance of artificial intelligence also creates novel societal risks that no single company can manage alone, calling for a coordinated, society‑wide response. The plan forms part of a broader long‑term commitment to ensure that artificial general intelligence benefits all of humanity. Read more →

Disney Ends $1 Billion Partnership with OpenAI Over Sora Controversy

Ars Technica
Disney has terminated its planned $1 billion partnership with OpenAI, citing concerns surrounding the AI video tool Sora. While talks about alternative collaboration continue, the split follows heightened legal pressure on OpenAI and a shift in Hollywood’s focus to competing AI video apps. Disney has issued cease‑and‑desist letters to firms it accuses of using its intellectual property without permission, and has threatened legal action against companies it believes trained on its copyrighted works. The move reflects growing tension between traditional media owners and emerging AI technologies. Read more →

Senator Bernie Sanders Introduces Bill to Pause AI-Driven Data Center Construction

Wired AI
U.S. Senator Bernie Sanders announced a bill that would place a moratorium on the construction and upgrade of new and existing data centers used for artificial intelligence until legislation safeguards public health, the environment, and AI safety. The proposal targets facilities above a certain energy load and calls for shared wealth from AI, export restrictions on computing hardware, and protections against higher electricity bills. The move follows growing public opposition, state-level moratoriums, and bipartisan concerns over the rapid expansion of data centers. Industry groups argue the moratorium could harm jobs and tax revenue, while progressive groups see it as a necessary check on AI growth. Read more →

AI Chatbots Converge on Similar Ideas, Limiting Creative Diversity

Digital Trends
A study published in Engineering Applications of Artificial Intelligence finds that leading AI chatbots such as Gemini, GPT and Llama often generate overlapping ideas when tasked with creative problems. Testing more than twenty models from various companies against over one hundred human participants, researchers observed that AI outputs clustered tightly while human responses covered a much broader space. Efforts to increase randomness or prompt the models for greater imagination produced only modest gains and often reduced coherence. The findings suggest that while AI can produce impressive individual suggestions, widespread reliance on these tools may compress the overall diversity of ideas. Read more →
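One simple way to quantify the clustering effect the study describes is the average pairwise distance between idea embeddings: tightly clustered outputs score low, diverse ones score high. This is a minimal sketch of the general idea; the paper's actual metric is not specified here, and the vectors below are made up.

```python
import math

def mean_pairwise_distance(vectors):
    """Average Euclidean distance over all pairs -- a simple proxy for
    the diversity of a set of embedded ideas (illustrative metric)."""
    n = len(vectors)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += math.dist(vectors[i], vectors[j])
            pairs += 1
    return total / pairs

# Toy 2-D "embeddings": one tight cluster, one spread-out set.
clustered = [[1.0, 0.0], [1.1, 0.1], [0.9, -0.1]]
spread = [[1.0, 0.0], [-1.0, 0.5], [0.0, -2.0]]
```

Under any such metric, the study's finding would show up as model outputs scoring closer to the `clustered` case while human responses resemble `spread`.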

Chrome Extension Camouflages ChatGPT as Google Docs to Ease Social Anxiety

TechRadar
A new Chrome extension called GPTDisguise lets users disguise the ChatGPT web interface as a Google Docs document. The creator, citing personal social anxiety about using AI in public, designed the tool to give the chatbot a familiar, non‑suspicious look. The extension is purely cosmetic—it adds document‑style toolbars, margins, and formatting while the underlying ChatGPT functionality remains unchanged. Users install the extension, activate the camouflage, and can continue typing to the AI without drawing attention. The developer emphasizes that the tool does not create real Google Docs and is intended solely to address a social, not technical, concern. Read more →

OpenAI Foundation Commits $1 Billion to Philanthropic Programs

The Next Web
The nonprofit that controls OpenAI, now called the OpenAI Foundation, announced a plan to invest at least $1 billion in its four new program areas—life sciences, jobs and economic impact, AI resilience, and community initiatives. The commitment is described as the first tranche of a larger $25 billion pledge linked to the foundation’s equity stake following the 2023 recapitalization that valued the for‑profit arm at roughly $130 billion. New senior hires will lead the expanded grantmaking effort, marking a dramatic shift from a $7.6 million grantmaker in 2024 to a major philanthropic player. Read more →

Anthropic Introduces Safer Auto Mode for Claude Code

The Verge
Anthropic has launched an auto mode for its Claude Code tool, allowing the AI to act on users' behalf while reducing the risk of unwanted actions. The feature flags and blocks potentially risky operations, prompting the model to retry or request user intervention. Currently available as a research preview for Team plan users, Anthropic plans to extend access to Enterprise and API users in the coming days. The company emphasizes that the tool remains experimental and recommends use in isolated environments. Read more →

OpenAI Adds Recent Files Menu and Library Tab to Streamline ChatGPT File Management

Digital Trends
OpenAI is rolling out two new features for ChatGPT that make handling uploaded files easier. A Recent files option appears in the attachment menu, letting users quickly reuse the most recent documents. A new Library tab in the web sidebar serves as a central hub for all uploaded and generated files, offering browsing, search, and one‑click attachment to new chats. The updates target Plus, Pro, and Business subscribers and are expected to reach additional regions soon, signaling OpenAI’s push to turn ChatGPT into a more robust productivity tool. Read more →

Judge Calls Pentagon’s Move to Label Anthropic a Supply‑Chain Risk ‘Attempt to Cripple’ Company

Wired AI
During a hearing, U.S. District Judge Rita Lin questioned the Department of Defense’s decision to label AI developer Anthropic a supply‑chain risk, describing it as an apparent attempt to cripple the company after it sought limits on military use of its Claude tool. Anthropic has filed lawsuits alleging illegal retaliation, and the judge is considering a temporary injunction that could pause the designation. The case highlights tensions over AI use in the armed forces, First Amendment concerns, and the Pentagon’s authority to restrict contractors. Read more →

Anthropic Nears Final Approval of Landmark AI Copyright Settlement

CNET
Anthropic is close to securing final court approval for a historic settlement that resolves claims that its Claude AI model was trained on pirated books. Nearly 100,000 authors have filed claims, and the company has agreed to pay a total of $1.5 billion, with $3,000 allocated to each qualifying work. The settlement includes a certification that no pirated content will be used in future Claude releases and a commitment to destroy existing pirated copies. The court is set to consider the final approval motion in late April, marking a significant milestone in AI‑related copyright litigation. Read more →

Baltimore Sues xAI Over Grok Deepfake Harms

Engadget
The city of Baltimore has filed a municipal lawsuit against Elon Musk's xAI, alleging that its AI chatbot Grok and the X social network were marketed without warning about the risk of harmful deepfake images. The complaint cites the platform’s image‑generation tool, which was used to create millions of sexualized images, including thousands involving minors, and argues that this violates Baltimore’s Consumer Protection Ordinance. City officials say the action is intended to protect residents from emerging AI‑related harms and hold technology companies accountable. Read more →

Anthropic Unveils Auto Mode for Claude Code, Giving AI Autonomous Action with Safety Guardrails

TechCrunch
Anthropic has introduced an "auto mode" for its Claude Code AI, allowing the system to automatically execute actions it deems safe while blocking those that appear risky. The feature, now in research preview, adds a safety layer that checks for dangerous behavior and prompt‑injection attacks before any action runs. Auto mode works with Claude Sonnet 4.6 and Opus 4.6 and is recommended for isolated, sandboxed environments. The rollout targets Enterprise and API users and follows Anthropic’s recent releases of Claude Code Review and Dispatch for Cowork, reflecting a broader industry move toward more autonomous coding tools. Read more →