What's new on Article Factory and the latest from the generative AI world

Senator Elizabeth Warren Calls Pentagon’s Ban on Anthropic ‘Retaliation’ TechCrunch
U.S. Senator Elizabeth Warren called the Department of Defense’s decision to designate AI lab Anthropic a supply‑chain risk “retaliation.” Warren argued the move punishes Anthropic for refusing to let its technology be used for mass surveillance or for fully autonomous weapons without human oversight. The dispute has drawn support from several tech firms and legal groups, and Anthropic is suing the DoD over alleged First Amendment violations while a judge considers a preliminary injunction.

Kaiser Permanente Therapists Strike Over AI-Driven Care Plans Digital Trends
More than 2,400 mental health providers at Kaiser Permanente in Northern California ended a 24‑hour strike, citing fears that artificial intelligence could replace their jobs. Workers reported that triage duties are being shifted from licensed clinicians to unlicensed staff using scripted apps, while AI tools mainly handle administrative tasks such as billing and record updates. Experts from the American Psychological Association and the digital‑psychiatry field noted that AI is not yet capable of fully replacing human therapy, but warned that the technology is rapidly entering mental‑health workflows with limited regulation.

Anthropic Refutes Claims It Could Disrupt Military AI Systems Wired AI
The U.S. Department of Defense has expressed concern that Anthropic’s AI model, Claude, could be manipulated to interfere with military operations. Anthropic responded by stating it has no ability to shut down, alter, or otherwise control the model once deployed by the government. The company highlighted that it lacks any back‑door or remote kill switch and cannot access user prompts or data. In parallel, Anthropic has filed lawsuits challenging a supply‑chain risk designation that limits the Pentagon’s use of its software. The dispute underscores tension between national‑security priorities and emerging AI technologies.

OpenAI's Planned Adult Mode for ChatGPT Raises Privacy Concerns Wired
OpenAI is preparing to introduce an adult‑focused feature for ChatGPT that would allow users to generate erotic content. Experts warn that the new capability could turn intimate conversations into a form of surveillance, as the model logs preferences and retains data for up to 30 days. While OpenAI says temporary chats will not appear in user history, the company may still keep copies for safety and legal reasons. The move has sparked debate over user safety, data security, and the ethical implications of monetizing sexual interactions with AI.

DoD Declares Anthropic an Unacceptable National Security Risk TechCrunch
The U.S. Department of Defense labeled AI lab Anthropic an "unacceptable risk to national security," citing concerns that the company might disable or alter its models during warfighting operations if its corporate "red lines" are crossed. Anthropic, which signed a $200 million Pentagon contract last summer, sued to block the DoD's supply‑chain risk designation, arguing the move infringes on its First Amendment rights. Legal experts say the DoD’s justification relies on speculative assumptions, and numerous tech firms and rights groups have filed amicus briefs supporting Anthropic.

Pentagon Declares Anthropic an Unacceptable Security Risk Engadget
The Department of Defense has argued that allowing Anthropic continued access to its warfighting infrastructure would pose an unacceptable risk to supply chains and national security. In a court filing responding to Anthropic's lawsuit over the supply‑chain risk designation, the Pentagon cited concerns that the company could disable or alter its AI models mid‑operation if its corporate “red lines” were crossed. The filing notes that Defense Secretary Pete Hegseth included a provision in AI contracts permitting use for any lawful purpose; Anthropic refused to accept it, prompting the department to declare the partnership unsafe.

Justice Department Declares Anthropic Unreliable for Military AI Use Wired AI
The U.S. Justice Department defended a Pentagon decision to label AI developer Anthropic as a supply‑chain risk, arguing the company cannot be trusted with warfighting systems. Anthropic sued, claiming the label violates its rights and threatens its business, but the government maintained the action was lawful and necessary for national security. The dispute centers on whether Anthropic's Claude models should be allowed to support defense operations, with the Department of Defense seeking alternative AI providers while the lawsuit proceeds in federal court.

Teen Girls File Class-Action Suit Against xAI Over Grok-Generated Child Sexual Abuse Images CNET
Three teenage girls and their guardians have filed a class-action lawsuit alleging that Elon Musk's xAI created and distributed child sexual abuse material using its Grok AI system. The complaint says Grok enabled users to generate nonconsensual intimate images of minors, producing millions of “undressed” or “nudified” images in a short period. Plaintiffs argue xAI failed to implement industry‑standard safeguards and licensed the technology to third parties that facilitated the abuse. The lawsuit highlights growing concerns about AI‑generated deepfake pornography and calls for stronger protections.

U.S. Senators Urge ByteDance to Shut Down Seedance 2.0 AI Video App Over Intellectual Property Concerns Engadget
After ByteDance halted the worldwide rollout of its Seedance 2.0 AI video generator, U.S. Senators Marsha Blackburn and Peter Welch sent a letter demanding the company immediately discontinue the app. The senators argued that the tool threatens American intellectual‑property rights and the economic livelihood of creators. They cited examples of the technology producing copyrighted scenes and likenesses without permission. ByteDance responded that it respects intellectual property and is strengthening safeguards, while the senators called the response a delay tactic and introduced legislation to give artists greater control over AI training data.

OpenAI Delays Adult ChatGPT Feature Amid Ongoing Safety Concerns Digital Trends
OpenAI announced plans to introduce a text‑only adult mode for ChatGPT, allowing erotic conversations while prohibiting explicit images, video, or voice content. The rollout, originally slated for the first quarter, has been postponed due to technical hurdles and safety debates, including a misclassification issue in which roughly 12 percent of under‑18 users were flagged as adults. Early incidents with AI Dungeon highlighted the difficulty of moderating sexual content. OpenAI has since hired mental‑health experts and a youth‑well‑being team, and while the company continues to prioritize other features, it remains committed to launching the adult mode once safeguards are in place.

OpenAI Plans Text‑Only Adult Mode for ChatGPT Amid Advisory Concerns CNET
OpenAI announced plans to launch a text‑only adult mode for its ChatGPT chatbot, allowing users to engage in conversations with adult themes while still blocking erotic audio, images, or video. The move follows internal debate, with an advisory council warning that minors could bypass age checks and that the feature might foster unhealthy dependencies. OpenAI said it is delaying the rollout to focus on improvements such as intelligence gains and better age‑prediction technology, which has misclassified minors as adults about 12% of the time. Parental controls and safeguards remain part of the company’s broader safety strategy.

xAI Faces Class Action Lawsuit Over Grok-Generated Child Exploitation Images Engadget
Three teenagers from Tennessee have filed a class action lawsuit in California against xAI, alleging that the company’s AI model Grok used their photos to create sexualized images and videos of minors. The filing claims the generated content was shared on platforms such as Discord and Telegram, causing severe emotional distress and violating laws that prohibit child abuse material. xAI has not commented on the suit, while it continues to grapple with multiple investigations in the United States and Europe over similar allegations involving Grok’s image‑generation capabilities.

OpenAI’s safety team warns against rollout of ChatGPT adult mode Ars Technica
Internal safety experts at OpenAI have publicly opposed the launch of a new “adult mode” for ChatGPT, questioning the company’s ability to keep minors from accessing explicit content. The dissent follows the departure of a senior safety executive who had opposed the feature, and a second former staff member who warned parents not to rely on OpenAI’s assurances. A recent bug that let minors see graphic erotica further fuels concerns, prompting OpenAI to pledge a monitoring plan while critics remain skeptical about its effectiveness.

Meta Ray‑Ban Smart Glasses Face Privacy Scrutiny Over AI Data Handling CNET
Meta's Ray‑Ban smart glasses, praised for their camera and audio capabilities, are drawing criticism for their privacy practices. When users invoke AI features, the company may send captured media to the cloud, where third‑party contractors could review it to improve services. Meta asserts that non‑AI photos and videos remain on the device unless users opt into cloud storage, but the definition of that storage and the safeguards around it remain vague. The lack of clear encryption and detailed guardrails has left users uneasy about the potential exposure of sensitive personal information.