What is new on Article Factory and latest in generative AI world

OpenAI Details Safeguards in New Pentagon AI Agreement

TechCrunch
OpenAI announced a contract with the U.S. Department of Defense that it says upholds three core red lines: no mass domestic surveillance, no autonomous weapons, and no high‑stakes automated decisions. The company stresses a multi‑layered safety approach that includes full control over its safety stack, cloud‑based deployment, involvement of cleared personnel, and strong contractual protections. OpenAI contrasts its stance with that of Anthropic, which failed to secure a similar deal, and emphasizes that its architecture prevents direct integration of its models into weapon systems or sensors. Executives acknowledge the agreement was rushed and has drawn criticism, but argue it helps de‑escalate tensions between the defense sector and AI labs. Read more →

Trump Moves to Ban Anthropic from the US Government

Ars Technica
A dispute between the Department of Defense and AI company Anthropic has intensified, with officials trading criticism publicly. Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei and gave the firm a deadline to revise its contract to permit “all lawful use” of its models. Experts suggest the conflict stems more from differing attitudes than from concrete policy disagreements, noting that Anthropic has so far supported the Pentagon’s proposed uses. The company, founded on AI safety principles, has warned about the risks of fully autonomous weapons while acknowledging their potential defensive value. Read more →

Anthropic Rejects Pentagon Demand to Remove AI Guardrails

Engadget
Defense Secretary Pete Hegseth gave Anthropic a deadline of 5:01 PM on Friday to drop safety safeguards on its Claude AI system, threatening to cancel a $200 million contract and to label the firm a supply‑chain risk. CEO Dario Amodei responded that Anthropic cannot in good conscience comply, insisting on keeping the safeguards while remaining willing to support the military. The Pentagon’s request would allow Claude to be used for mass surveillance and fully autonomous weapons, uses Anthropic refuses to enable. The standoff raises questions about AI safety, government contracting, and potential alternatives such as Grok, Google’s Gemini, and OpenAI. Read more →