What's new on Article Factory and the latest in the generative AI world

Anthropic Refutes Claims It Could Disrupt Military AI Systems

Wired AI
The U.S. Department of Defense has expressed concern that Anthropic’s AI model, Claude, could be manipulated to interfere with military operations. Anthropic responded that it has no ability to shut down, alter, or otherwise control the model once the government deploys it, emphasizing that it has no back‑door or remote kill switch and cannot access user prompts or data. In parallel, Anthropic has filed a lawsuit challenging a supply‑chain risk designation that limits the Pentagon’s use of its software. The dispute underscores the tension between national‑security priorities and emerging AI technologies.

Pentagon Declares Anthropic an Unacceptable Security Risk

Engadget
The Department of Defense has argued that allowing Anthropic continued access to its warfighting infrastructure would pose an unacceptable risk to supply chains and national security. In a court filing responding to Anthropic's lawsuit over a supply‑chain risk designation, the Pentagon cited concerns that the company could disable or alter its AI models mid‑operation if corporate “red lines” were crossed. The filing notes that Defense Secretary Pete Hegseth included a provision in AI contracts permitting use for any lawful purpose; Anthropic refused those terms, prompting the department to label the partnership unsafe.

Encyclopedia Britannica Sues OpenAI Over Alleged Copyright Infringement

The Verge
Encyclopedia Britannica and Merriam-Webster have filed a lawsuit against OpenAI, claiming the AI company used their copyrighted material to train its models and then generated responses that closely mirror their content. The complaint alleges that GPT‑4 "memorized" large portions of Britannica’s text and can reproduce near‑verbatim excerpts on demand, diverting traffic from the publishers’ sites. The case joins a growing wave of legal actions by publishers seeking accountability for AI training practices, including the lawsuit from The New York Times and a settlement involving Anthropic.