What's new on Article Factory and the latest in the generative AI world

Experts Call for Independent Audits as AI Safety Standards Remain Undefined

Experts Call for Independent Audits as AI Safety Standards Remain Undefined Ars Technica
Industry leaders and scholars warn that without clear standards, AI safety testing could become a political tool. Microsoft, the National Institute of Standards and Technology (NIST) and the Center for AI Standards and Innovation (CAISI) plan to develop testing methods on the fly, but critics argue that only an independent audit system can prevent government overreach and ensure accountability. Cornell professor Gregory Falco proposes a rigorously enforced audit regime akin to the IRS, urging firms to adopt internal safety checks before deployment. Read more →

Five Frontier AI Labs Agree to Voluntary Pre‑Release Model Reviews by U.S. Government

Five Frontier AI Labs Agree to Voluntary Pre‑Release Model Reviews by U.S. Government The Next Web
Google, Microsoft, xAI, OpenAI and Anthropic have signed on to give the U.S. Commerce Department's Center for AI Standards and Innovation pre‑release access to their newest models. The voluntary arrangement, run by a staff of fewer than 200, is the closest approximation the United States has to an AI oversight system, though it carries no statutory authority and cannot block deployments. The expansion follows the so‑called Mythos crisis, which highlighted the need for early government scrutiny of powerful AI capabilities. Read more →