Ars Technica: Industry leaders and scholars warn that without clear standards, AI safety testing could become a political tool. Microsoft, the National Institute of Standards and Technology (NIST), and NIST's Center for AI Standards and Innovation (CAISI) plan to develop testing methods on the fly, but critics argue that only an independent audit system can prevent government overreach and ensure accountability. Cornell professor Gregory Falco proposes an audit regime enforced as rigorously as IRS tax audits, and urges firms to adopt internal safety checks before deployment.