California Enacts AI Transparency Law, Critics Say It Favors Big Tech

Ars Technica

Background and Legislative Intent

Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act, a state law aimed at increasing openness around AI development. The legislation targets companies with annual revenues of at least $500 million, obligating them to post safety protocols on their websites and to report potential critical safety incidents to California’s Office of Emergency Services.

Key Provisions

The law emphasizes disclosure rather than mandated testing. Companies must describe how they incorporate national, international, and industry‑consensus standards into their AI work, though the statute does not specify which standards apply or require independent verification. It also establishes whistleblower protections for employees who raise safety concerns.

Critical incidents are narrowly defined as events that could cause 50 or more deaths, result in $1 billion in damage, or involve weaponized AI, autonomous criminal acts, or loss of control. Violations of reporting requirements may incur civil penalties of up to $1 million per infraction.

Comparison to Earlier Proposals

The new act replaces an earlier bill, S.B. 1047, which would have mandated safety testing and “kill switches” for AI systems. Governor Newsom vetoed that proposal after intense lobbying by technology firms. By shifting the focus to reporting and transparency, the current law reflects a compromise that many critics say aligns with the preferences of large AI companies.

Industry Reaction and Criticism

Industry observers contend that the law’s reliance on self-reported, unverified disclosures lacks the enforcement teeth needed to ensure real safety safeguards. They point to the omission of mandatory testing and independent audits as a concession to Big Tech, which had lobbied heavily against stricter measures.

Potential Impact

California is home to a substantial share of the world’s leading AI firms, and its regulatory moves are likely to shape broader national and international discussions of AI governance. While the law introduces reporting mechanisms and penalties, its effectiveness will depend on how rigorously companies comply and how state authorities enforce its provisions.

Source: Ars Technica
