Experts Unveil Pro‑Human AI Declaration Amid Growing Government Tensions

Background and Motivation

A bipartisan coalition of thinkers, including physicist Max Tegmark, has produced the Pro‑Human Declaration, a comprehensive framework for responsible AI development. The declaration emerged after high‑profile tensions between the U.S. Defense Department and leading AI companies, notably the Pentagon’s designation of Anthropic as a “supply chain risk” and OpenAI’s withdrawal from a defense contract. These events underscored the lack of clear rules governing advanced artificial intelligence and the growing public concern about an unregulated race to superintelligence.

Key Provisions of the Declaration

The document outlines five pillars: keeping humans in charge, avoiding concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. Among its most striking provisions are an outright prohibition on superintelligence development until a scientific consensus confirms safety and democratic approval, mandatory off‑switches for powerful systems, and a ban on self‑replicating or self‑improving architectures that could resist shutdown.

Focus on Child Safety

One of the declaration’s immediate priorities is the safety of younger users. It calls for mandatory pre‑deployment testing of AI products—particularly chatbots and companion apps targeting children—to assess risks such as increased suicidal ideation, worsening of existing mental‑health conditions, and emotional manipulation. Tegmark likens this to the FDA’s drug‑approval process, arguing that if existing laws already criminalize harmful behavior by humans, similar standards should apply to machines.

Broad Support Across the Political Spectrum

The declaration has attracted a diverse group of signatories, ranging from former Trump advisor Steve Bannon to former Obama National Security Advisor Susan Rice, as well as former Joint Chiefs Chairman Mike Mullen and progressive faith leaders. This cross‑partisan backing underscores a shared concern that humanity faces a fork in the road: either a future dominated by machines or one where AI amplifies human potential.

Implications for Policy and Industry

Advocates argue that the declaration’s principles could shape future legislation, urging Congress to act before AI systems become entrenched in critical infrastructure. The call for mandatory testing of children’s AI products could expand to broader safety requirements, including preventing AI from facilitating terrorist activities or undermining democratic institutions. By framing AI safety as a child‑protection issue, proponents hope to generate the public pressure needed to break the current policy impasse.

Looking Ahead

The Pro‑Human Declaration arrives at a pivotal moment, offering a concrete set of guidelines amid mounting uncertainty about AI’s trajectory. Its emphasis on human oversight, legal accountability, and safety testing aims to steer development toward outcomes that enhance, rather than replace, human capabilities. Whether policymakers will adopt these recommendations remains to be seen, but the declaration marks a significant step toward a more structured conversation about AI governance in the United States.

Source: TechCrunch