Hundreds of Prominent Figures Call for a Ban on AI Superintelligence Development
Widespread Call for a Moratorium on Superintelligent AI
More than 700 prominent public figures have signed a statement urging a prohibition on the development of artificial superintelligence until robust safety measures and public consensus are in place. The signatories span a range of backgrounds, from leading AI researchers—often referred to as the "godfathers of AI"—to former policymakers and well‑known entertainers. Their collective message emphasizes that the creation of AI systems capable of outperforming humans across nearly all cognitive tasks, especially without adequate oversight, poses serious risks.
The petition highlights several core concerns: potential loss of individual freedoms, heightened national security threats, and the existential danger of human extinction. These worries echo earlier warnings from technology leaders such as Elon Musk, who has previously likened the rush toward advanced AI to "summoning a demon." The signatories argue that without clear, enforceable safeguards, the rapid pace of AI development could outstrip humanity’s ability to control it.
Public Opinion Mirrors the Call for Regulation
A recent national poll conducted by the Future of Life Institute reveals that public sentiment aligns closely with the petition’s stance. Only a small fraction—5%—of respondents support the current fast‑track, unregulated approach to AI advancement. In contrast, a substantial majority—64%—believe that superintelligent AI should not be pursued until its safety can be assured, and 73% demand robust regulatory frameworks to govern advanced AI technologies.
These figures underscore a growing demand for transparency and oversight in the AI sector, suggesting that both experts and the broader public are wary of unchecked progress.
Growing Momentum and Ongoing Signature Drive
The petition’s momentum continues to build, with the total signature count now reported at more than 27,700, well beyond the initial group of prominent figures. This expanding list of supporters reflects rising collective anxiety about the trajectory of AI research and a desire for deliberate, cautious advancement. The signatories’ call to pause superintelligence development aims to foster a more measured approach, ensuring that future AI systems can be integrated safely and responsibly into society.
In summary, the coalition of scientists, policymakers, and cultural figures is urging a temporary halt to the pursuit of AI superintelligence until comprehensive safety protocols and broad public agreement are secured. Their appeal is bolstered by public polling that reveals widespread concern over the rapid, unregulated evolution of AI, highlighting the urgent need for stronger governance and oversight.