Musk’s lawsuit challenges OpenAI’s shift from safety‑focused research to profit‑driven AI
Elon Musk’s lawsuit against OpenAI entered a federal courtroom in Oakland on Thursday, accusing the artificial‑intelligence lab of betraying its founding promise to prioritize humanity’s safety as it pursues artificial general intelligence. The suit hinges on whether the company’s for‑profit subsidiary has steered OpenAI away from a research‑first culture and toward a product‑centric model that could erode its safeguards.
Rosie Campbell, who joined OpenAI’s AGI readiness team in 2021 and left in 2024, took the stand to describe the internal changes she witnessed. Campbell said the lab’s early days were dominated by discussions of AGI and safety, but that over time “it became more like a product‑focused organization.” She pointed to the disbanding of two safety‑oriented groups, the AGI readiness team she led and the Superalignment team, both dissolved as the company shifted resources toward commercial deployments.
One incident underscored the tension. Microsoft rolled out a version of OpenAI’s GPT‑4 model through its Bing search engine in India before the model received approval from OpenAI’s Deployment Safety Board (DSB). Campbell testified that, while the model did not pose a “huge risk,” the premature launch set a dangerous precedent. “We want to have good safety processes in place we know are being followed reliably,” she said, emphasizing the need for strong standards as AI capabilities grow.
During cross‑examination, Campbell acknowledged that substantial funding is essential for achieving AGI, yet she warned that “creating a super‑intelligent computer model without the right safety measures … wouldn’t fit with the mission” of the organization she originally joined. Her comments align with OpenAI’s own public safety framework, though the company declined to comment on its current alignment strategy.
The courtroom also heard testimony from former board member Tasha McCauley, who described a pattern of limited transparency from CEO Sam Altman. McCauley recounted that Altman failed to inform the board about the decision to launch ChatGPT publicly and concealed potential conflicts of interest. She said those episodes called into question the non‑profit board’s ability to oversee the for‑profit arm and eroded directors’ confidence in the information they received.
Altman’s brief ouster in 2023 also resurfaced in the hearing; the unapproved India deployment was among the grievances raised at the time. Board members and senior executives, including then‑chief scientist Ilya Sutskever and then‑CTO Mira Murati, expressed concerns about Altman’s management style and disclosure practices. The board’s reversal, after staff rallied behind Altman and Microsoft intervened, led dissenting members to step down, further exposing governance fractures.
David Schizer, a former dean of Columbia Law School retained by Musk’s legal team as an expert witness, reinforced the safety argument. He said OpenAI “emphasizes that a key part of its mission is safety and they are going to prioritize safety over profits.” Schizer stressed that any safety rule requiring review must be enforced consistently, suggesting that the current process falls short.
Beyond OpenAI, the case raises broader policy questions. McCauley argued that internal governance failures at a leading AI lab should prompt stronger government oversight of advanced AI systems, warning that “if it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal.”
Musk’s lawsuit, therefore, does more than target a single company; it challenges the prevailing balance between rapid commercialization and the ethical safeguards that many believe should govern the development of transformative AI technologies.