Google Removes Gemma Model from AI Studio After Senator Accuses It of Defamation
Background
U.S. Senator Marsha Blackburn sent a letter to Google chief executive Sundar Pichai alleging that the company’s Gemma model, accessible through the AI Studio development environment, produced false statements about her personal conduct. In the letter, Blackburn asserted that when asked about accusations of rape, the model fabricated a narrative involving a state trooper and alleged non‑consensual acts, which she described as entirely untrue. The senator also referenced a separate lawsuit filed by conservative activist Robby Starbuck, who claims Google’s AI systems have generated defamatory claims about him.
Blackburn framed the model’s output as more than a typical "hallucination"—a term commonly used to describe AI‑generated inaccuracies—arguing that the false statements constitute defamation that was distributed by a Google‑owned system. She linked the incident to broader concerns about perceived bias against conservative figures in AI technologies.
Google’s Response
Google acknowledged the issue, noting that Gemma was designed as a lightweight, open model for developers to integrate into their own applications, not as a consumer‑facing chatbot. The company explained that reports of non‑developers using AI Studio to ask factual questions prompted the decision to remove Gemma from the platform. Google emphasized that the model would remain accessible via its application programming interface (API) for legitimate development purposes.
In response to the senator’s allegations, Google’s vice president for government affairs and public policy reiterated that hallucinations are a known challenge in large language models and that the company is actively working to mitigate such errors. The company clarified that it never intended the model to serve as a public question‑and‑answer tool, reinforcing its commitment to responsible deployment of AI technologies.
Broader Implications
The removal of Gemma from AI Studio underscores the tension between rapid AI innovation and the demand for accountability, especially when political figures claim that AI outputs have caused reputational harm. The episode adds to ongoing debates about how technology companies should address erroneous or potentially defamatory content generated by their models, and how regulatory bodies might oversee such issues.