Google Warns of Large-Scale AI Model Extraction Attacks Targeting Gemini
Google’s Threat Tracker Highlights AI Model Theft
In a newly released Threat Tracker report, Google disclosed that hackers are executing large‑scale "distillation attacks" aimed at its Gemini artificial‑intelligence model. The report details one incident in which more than 100,000 prompts were used to probe Gemini’s capabilities, with the intent of extracting the model’s underlying knowledge and reproducing it in a separate system.
The attackers appear to be operating from several nations, including North Korea, Russia and China. Google classifies these activities as model extraction attacks, a technique in which an adversary leverages legitimate access to a mature machine‑learning model, systematically querying it and harvesting the responses as training data for a new model. This approach effectively lets attackers clone the original model’s performance without directly compromising user data.
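To make the mechanics concrete, the sketch below shows the general shape of a distillation‑style extraction loop: repeatedly query a target model and record each prompt/response pair as a supervised training example for a separate "student" model. It is illustrative only; every name in it (query_target_model, the output file) is a hypothetical placeholder, not a detail from Google’s report.

```python
import json

# Hypothetical stand-in for a call to the target model's API; in a real
# extraction attack this would be an ordinary, legitimately authenticated request.
def query_target_model(prompt: str) -> str:
    return f"<response to: {prompt}>"  # placeholder response

def harvest_pairs(prompts: list[str], out_path: str) -> None:
    """Record prompt/response pairs as supervised training data for a
    separate 'student' model -- the classic distillation setup."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = query_target_model(prompt)
            # Each JSONL line becomes one fine-tuning example for the student.
            f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")

if __name__ == "__main__":
    # The report describes probing on the order of 100,000 prompts;
    # three are enough here to show the loop's structure.
    probes = [
        "Explain quicksort step by step.",
        "Summarize the causes of World War I.",
        "Translate 'good morning' into French.",
    ]
    harvest_pairs(probes, "distillation_pairs.jsonl")
```

The point of the sketch is how little it requires: no intrusion, no stolen credentials, just systematic use of the same interface legitimate customers use, which is what makes this class of attack hard to distinguish from heavy but lawful usage.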
According to the report, the primary target of these attacks is not the end user but rather the broader ecosystem of service providers and AI developers. Google emphasizes that while the current activity does not pose an immediate threat to its customers, it does raise significant concerns for companies that invest heavily in building and fine‑tuning large language models. The theft of intellectual property could enable competitors to launch near‑identical offerings at a fraction of the development cost.
John Hultquist, chief analyst for Google’s Threat Intelligence Group, described the situation as a “canary in the coal mine,” suggesting that Google may be among the first major firms to encounter this type of AI theft, but that many more incidents are likely to follow. The report situates these attacks within a broader pattern of AI‑related cyber threats, noting that the war over advanced models has intensified across multiple fronts.
Recent developments in the AI field underscore the competitive pressure driving such theft. Chinese companies, including ByteDance, have introduced sophisticated video‑generation tools, while DeepSeek, another Chinese AI firm, released a model that rivaled leading U.S. technologies. OpenAI has previously accused DeepSeek of training its model on the outputs of OpenAI’s own systems, a distillation tactic similar to those described in Google’s report.
Google’s findings serve as a warning to the AI community about the emerging risk of model extraction. The company urges developers to implement robust monitoring and access controls, and to consider defensive measures such as watermarking model outputs and limiting the volume of queries from any single source. As AI systems become more integral to a wide range of applications, protecting the intellectual property that underpins them will be essential to maintaining a competitive and secure technology landscape.
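As one concrete illustration of the query‑limiting control Google recommends, here is a minimal sketch of per‑source volume limiting over a sliding time window. The thresholds and identifiers are invented for the example; a real deployment would layer this with authentication, anomaly detection, and output watermarking.

```python
import time
from collections import defaultdict, deque

class QueryVolumeLimiter:
    """Rejects requests from any single source that exceeds
    max_queries within a sliding window of window_seconds."""

    def __init__(self, max_queries: int = 1000, window_seconds: float = 3600.0):
        self.max_queries = max_queries
        self.window_seconds = window_seconds
        self._history: dict[str, deque] = defaultdict(deque)

    def allow(self, source_id: str) -> bool:
        now = time.monotonic()
        timestamps = self._history[source_id]
        # Drop timestamps that have aged out of the window.
        while timestamps and now - timestamps[0] > self.window_seconds:
            timestamps.popleft()
        if len(timestamps) >= self.max_queries:
            return False  # Volume cap hit: candidate for throttling or review.
        timestamps.append(now)
        return True

# Example: a deliberately tiny cap to show the behavior.
limiter = QueryVolumeLimiter(max_queries=3, window_seconds=60.0)
for i in range(5):
    print(f"request {i}:", "allowed" if limiter.allow("api-key-123") else "blocked")
```

A hard cap like this will not stop a determined extractor who rotates accounts, which is why Google pairs it with monitoring and watermarking rather than presenting any single control as sufficient.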