Google’s Threat Tracker report reveals that hackers are conducting "distillation attacks" against the Gemini AI model, flooding it with more than 100,000 prompts in an effort to steal its underlying technology. The attempts, which appear to originate from actors in North Korea, Russia, and China, are classified as model extraction attacks: adversaries probe a mature machine-learning system at scale and use its responses to replicate its capabilities. While Google says the activity does not directly threaten end users, it poses a serious risk to service providers and AI developers whose models could be copied and repurposed. The report highlights a growing wave of AI-focused theft and underscores the need for stronger defenses in a rapidly evolving AI landscape.
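As a rough illustration of the extraction pattern described above, the hypothetical Python sketch below queries a stand-in "teacher" model, logs its responses, and fits a toy "student" on the collected pairs. The `query_teacher` stub, the `TinyStudent` class, and the prompt set are all invented for illustration; they do not reflect Gemini's API, Google's report, or any attacker's actual tooling.

```python
# Illustrative sketch only: the generic "distillation" / model-extraction
# pattern of querying a large model and training a copy on its outputs.
# The teacher here is a local stub, not Gemini or any real API.

def query_teacher(prompt: str) -> str:
    # Stand-in for querying a large "teacher" model. A real extraction
    # attempt would send each prompt to the target model and record the
    # response; here we return a fake deterministic answer.
    return "canned answer for: " + prompt

def collect_training_pairs(prompts):
    # Step 1: send many prompts and log (prompt, response) pairs.
    return [(p, query_teacher(p)) for p in prompts]

class TinyStudent:
    # Step 2: fit a much smaller "student" on the logged pairs so it
    # imitates the teacher. A toy lookup table stands in for actually
    # fine-tuning a smaller language model on the harvested data.
    def __init__(self):
        self.memory = {}

    def fit(self, pairs):
        for prompt, response in pairs:
            self.memory[prompt] = response

    def predict(self, prompt: str) -> str:
        return self.memory.get(prompt, "unknown")

if __name__ == "__main__":
    prompts = [f"question {i}" for i in range(5)]  # reported attacks used >100,000
    student = TinyStudent()
    student.fit(collect_training_pairs(prompts))
    print(student.predict("question 3"))
```

The point of the sketch is simply that the attacker never needs the target's weights: enough prompt-response pairs can train a cheaper imitation, which is why the report frames high-volume prompting itself as the threat.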