
Researchers Reveal AI Model Theft via Electromagnetic Side‑Channel

New Physical‑Layer Threat to AI Models

A research team led by scientists at KAIST has uncovered a novel way to steal artificial‑intelligence (AI) models without breaching a computer system. The method relies on capturing the tiny electromagnetic signals that GPUs emit while processing AI workloads. By analyzing these emissions, the team was able to infer the internal structure of a model, including its layer configuration and parameter choices.

How ModelSpy Works

The researchers built a device they named ModelSpy, which consists of a small antenna that can be concealed inside a bag. The antenna picks up faint electromagnetic traces produced by the GPU as it performs calculations. These traces are subtle but follow patterns that correspond to the architecture of the neural network being run. The team collected data from multiple GPU types and demonstrated that the antenna could operate from as far as six meters away, even through walls.
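The article does not describe ModelSpy's analysis pipeline, but the core idea — that different layer types leave recognizable signatures in a captured trace — can be sketched with a toy template-matching example. Everything here is illustrative: the templates, trace values, and segment length are invented for the sketch, not taken from the research.

```python
# Illustrative sketch only; ModelSpy's actual signal processing is not public.
# Idea: each layer type leaves a characteristic power signature in the EM
# trace, so a captured trace can be segmented and matched against templates.

# Hypothetical per-layer EM signatures (made-up values).
TEMPLATES = {
    "conv":  [0.9, 0.7, 0.9, 0.7],
    "dense": [0.4, 0.4, 0.4, 0.4],
    "pool":  [0.2, 0.1, 0.2, 0.1],
}

def distance(a, b):
    """Sum of squared differences between two equal-length traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify_segment(segment):
    """Return the layer type whose template best matches the segment."""
    return min(TEMPLATES, key=lambda name: distance(TEMPLATES[name], segment))

def infer_architecture(trace, seg_len=4):
    """Split a captured trace into fixed-size segments and label each one."""
    segments = [trace[i:i + seg_len] for i in range(0, len(trace), seg_len)]
    return [classify_segment(s) for s in segments if len(s) == seg_len]

# A synthetic "captured" trace: conv -> pool -> dense, with mild noise.
trace = [0.88, 0.72, 0.91, 0.69,   # conv-like
         0.21, 0.12, 0.19, 0.08,   # pool-like
         0.41, 0.38, 0.43, 0.39]   # dense-like
print(infer_architecture(trace))   # ['conv', 'pool', 'dense']
```

Even with the small perturbations in the synthetic trace, nearest-template matching recovers the layer sequence — which is why, per the countermeasures section, raising the noise floor is a natural defense.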

Accuracy and Scope of Extraction

By processing the captured signals, the researchers were able to reconstruct key details of the AI model’s design. Tests showed that core structures could be identified with up to 97.6 percent accuracy. The approach does not require any physical contact with the target system, nor does it depend on traditional software exploits or network access. Instead, it treats the computation itself as a side channel that inadvertently reveals sensitive information.

Implications for Industry

The findings raise immediate security concerns for organizations that rely on AI models as proprietary assets. Many companies consider the architecture of their models to be core intellectual property, and the ability to extract this information remotely could represent a direct business risk. Existing defenses that focus on software hardening or network segmentation may be insufficient because the vulnerability originates from hardware emissions.

Potential Countermeasures

The authors of the study also suggested ways to mitigate the risk. Adding electromagnetic noise to the environment and adjusting how computations are scheduled can make the emitted patterns harder to interpret. These recommendations point to a broader shift in AI security, where hardware‑level adjustments become as important as software updates.
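The noise-injection idea can be sketched in a few lines. This is a minimal toy model, not the authors' method: it assumes an attacker who matches captured traces against known per-layer templates, and shows that overlaying random noise pushes the observed trace away from its true template, degrading the match. The template and trace values are invented.

```python
import random

# Illustrative sketch of the noise countermeasure: raise the EM noise
# floor so captured traces no longer resemble clean layer signatures.

def distance(a, b):
    """Sum of squared differences between two equal-length traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def add_em_noise(trace, amplitude, seed=0):
    """Overlay bounded random noise on each sample of a trace."""
    rng = random.Random(seed)
    return [x + rng.uniform(-amplitude, amplitude) for x in trace]

template = [0.9, 0.7, 0.9, 0.7]       # hypothetical conv-layer signature
clean = [0.88, 0.72, 0.91, 0.69]      # a faithfully captured trace
noisy = add_em_noise(clean, amplitude=0.5)

# The noisy capture sits much farther from the template than the
# clean one, so template matching becomes unreliable.
print(distance(clean, template) < distance(noisy, template))  # True
```

Randomizing computation schedules works on the same principle: it scrambles the temporal alignment the attacker's pattern matching depends on, rather than the amplitude.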

Recognition and Future Outlook

The research was presented at the NDSS Symposium, signaling that the security community takes the threat seriously. As AI systems become more widespread, the possibility of side‑channel attacks like ModelSpy may grow, emphasizing the need for comprehensive protection strategies that address both digital and physical aspects of computation.


Source: Digital Trends
