Scammers Exploit Google AI Overviews with Fake Phone Numbers
Wired AI

How the Scam Works

Google’s AI Overviews replace traditional link lists with concise, AI‑generated summaries drawn from publicly available web content. Scammers have taken advantage of this process by publishing false contact numbers alongside reputable brand names on low‑profile websites. When the AI scrapes these pages, it includes the bogus numbers in its answer boxes. A user searching for a company’s phone number sees the fraudulent contact, calls it, and is connected to an impersonator who attempts to extract payment details or other sensitive information.

Evidence of the Threat

Reports in major publications and posts on social media have documented instances where fake support numbers appeared in Google’s AI Overviews. Credit unions, banks, and consumer‑protection agencies have warned customers about the danger, noting that phone‑number scams themselves are not new but are amplified by the AI‑driven presentation, which makes the information appear authoritative.

Google’s Response

Google acknowledges the problem and says it is actively improving its spam‑detection systems. The company claims its anti‑spam protections are “highly effective” at keeping scams out of AI Overviews and that it continues to roll out updates to better verify contact information. However, the company also notes that there is currently no way for users to disable AI Overviews entirely.

Practical Safety Measures

Experts recommend a simple verification step: after seeing a phone number in an AI Overview, perform a separate search for the company’s official website and locate the contact details there. This extra click helps ensure the number is legitimate. Users should also be wary of providing payment or personal data over the phone unless they have confirmed the caller’s identity through an official channel.

Broader Implications

The incident underscores a larger challenge with generative AI in search: the technology can surface outdated, inaccurate, or malicious information without clear cues to the user. As AI summaries become more common, the responsibility for fact‑checking shifts increasingly onto the end user. While AI can streamline information retrieval, critical queries—especially those involving financial transactions or personal data—still benefit from traditional verification methods.

Source: Wired AI
