Meta Deploys AI to Identify and Remove Under‑13 Users from Facebook, Instagram

Meta disclosed a suite of artificial‑intelligence tools aimed at keeping children under the age of 13 off its flagship platforms, Facebook and Instagram. The company’s blog post details how the new system blends textual analysis with visual scanning to flag under‑age accounts more reliably than before.

On the textual side, the AI looks for contextual hints in user‑generated content. Mentions of a school grade, birthday celebrations, or other age‑related language in profiles, posts, and captions trigger a closer review. Simultaneously, a visual‑analysis engine examines photos and videos for physical indicators such as height and bone structure. Meta stresses that the process is not facial recognition; the algorithm estimates a general age range without identifying a specific individual.
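The flow described in the paragraph above — textual cues combined with a coarse, non-identifying visual age estimate — can be sketched as a toy decision rule. All patterns, thresholds, and function names below are illustrative assumptions; Meta has not published its actual models:

```python
# Toy sketch of combining textual age cues with a visual age-range
# estimate, as described in the article. Patterns, names, and
# thresholds are illustrative assumptions, not Meta's system.
import re

AGE_CUES = [
    r"\b(?:turning|turned|i'?m)\s+(?:1[0-2]|[6-9])\b",  # e.g. "turning 12"
    r"\b[1-7]th\s+grade\b",                              # school-grade mentions
]

def textual_cue_score(text: str) -> float:
    """Fraction of under-13 cue patterns that match the text."""
    text = text.lower()
    hits = sum(bool(re.search(p, text)) for p in AGE_CUES)
    return hits / len(AGE_CUES)

def flag_for_review(text: str, visual_age_upper_bound: int) -> bool:
    """Flag an account when a text cue fires or the estimated age
    range sits entirely below 13. The visual model is assumed to
    return only a coarse range, consistent with the article's note
    that no facial recognition (identifying a person) is involved."""
    return textual_cue_score(text) > 0 or visual_age_upper_bound < 13

print(flag_for_review("turning 12 next week!", visual_age_upper_bound=15))  # True
```

In practice a flag like this would only queue the account for closer review, mirroring the article's description of a "closer review" trigger rather than an instant ban.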

When the system suspects a user is under 13, the account is automatically deactivated. The user must then provide proof of age—such as a government‑issued ID—to regain access. If verification does not occur, Meta wipes the account entirely. This two‑step approach aims to reduce the number of under‑age accounts that slip through manual checks.
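The suspend‑then‑verify lifecycle described above can be modeled as a small state machine. This is an illustrative sketch; the states, method names, and the 13+ threshold check are assumptions based on the article, not Meta's actual enforcement pipeline:

```python
# Illustrative state machine for the suspend -> verify -> delete
# flow the article describes. All names are assumptions for the
# sketch, not Meta's implementation.
from enum import Enum

class AccountState(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"   # flagged as possibly under 13
    DELETED = "deleted"

class Account:
    def __init__(self):
        self.state = AccountState.ACTIVE

    def flag_underage(self):
        """AI flag: deactivate the account pending verification."""
        if self.state is AccountState.ACTIVE:
            self.state = AccountState.SUSPENDED

    def submit_proof_of_age(self, verified_age: int):
        """Reinstate only if a reviewed ID shows the user is 13+."""
        if self.state is AccountState.SUSPENDED:
            self.state = (AccountState.ACTIVE if verified_age >= 13
                          else AccountState.DELETED)

    def verification_window_expired(self):
        """No proof submitted in time: the account is removed."""
        if self.state is AccountState.SUSPENDED:
            self.state = AccountState.DELETED

acct = Account()
acct.flag_underage()
acct.submit_proof_of_age(verified_age=16)
print(acct.state)  # AccountState.ACTIVE
```

Modeling the flow this way makes the two exits from suspension explicit: successful verification restores access, while a failed or missing verification ends in deletion.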

The visual‑analysis feature is currently live in a handful of countries, with Meta saying it will broaden the rollout as the technology matures. In parallel, the company is extending its AI‑driven age detection to the 13‑ to 15‑year‑old bracket. Detected teens will be shifted into dedicated teen accounts that include parental controls and additional safety features. The pilot for this teen‑account system launches on Instagram in Brazil and across 27 European Union member states.

Facebook will receive the teen‑account upgrade next, starting in the United States before expanding to the EU and the United Kingdom in the coming months. WhatsApp, meanwhile, has introduced parent‑managed accounts that let children under 13 use the messaging app under adult supervision.

Meta’s moves come amid mounting regulatory pressure. The European Commission recently released preliminary findings from an investigation into Facebook and Instagram, suggesting the platforms may be violating the Digital Services Act by failing to adequately prevent under‑age participation. Meta now has an opportunity to review the Commission’s findings and implement corrective measures.

Industry observers note that Meta’s reliance on AI marks a shift from purely manual moderation toward automated, scalable solutions. By combining textual cues with visual age estimation, the company hopes to close gaps that previously allowed under‑age users to slip through verification processes. Critics, however, caution that algorithmic judgments can produce false positives, potentially wiping legitimate accounts.

Regardless of the debate, the rollout signals Meta’s intent to align its platforms with global child‑protection standards while navigating a complex regulatory landscape. The next few months will reveal how effectively the AI tools perform at scale and whether they satisfy the demands of regulators and privacy advocates alike.

Source: Engadget
