
Meta launches AI bone‑structure scans to flag under‑13 accounts on Facebook and Instagram

Meta disclosed on Tuesday that its platforms will start using artificial intelligence to scan images and videos for physical traits that suggest a user is younger than 13. The AI looks for "general themes and visual cues," including height and bone structure, to flag potential child accounts. Unlike facial‑recognition tools, the system does not try to match a face to an individual; it simply assesses whether the subject appears to be a minor.

When the algorithm flags an account, Meta will deactivate it and prompt the owner to verify their age before the profile can be restored. The move is part of the company’s broader effort to keep children under the age of 13 off its services, a requirement of the Children’s Online Privacy Protection Act (COPPA). In addition to visual analysis, the AI will scan posts, comments, bios and captions for contextual clues that suggest a user may be underage.

Meta is also extending a related technology that automatically places users aged 13 to 17 into "Teen Accounts." These accounts impose stricter content controls, block messages from strangers and prevent users under 16 from livestreaming. Instagram already uses this system; Facebook will follow suit in the United States, with a rollout planned for the United Kingdom and the European Union in June.

The new AI tools are being introduced in a handful of "select" markets, including the United States, ahead of a wider global deployment.

The announcement comes on the heels of a New Mexico jury verdict that found Meta had violated state law by misleading customers about platform safety and failing to protect children from predators. The court ordered the company to pay $375 million and signaled that additional changes may be required, a prospect Meta has previously warned could lead it to withdraw from the state.

In its blog post, Meta reiterated its support for age‑verification mechanisms embedded at the app‑store and operating‑system levels, an approach gaining traction among lawmakers in Congress and in states such as California and Colorado. The company argues that system‑wide verification would provide a more reliable barrier against underage access than platform‑specific measures alone.

Critics have pointed out that scanning visual content for bone structure raises privacy concerns, even though Meta emphasizes that the technology does not identify individuals. The firm maintains that the AI serves a protective function, aiming to reduce the exposure of children to harmful content and interactions.

As the rollout proceeds, Meta will monitor the effectiveness of the AI in identifying under‑13 users and adjust its policies accordingly. The company’s next steps include expanding the technology to additional regions and refining the teen‑account safeguards to comply with evolving regulatory expectations.


Source: The Verge
