Americans Struggle to Identify AI-Generated Content on Social Media, Survey Finds

Survey Overview

A CNET‑commissioned study that surveyed U.S. adults who use social media found that an overwhelming majority—94%—believe they encounter content that was created or altered by artificial intelligence. Despite this high exposure, confidence in distinguishing authentic images and videos from AI‑generated ones is low, with only 44% of respondents saying they feel sure they can spot the difference.

Public Confidence Across Generations

Confidence varies by age group. Older users are the least certain: only 40% of Boomers and 28% of Gen X say they can identify AI‑generated media. Younger users, especially Gen Z, report higher confidence, but even there it falls short of a majority.

Verification Practices

When faced with potentially AI‑generated content, 72% of respondents report taking some action to verify its authenticity. The most common method, used by 60% of respondents, is close visual inspection for telltale cues or artifacts. Other tactics include checking for labels or disclosures (30%) and searching for the content elsewhere online, such as through reverse‑image searches (25%). Only 5% have used a dedicated deepfake‑detection tool.

However, a notable portion of respondents—25%—do nothing to verify content, with inaction highest among Boomers (36%) and Gen X (29%).

Desire for Better Labeling

Half of the surveyed adults (51%) say the internet needs better labeling of AI‑generated and edited content. Support for stronger labeling is strongest among Millennials (56%) and Gen Z (55%). The rationale is that clear disclosures could help users make more informed decisions about what they see.

Opinions on Regulation and Bans

When asked about policy approaches, 21% of respondents believe AI‑generated content should be prohibited on social media altogether, with the highest support among Gen Z (25%). Conversely, 36% favor allowing AI content but with strict regulation. Only a small minority (11%) find AI‑generated media useful, informative, or entertaining.

Current Platform Responses

Major social platforms currently permit AI‑generated content as long as it does not violate existing content guidelines. Some, like Pinterest, have introduced filters to limit AI content in users’ feeds, while others, such as TikTok, are still testing similar tools. Users can also mute or filter AI‑driven features on devices and applications, including Meta AI on Instagram and Facebook, Apple Intelligence, and Google’s Gemini suite.

Practical Tips for Users

The survey’s authors recommend a multi‑layered approach: remain vigilant for visual oddities, check for any disclosed labels, and use reputable verification tools like the Content Authenticity Initiative’s detector. They also suggest reviewing the source account for red flags, such as a lack of genuine followers or a history of posting dubious content.

Implications

The findings underscore a widening gap between the rapid advancement of AI‑generated media and the public’s ability to critically assess it. While many users are taking steps to verify content, a substantial share—particularly older adults—remain vulnerable. The call for better labeling reflects a growing demand for clearer standards that could help bridge this confidence gap.

Source: CNET