AI-Generated Content Overwhelms Social Media, Raising Authenticity and Trust Concerns

Proliferation of AI‑Generated Media

Social media platforms have long been criticized for presenting overly polished, aspirational content. The problem has deepened recently as generative artificial‑intelligence tools, most notably OpenAI’s Sora, Google’s Veo and the image‑to‑video platform Midjourney, enable users to produce lifelike videos and images from simple text prompts. The result is an influx of low‑quality, AI‑generated posts, a phenomenon dubbed “AI slop”: a deluge of content ranging from whimsical animal videos to AI‑crafted vacation photos and entirely fabricated influencer personas.

Deepfakes and the Erosion of Trust

Alongside AI slop, deepfake technology has made it easier to create convincing videos of public figures saying or doing things that never occurred. The spread of such fabricated material makes it harder for users to discern fact from fiction. According to a study by the media firm Raptive, when people suspect that a piece of content is AI‑generated, they trust it less and feel a weaker emotional connection to it; nearly half of respondents viewed AI‑created content as less trustworthy.

Expert Perspectives on Platform Shifts

Alexios Mantzarlis, director of Cornell Tech’s Security, Trust and Safety Initiative, observes that social platforms appear to prioritize keeping users engaged with the technology itself rather than fostering genuine human connections. He notes that tech companies are showcasing AI capabilities to boost stock performance, often at the expense of user experience. Mantzarlis warns that AI‑driven content may amplify unrealistic body standards and further distort reality for viewers.

Industry Response and Regulatory Gaps

In response to mounting concerns, major platforms have pledged to label AI‑generated material and to prohibit harmful posts, such as those that misuse private individuals’ likenesses. However, the pace of regulation lags behind rapid AI advancements, leaving platforms to police content largely on their own. Paul Bannister, chief strategy officer at Raptive, highlights that while AI tools democratize creation by expanding the pool of potential creators, the sheer volume of generated media poses significant moderation challenges.

Potential Paths Forward

Stakeholders suggest giving users more control over the amount of AI content they encounter, citing Pinterest’s optional AI‑content filters as a possible model. Such measures could help restore a degree of authenticity to feeds and mitigate the spread of misinformation. Until clearer regulatory frameworks emerge, the balance between fostering creative AI use and preserving trust on social media remains a contested frontier.


Source: CNET
