
Sam Altman warns AI is making social media feel fake while promoting human‑verification device


Altman’s Observation on AI‑Driven Social Media

Sam Altman, the chief executive of OpenAI, shared his perception that social networking sites have become increasingly artificial due to the proliferation of large language model outputs. He explained that the typical reading experience now feels “very fake,” as users struggle to distinguish between genuine human posts and those generated by artificial intelligence. Altman pointed out that the phenomenon is not limited to a few platforms; it spans multiple sites where AI‑crafted content now competes with authentic user contributions.

Factors Contributing to the “Fake” Feeling

According to Altman, several dynamics are at play. Real users have begun to adopt the linguistic quirks of AI, leading to a convergence of style that blurs the line between human and machine expression. Additionally, highly engaged online communities tend to reinforce each other’s behaviors, amplifying AI‑inspired patterns. The competitive pressure on platforms to maximize engagement further encourages the circulation of content that is optimized for clicks rather than authenticity.

Altman’s Role in Addressing the Issue

While highlighting the challenges, Altman also referenced his involvement with a hardware initiative designed to verify human presence on the internet. The device, marketed under the name Orb Mini, aims to confirm that a user is a real person before granting access to online services. Altman suggested that widespread adoption of such verification could help restore confidence in the authenticity of social interactions.

Potential Impact of Human Verification

If implemented broadly, the verification technology could serve as a countermeasure to the flood of AI‑generated posts that currently erode trust on social platforms. By requiring a physical proof of humanity, the system would make it more difficult for automated accounts to masquerade as real users, thereby reducing the prevalence of deceptive content.

Broader Implications

Altman’s comments reflect a tension between the rapid advancement of generative AI and the societal need for genuine communication. His dual focus—raising awareness of the problem while promoting a technological solution—underscores the complex role that AI leaders play in shaping both the capabilities of the technology and the policies that govern its use. The conversation points to an emerging debate about how to balance innovation with safeguards that preserve the integrity of online discourse.

Used: News Factory APP - news discovery and automation - ChatGPT for Business

Source: TechRadar
