
AI-Generated 'Workslop' Erodes Trust and Quality in the Workplace

TechRadar

Background

A joint study from Harvard Business Review and the Stanford Social Media Lab examined how generative AI tools are being used in everyday work tasks. The research highlighted that platforms such as Google’s Gemini, Microsoft’s Copilot, Anthropic’s Claude and OpenAI’s ChatGPT have become embedded in productivity applications, letting users generate summaries, reports, presentations, code and graphics from a few prompts.

Definition of "Workslop"

The study introduced the term "workslop" to describe AI‑generated work content that appears polished but lacks the depth and accuracy needed to meaningfully advance a task. It is presented as a cousin to the broader phenomenon of "AI slop," which refers to low‑quality AI output that floods the internet with poor art, writing and other media.

Prevalence and Impact

According to the findings, roughly 40% of surveyed workers reported receiving workslop in their organizations. Recipients described the material as confusing and, in some cases, offensive. The spread of workslop has also begun to shape how employees view one another, with coworkers who lean heavily on AI perceived as less capable, reliable, trustworthy, creative and intelligent.

Perceptions of AI‑Generated Work

Even when AI tools produce seemingly complete outputs, the study notes that they can still contain errors, hallucinations and generic phrasing. OpenAI’s GPT‑5 model, for instance, was highlighted as an effort to curb hallucinations, yet the research stresses that no current system is error‑free. Reliance on a narrow set of telltale buzzwords such as "delve," "pivotal" and "realm" also gives AI text a cookie‑cutter feel that undermines originality.

Recommendations

The authors propose several strategies to mitigate workslop. First, organizations should frame AI as a smart assistant rather than a substitute for human expertise. Second, clear guidelines must be established to ensure AI output undergoes editing, fact‑checking and personalization before distribution. Third, fostering in‑person meetings, direct collaboration and traditional brainstorming can help preserve the creative spark that AI lacks. By treating AI tools as supportive resources and not as replacements, companies can avoid the reputational risks associated with subpar AI‑generated deliverables.

Conclusion

The study underscores a growing tension between the efficiency gains offered by generative AI and the erosion of trust when AI‑generated work fails to meet quality standards. As AI continues to permeate workplace workflows, the balance between rapid automation and rigorous human oversight will determine whether organizations reap the benefits of these technologies or fall victim to the spread of workslop.


Source: TechRadar
