AI-generated ‘workslop’ is destroying productivity, say researchers

Workslop is a new term for the flood of low-quality content and outputs produced by employees leaning too heavily on generative AI.

A new analysis published in the Harvard Business Review suggests that the rapid adoption of artificial intelligence tools is undermining productivity in many workplaces rather than improving it. The article introduces the idea of “workslop”, a term used to describe the flood of low-quality content and outputs produced by employees leaning too heavily on generative AI systems. According to the research, the number of organisations adopting AI has risen sharply in the past two years, with many leaders assuming the technology would deliver immediate efficiency gains. Yet a study from MIT Media Lab cited in the article found that 95 percent of organisations have seen no measurable return on their investment in AI.

Instead of freeing people from routine tasks, many firms are finding that AI is creating new layers of work. Drafts, reports and proposals generated by software are often superficial, incoherent or riddled with errors, requiring staff to spend additional time correcting or recreating them.

The authors, researchers from Stanford and the firm BetterUp Labs, argue that this misplaced confidence in AI output has several consequences. Managers often encourage speed over accuracy, meaning employees are incentivised to produce a high volume of content regardless of quality. At the same time, the effort involved in editing or discarding poor outputs is largely invisible in official metrics, creating an illusion of progress. The result is a culture of busywork that undermines the very productivity improvements AI was meant to provide.

For organisations, the risks extend beyond wasted time. The article warns that morale suffers when employees are tasked with constant clean-up work, while innovation is stifled as teams devote energy to reworking flawed material. The danger is that companies confuse activity with achievement, measuring how many documents or presentations are produced rather than whether they achieve their intended impact.

The authors conclude that firms need to treat AI as a tool rather than a replacement for human judgement. They recommend establishing clearer processes for reviewing AI-generated content, investing in training so that employees know how to prompt and edit effectively, and setting targets that measure outcomes rather than volume. Without these safeguards, the promise of artificial intelligence may be lost in a tide of workslop that leaves people busier but no more productive.