


As the interstellar object 3I/ATLAS whipped through our solar system, I came across hundreds of YouTube videos, narrated in the same British-accented monotone, playing up the possibility that alien technology was behind it. In each video, the narrator made the same points over and over.
These are examples of the AI slop that has invaded many corners of the internet - and there’s a cost to its proliferation. “Lately, I’ve seen the explosion of AI slop on YouTube: low-effort, mass-produced videos flooding the platform, crowding out authentic, creative voices,” according to David Linthicum, a highly regarded enterprise technology influencer. “AI-generated content, when used to maximize clicks rather than value, is jeopardizing the entire ecosystem. Viewers are bombarded with shallow, repetitive, and sometimes misleading videos. Meanwhile, genuine creators—those putting in real thought, research, and personality—are struggling to compete.”
The same is happening at the enterprise level. In this case, AI tools are being used to produce “workslop”— “content that appears polished but lacks real substance,” according to a report published in Harvard Business Review. Workslop has become a liability for AI implementations, according to the team of authors, led by Kate Niederhoffer, VP of BetterUp Labs and a social psychologist.
This may be hobbling efforts to produce value from genAI, as well as crowding out productivity, the researchers state. It’s no wonder, then, that only about five percent of genAI efforts are considered to be delivering value, as a study out of MIT suggested.
Niederhoffer’s team collaborated with the Stanford Social Media Lab on a survey of 1,150 full-time employees, which found that many “are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers.” They define workslop “as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
People can rapidly generate reports, slides, and computer code with simple genAI prompts, which may be a temptation to overworked or stressed-out employees. “But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand.” As a result, workslop “shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work.” Niederhoffer and her co-authors call this being “workslopped.”
At least 40% of employees in the survey report having received workslop in the last month. An average of 15% of the content they receive at work qualifies, they calculate. The phenomenon occurs mostly between peers (40%), but workslop is also sent to managers by direct reports (18%).
Essentially, any productivity gains achieved through genAI get washed out by workslop. “When coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context," the co-authors said. The cost runs high: employees in the survey “reported spending an average of one hour and 56 minutes dealing with each instance of workslop. These workslop incidents carry an invisible tax of $186 per month.” This could add up to millions of dollars in large organizations with thousands of employees.
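The scaling behind that claim can be sketched as a back-of-envelope calculation. The $186-per-month figure is from the survey; the 10,000-person headcount is a hypothetical illustration, and applying the average tax to every employee is a simplifying assumption, not something the report states:

```python
# Back-of-envelope estimate of the annual cost of workslop at scale.
MONTHLY_TAX_PER_EMPLOYEE = 186   # dollars per month, per the HBR survey
MONTHS_PER_YEAR = 12
headcount = 10_000               # hypothetical large organization (assumption)

# Simplifying assumption: every employee incurs the average monthly tax.
annual_cost = MONTHLY_TAX_PER_EMPLOYEE * MONTHS_PER_YEAR * headcount
print(f"Estimated annual workslop tax: ${annual_cost:,}")
# → Estimated annual workslop tax: $22,320,000
```

Even if only a fraction of employees receive workslop in a given month, the same arithmetic still lands in the millions for an organization of this size.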
There is an interpersonal cost and a loss of trust as well. “Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than they did before receiving the output," the co-authors report. Forty-two percent saw them as “less trustworthy,” and 37% saw that colleague as “less intelligent.”
Niederhoffer and her co-authors offer the following three points of advice to discourage workslop creation:
Offer guidance and rules of the road for genAI usage. Employees need to understand the limits of AI usage, and that they cannot use AI freely without accountability. "It’s easy to see how this translates into employees thoughtlessly copying and pasting AI responses into documents, even when AI isn’t suited to the job at hand.”
Recognize that mindsets matter. It’s important to involve people in the AI design and development process. BetterUp’s work finds workers more engaged with the AI process “are much more likely to use AI to enhance their own creativity” than their less AI-engaged counterparts. These less-engaged employees are much more likely to use AI “in order to avoid doing work” than their more engaged peers.
Emphasize collaboration. Working closely with colleagues on projects requires responsible use of AI tools. “Today’s work requires more and more collaboration, not only with humans but also, now, with AI," said Niederhoffer. "Collaboration in 2025 must include the ways we incorporate AI work products into our common workflows, in service of shared outcomes, rather than as a vehicle for subversively dodging responsibility.”