The junk food work problem

AI can make you look productive without making you better. The firms that get this right draw a deliberate line between work they delegate and work they protect.

There’s a pattern emerging in how businesses use AI that looks like productivity but isn’t.

Someone sets five AI agents running before their morning coffee. They come back to a stack of research summaries, draft documents, competitor analyses. It feels efficient. The output is there. The volume is impressive.

But something is missing. The person reviewing all this material didn't do the research. They didn't sit with the sources, and didn't form their own view along the way. They're consuming pre-digested conclusions instead of building understanding.

This is what Anthropic’s Jack Clark recently called “junk food work.” It looks good from the outside. It fills the plate. But it doesn’t build the thing you actually need.

The schlep work distinction

Not all delegation is equal. There’s a real difference between handing AI the work that wastes your time and handing it the work that builds your judgement.

Formatting a report? Scheduling follow-ups? Compiling data from three systems into one view? That’s schlep. It needs doing, nobody learns from it, and AI handles it well.

But the first draft of a strategy document. The research that shapes a recommendation. The analysis that forces you to confront what the data actually says. That’s learning work. And when you delegate learning work, you lose something that doesn’t show up on any dashboard.

Why this matters for businesses

Most people can sustain around two to four hours of genuinely creative, high-value work per day. The rest tends to be administrative overhead. AI is brilliant at clearing the overhead so you can spend more of those hours on the work that matters.

The risk is when AI starts eating into the creative hours too. When the research summary replaces the research. When the drafted recommendation replaces the thinking. When reviewing AI output becomes the job, rather than forming your own position first.

Over time, this can hollow out exactly the skills that make someone valuable. The intuitions that come from doing the work. The judgement that comes from wrestling with messy information. The taste that comes from producing something yourself, seeing what works, and refining it.

Drawing the line deliberately

The firms that tend to handle this well don’t leave it to individual choice. They make explicit decisions about which workflows are delegation-safe and which ones are learning-critical.

Delegation-safe: Anything repetitive, structured, or administrative. The output matters, but the process of producing it doesn't build anyone's capability.

Learning-critical: Anything that develops judgement, deepens domain expertise, or requires the kind of understanding that only comes from direct engagement with the material.

The line isn’t always obvious. And it moves as people develop. But having the conversation at all is what separates teams that get sharper over time from teams that quietly lose the edge they started with. This connects to a broader question about who owns AI adoption inside your business — because someone needs to be making these calls.

The long view

AI is going to keep getting better at producing work that looks good. The question for every business isn’t whether to use it. It’s whether you’re using it in a way that makes your people better, or just makes them busier.

The firms that protect learning work — deliberately, not accidentally — will be the ones with the best judgement five years from now. And judgement, increasingly, is the thing that no system can replace.