Waste at the Speed of AI

Ruben Hassid, who writes the "How to AI" newsletter on Substack, made an observation recently that stopped me cold. He calls it "performative productivity" - that feeling at the end of a busy AI-assisted day where you've shipped a new landing page, rewritten the sales deck, drafted a content calendar, and fixed the email sequence, yet somehow can't name one thing that actually moved you forward. The problem, as he puts it, is that AI lets you skip the hardest part of work: deciding what's worth doing in the first place.

That landed for me because it's a problem we dealt with constantly in manufacturing - and we have a name for it.

In lean thinking, there are seven deadly wastes - muda, in Toyota's terminology. Overproduction, waiting, unnecessary transport, over-processing, excess inventory, unnecessary motion, and defects. The first one, overproduction, is considered the worst because it quietly enables all the others. You're not just wasting resources on the thing you made, you're creating downstream waste to store it, move it, track it, and eventually deal with it when nobody needs it.

AI is starting to feel like a very fast conveyor belt that doesn't ask where the belt is going.

The classic lean metaphor is automating a broken process. If you have a defective operation and you automate it, you don't fix the process - you just produce defects and unnecessary work faster. The same logic applies here. If the analysis wasn't worth doing, or the content wasn't going to move the needle, or the strategy deck was going to sit unread in someone's Google Drive, then doing it faster with AI doesn't help. You've just accelerated the production of waste. The velocity of output goes up; the value doesn't.

There's an economic concept that fits here too, one Hassid has also referenced: the Jevons Paradox. William Stanley Jevons observed in 1865 that more efficient steam engines didn't reduce coal consumption - they increased it, because efficiency made coal-powered processes cheaper, so people used more of them. The same dynamic is playing out with AI. Cheaper cognition means more projects, more experiments, more long-tail ideas that were previously too expensive to pursue. Some of that is genuinely valuable. A lot of it is Jevons in action - more consumption of something just because the marginal cost dropped.

The lean antidote to overproduction isn't speed control - it's the discipline of asking "should we be making this at all?" Toyota's system uses a concept called "pull," where downstream demand triggers upstream production. You don't make something until there's a genuine need for it. The discipline of pull is what keeps the conveyor belt from running away from you.

Applied to AI-assisted knowledge work, pull might mean asking a few questions before firing up the prompt: What decision does this output support? Who actually needs it and by when? What happens if we don't do it? These aren't complicated questions, but they require a moment of friction that AI, by design, eliminates. Before AI, as Hassid points out, starting something was hard enough that you were at least forced to briefly consider whether it was worth starting. That friction was annoying, but it was also a filter.

The resource dimension makes this more urgent than it might seem. AI inference is not free. The energy consumption of large language models is substantial and growing. When I ask AI to draft a report nobody will read or an analysis that won't change any decisions, I'm not just wasting my own time. There's a real resource cost behind each prompt, increasingly powered by a strained electrical grid. That's not an argument against using AI, but it's a reason to use it deliberately.

What this really comes down to is that AI amplifies your prioritization ability - or your lack of it. If you're already good at distinguishing high-value work from low-value work, AI makes you significantly more effective. If you're not, it makes you very busy in an unproductive direction. The tool doesn't fix the underlying judgment problem. It scales it.

Hassid's framing of "performative productivity" resonates because it's describing something lean practitioners have always understood: motion is not progress. Looking busy is not the same as adding value. AI doesn't change that distinction. It just makes the illusion easier to sustain.

The question to ask before the prompt isn't "how should I do this?" It's "should I do this at all?"