
Isaac Asimov wrote his famous line about ignorance and democracy in a 1980 Newsweek column, and it's been bouncing around LinkedIn this week with the urgency of a newly minted insight. He wrote that anti-intellectualism has been "nurtured by the false notion that democracy means that 'my ignorance is just as good as your knowledge.'" Hard to argue with that, and forty-five years of evidence haven't done much to weaken his case.
Most of the comments on the post were predictable: agreement from the educated, resentment from those who felt targeted, and the obligatory reminder that a PhD doesn't automatically confer wisdom. (True, but also a bit convenient as deflections go.) One comment, though, stopped me cold. Dr. Leon Tsvasman, whose work sits at the intersection of philosophy, cybernetics, and what he calls "epistemic integrity," pushed Asimov's argument somewhere more uncomfortable.
The Deeper Problem
His point, roughly: the real danger isn't only that ignorance claims equality with knowledge. The deeper danger is that knowledge itself has been mistaken for orientation. A civilization can accumulate expertise, credentials, data, and perfectly legitimate discourse, and still remain radically disoriented. Truth isn't defended by information alone. It's defended by the capacity to distinguish relevance from noise, validity from plausibility, and judgment from mere participation. Democracy doesn't collapse only when ignorance is flattered. It also collapses when opinion, expertise, performance, and symbolic legitimacy all circulate on the same flattened plane, with no shared means of deciding which one actually counts.
Asimov diagnosed a problem with the inputs: too much ignorance being granted unearned standing. Tsvasman is pointing at a problem with the processing layer, and that's a meaningfully different diagnosis.
He frames the historical shift this way: humanity has moved from a world where scarcity organized intelligence to a world where synthetic abundance tests it. The old bottleneck was access to information. The new bottleneck is the criteria through which meaning, validity, and relevance get discerned. In that environment, plausibility can circulate faster than understanding. Volume overwhelms attention. Procedure outgrows judgment. And a society gradually loses what he calls the "vertical dimension" — the shared capacity to say that some things are simply more true, more binding, more real than others, regardless of who's asserting them or how confidently.
The Lean Connection
I keep coming back to this through the lens of lean manufacturing, which is an occupational hazard at this point. One of the foundational principles of the Toyota Production System is genchi genbutsu: go to the actual place, observe the actual thing. Not the report about the thing, not the dashboard summarizing it, not the consultant's synthesis. The thing itself. Each step away from direct observation is an opportunity for noise to masquerade as signal, and those distortions compound quickly.
What Tsvasman is describing is a civilization that has lost its genchi genbutsu at scale. And this is where the current moment gets genuinely alarming: we're now deploying AI systems that are extraordinarily good at generating fluent, credentialed-sounding knowledge at industrial volumes, with no orientation whatsoever. The output looks like information. It circulates like information. It gets cited like information. Whether it's actually tethered to anything real is a separate question the system has no reliable mechanism to answer. We've moved from scarcity of information to a glut of it, and the tools we're reaching for to manage that glut don't solve the underlying problem. They add more volume to an already cacophonous room.
Tsvasman has developed this thinking into a broader framework he calls the Sapiocratic Charter of Human Integrity. It's dense reading, but the core idea is that what civilization now needs is an "orientation layer" — not just better information, but an actual infrastructure for preserving judgment, discernment, and epistemic integrity in an age of enabling systems. Worth the effort if this kind of thing keeps you up at night.
What Do We Do About It?
Richard Hofstadter traced the roots of American anti-intellectualism all the way back in his 1963 book Anti-Intellectualism in American Life, arguing that contempt for expertise got woven into our religious, political, and educational fabric almost from the founding. His diagnosis and Asimov's are essentially the same: we've romanticized willful ignorance as authentic, democratic, proof of not being an elitist snob. What neither of them fully anticipated is that the 21st century would add a second pathology on top of the first. Not just ignorance claiming equality with knowledge, but a glut of disoriented knowledge generating its own fog. Misinformation that sounds authoritative. Expertise deployed in service of predetermined conclusions. "Do your own research" as a phrase that means the exact opposite of what it says.
So what do we do with this? I'm genuinely not sure, which I realize is an unsatisfying place to land. But the lean practitioner's instinct is probably right: the answer starts with going back to the gemba. Direct observation over abstracted reporting. Asking "how do we know that?" before "what should we do about it?" Accepting, as any decent scientist or engineer already knows, that the most honest thing you can say is often "I don't know yet, but here's how I'd find out."
The orientation layer Tsvasman describes doesn't get rebuilt through better algorithms or more fact-checkers. It gets rebuilt one honest conversation at a time, in actual places, about actual things.
Asimov would probably have a wry comment about our odds.