The Intelligence Explosion: Human-AI Evolution, Not Singularity

If you haven't used one of the major AI platforms in the last few weeks, you're making an increasingly costly mistake: judging a technology by a version that no longer exists.

I hear it constantly. "AI hallucinates." "It just makes things up." "I tried it and it was wrong about everything." These complaints were largely fair as recently as twelve months ago. Today, not so much. The frontier models from OpenAI, Anthropic, Google, and others have improved at a pace that defies ordinary product development timelines. Hallucinations, while not extinct, have dropped dramatically, and most remaining errors trace back to vague or poorly constructed prompts – which, for the record, is also how you get bad answers from humans.

The agentic versions – AI systems that can plan, execute multi-step tasks, write and run code, and operate semi-autonomously alongside you on your desktop – are something qualitatively different from the chatbot you experimented with a year ago. Comparing them to Alexa is like comparing a pocket calculator to a Bloomberg terminal.

So why does this matter beyond keeping up? Because a paper just published in Science by James Evans, Benjamin Bratton, and Blaise Agüera y Arcas – three people about as deep in the AI development world as it gets – argues that we are already inside what may be the next great intelligence explosion, and most people haven't noticed because it doesn't look like the movies said it would.

The Singularity Got It Wrong

The dominant cultural narrative around AI's future has been the "singularity": a single, godlike machine intelligence bootstrapping itself to incomprehensible power, at which point humans become irrelevant or worse. It's a compelling story. It's also almost certainly wrong.

Evans, Bratton, and Agüera y Arcas argue that real, transformative intelligence has never worked that way. Every prior intelligence explosion in human history was social, not individual. Primate intelligence scaled with group size, not habitat difficulty. Human language created what developmental psychologist Michael Tomasello calls the "cultural ratchet": knowledge accumulating across generations without any individual needing to reconstruct it from scratch. Writing, law, and bureaucracy externalized collective intelligence into institutions that outlasted and outperformed any individual participant. A Sumerian grain accountant didn't comprehend macroeconomics; the system he worked within did.

AI extends this same sequence rather than rupturing it. Large language models are trained on the accumulated output of centuries of human social cognition – the cultural ratchet made computationally active. What migrates into silicon isn't abstract reasoning; it's social intelligence encountering itself on a new substrate.

Intelligence Was Always a Conversation

Research on frontier reasoning models like DeepSeek-R1 found that they don't improve simply by "thinking longer." Instead, they spontaneously generate what the authors call a "society of thought" – internal debates among distinct cognitive perspectives that argue, question, verify, and reconcile. Nobody trained them to do this. It emerged purely from optimization pressure rewarding accuracy. The models rediscovered, on their own, what epistemology and cognitive science have long suggested: robust reasoning is inherently social, even inside a single mind.

This connects to something we explored in an earlier post. If human cognition is itself a prediction and pattern-matching process shaped by millions of years of social evolution, it probably shouldn't surprise us that intelligence optimized for accuracy spontaneously organizes itself as a conversation.
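
If you want to see the shape of that idea, here is a minimal Python sketch of reasoning organized as an internal debate: several perspectives propose answers, cross-examine each other, and only answers that survive objection remain. To be clear, this is a hand-built analogue for illustration only. The paper's finding is that an equivalent structure emerges inside a single model's chain of thought; nothing below reflects DeepSeek-R1's actual internals, and every name here (Perspective, propose, critique, society_of_thought) is invented.

```python
# A hand-built "society of thought": perspectives propose answers,
# cross-examine one another, and only unobjected answers survive.
# Purely illustrative; in the models this structure emerges on its own.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Perspective:
    name: str
    propose: Callable[[str], str]         # question -> candidate answer
    critique: Callable[[str, str], str]   # (question, answer) -> objection, "" if satisfied

def society_of_thought(question: str,
                       perspectives: list[Perspective],
                       rounds: int = 3) -> str:
    # Every perspective stakes out an initial position.
    candidates = {p.name: p.propose(question) for p in perspectives}
    for _ in range(rounds):
        # Each answer is cross-examined by every *other* perspective.
        objections = {
            name: [p.critique(question, answer)
                   for p in perspectives if p.name != name]
            for name, answer in candidates.items()
        }
        # Keep only the answers that drew no objections this round.
        survivors = {name: ans for name, ans in candidates.items()
                     if not any(objections[name])}
        if survivors:
            candidates = survivors
        if len(candidates) == 1:
            break
    # Reconcile: return whichever answer withstood the debate.
    return next(iter(candidates.values()))
```

In the models, of course, none of this is wired by hand. The striking result is that something structurally like it appears on its own when training rewards nothing but getting the answer right.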

What's Already Happening

The intelligence explosion the authors describe isn't a future event. It's underway, and visible if you know where to look.

In drug discovery, AI is compressing timelines that once took decades. Rentosertib, developed by Insilico Medicine, is the first drug in which both the disease target and the molecule itself were identified entirely by generative AI, with no human hypothesis guiding either step. It's now in Phase III clinical trials for idiopathic pulmonary fibrosis.

The major AI labs are clearly reading the same signals. This week, Anthropic – the company behind Claude – quietly acquired Coefficient Bio, an eight-month-old startup with fewer than ten employees, for $400 million. Coefficient is building AI models for biological research with the explicit goal of what its founders called "artificial superintelligence for science." That Anthropic paid $400 million for a company that has barely had time to exist tells you something about where the smart money thinks this is heading.

Code generation may be the most quietly radical example. AI systems now write, test, debug, and refactor substantial codebases with minimal human intervention. Wasn't autonomous code generation one of the canonical markers of the singularity? It arrived without fanfare, bundled into a software subscription.

The authors describe the emerging configuration as "centaur" systems – hybrid actors that are neither purely human nor purely machine. One human directing many AI agents; one AI serving many humans; shifting configurations throughout the day. This is not science fiction. It's Tuesday.
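
For readers who build things, here is one hypothetical shape the first of those topologies could take in code: one human decomposes a goal, many agents execute in parallel, and the human stays in the loop as reviewer rather than executor. The interfaces (Agent, plan, human_review, centaur_session) are invented for illustration, not any vendor's actual API.

```python
# One hypothetical "centaur" topology: one human, many agents.
# The human decomposes the goal and reviews results; agents execute.

from concurrent.futures import ThreadPoolExecutor
from typing import Callable

Agent = Callable[[str], str]  # task description -> result

def centaur_session(goal: str,
                    plan: Callable[[str], list[str]],         # human (or planner) splits the goal
                    agents: list[Agent],
                    human_review: Callable[[str, str], bool]  # (task, result) -> accept?
                    ) -> list[str]:
    tasks = plan(goal)
    accepted = []
    with ThreadPoolExecutor() as pool:
        # Fan tasks out across the agent pool, round-robin.
        futures = [pool.submit(agents[i % len(agents)], task)
                   for i, task in enumerate(tasks)]
        # The human stays in the loop as reviewer, not as executor.
        for task, future in zip(tasks, futures):
            result = future.result()
            if human_review(task, result):
                accepted.append(result)
    return accepted
```

Swap the plan and review roles around and you get the inverse configuration – one AI serving many humans – which is the authors' point: the topology keeps shifting, the hybrid persists.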

The Part Nobody Wants to Think About

All of this demands governance, and the paper is direct about it. The dominant model for AI alignment – essentially a parent-child correction dynamic – doesn't scale to billions of interacting agents. The authors propose something more structural: institutional alignment, where agents operate within defined roles and norms the way courtrooms and markets function regardless of who occupies the chairs. The U.S. Founders would have recognized the logic immediately. No single concentration of intelligence should regulate itself, whether human or artificial.

This is the piece that keeps getting deferred in favor of more exciting conversations about capability. It shouldn't.
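
As a thought experiment, here is a minimal sketch of what institutional alignment could mean mechanically: constraints that attach to roles rather than to agents, with every consequential act logged where a different role can review it. The roles, norms, and the trader/auditor example are all hypothetical – this is my toy rendering of the idea, not anything proposed in the paper's text.

```python
# A toy of "institutional alignment": constraints attach to roles, not agents.
# Whoever occupies a seat is bound by that seat's norms, and consequential
# acts are logged where a *different* role can review them.

from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    name: str
    permitted: frozenset[str]       # action types this role may take
    must_log_to: str | None = None  # role that audits this one, if any

class Institution:
    def __init__(self) -> None:
        self.roles: dict[str, Role] = {}
        self.occupants: dict[str, str] = {}           # role name -> agent id
        self.ledger: list[tuple[str, str, str]] = []  # (role, action, detail)

    def seat(self, role: Role, agent_id: str) -> None:
        # The rules attach to the chair, not to whoever sits in it.
        self.roles[role.name] = role
        self.occupants[role.name] = agent_id

    def act(self, role_name: str, action: str, detail: str) -> bool:
        role = self.roles[role_name]
        if action not in role.permitted:
            return False  # norm violation: refused, regardless of occupant
        if role.must_log_to is not None:
            # No role audits itself: the record goes where another role looks.
            self.ledger.append((role_name, action, detail))
        return True

inst = Institution()
inst.seat(Role("trader", frozenset({"quote", "execute"}), must_log_to="auditor"), "agent-7")
inst.seat(Role("auditor", frozenset({"review"})), "human-3")
inst.act("trader", "execute", "buy 100 @ 42")  # allowed, and logged for the auditor
inst.act("trader", "review", "own trades")     # refused: outside the seat's norms
```

Replace agent-7 with a different agent, or with a person, and nothing about what the trader seat may do changes. That's the courtroom logic the authors are pointing at.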

The Framing That Actually Helps

The authors close with a line worth sitting with: intelligence growing like a city, not a single meta-mind.

Cities are messy, inefficient, occasionally chaotic, and nearly impossible to control from the top down. They're also where most of human progress has happened. The question isn't whether AI will become more powerful – it will, faster than most people are prepared for. The question is whether we'll build social and institutional infrastructure worthy of what it's becoming.

The singularity was always the wrong frame. This is evolution. We've been here before. We just haven't done it with non-biological minds.
