The Pace We Didn't Plan For

A few months ago, the dominant AI concern was jobs. Would it take yours? Which roles were safe and which weren't? That felt like the right level of worry for a technology that, as many still describe it, essentially guesses the next word in a sequence.
Then Anthropic released Mythos, and the conversation shifted rather abruptly to critical infrastructure and existential risk.
I'll offer a small admission before getting to the larger point. I follow AI development more closely than most people I know, use these tools regularly, and have been writing about their expanding capabilities for months. I still haven't come close to understanding, let alone using, the full extent of what current models can already do. That's not false modesty; it's a useful data point for what follows.
A friend who works at one of the top AI labs told me something a few months ago that stuck with me. His company has models ready for deployment that it is deliberately holding back. Not for technical reasons, but because the capabilities would frighten people, and the company wants the public to grow more comfortable with existing tools before the next leap arrives. Sit with that for a moment. We're at a point where technology companies are managing the psychological pacing of disclosure, rationing what the public is allowed to understand their systems can do. And if Mythos is any indication, even the developers may not fully grasp what they've built, or they grasp it well enough to be genuinely worried.
Mythos is Anthropic's latest frontier model, the accepted term for systems sitting at the leading edge of what AI can currently do. Anthropic shared access with just 11 organizations, all of them American, with Britain's AI Security Institute as the sole international exception. That arrangement alone set off a global scramble. Within weeks, Mythos had identified thousands of zero-day vulnerabilities (previously unknown software flaws) across every major operating system and browser, achieving an 83 percent exploit success rate on the first attempt. Those aren't abstract targets. The same software categories run the control systems embedded in power grids, water treatment facilities, financial networks, and nuclear plants. Anthropic itself called the model too powerful for broad public release. The Bank of England governor warned publicly about escalating global cyber-risk. Canada's finance minister compared the threat to a closure of the Strait of Hormuz.
This is not an autocomplete concern.
The deeper issue isn't Mythos specifically. OpenAI, DeepSeek, and others will have comparable frontier models within 18 months by Anthropic's own estimate, and some may be less deliberate about controlled release. Mythos is a crystallizing moment, not a singular one. The question it forces is structural: how did we arrive at a point where a private company in San Francisco determines which nations and organizations have access to something with weapons-test-level geopolitical consequences?
The answer is that this was always going to happen given how AI was built. Historically, the most dangerous technologies emerged from government institutions. The Manhattan Project wasn't funded by venture capital. Government development meant government control from the beginning, with classification systems, internal oversight, and at least the theoretical possibility of international frameworks to follow. AI was different. It was built by private companies, with private capital, at private speed. By the time any regulatory body understood what was being created, the capability was already in private hands, with private criteria for access. Anthropic's decision to initially share Mythos with 11 American companies isn't a conspiracy. It's the predictable consequence of a development model that outpaced every governance structure that might have shaped it differently.
Which brings us to the guardrails problem, and it has no clean solution. Countries that implement serious AI constraints operate at a disadvantage against those that don't. Responsible actors are structurally penalized relative to reckless ones. Governments that want guardrails face a genuine dilemma: restrain development and risk being overwhelmed by adversaries who won't, or race ahead and contribute to the instability they're trying to prevent. No nation will accept an AI nonproliferation framework if it believes its adversaries will ignore it. The incentive structure points in one direction only: everyone feels they must invest rapidly or risk falling irreversibly behind. And "irreversibly" isn't rhetorical here. We're talking about gaps that open over weeks or months, not years. No previous economic or technological threat, not offshoring, not semiconductor competition, has materialized at that speed. This race has no finish line and no agreed-upon rules, and the laps are getting shorter.
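This is the textbook structure of a prisoner's dilemma, and a few lines of Python make the trap concrete. What follows is a minimal sketch with purely illustrative payoff numbers, my assumption rather than anything from Anthropic or the labs; all the numbers encode is the ordering the argument depends on, where racing beats restraint no matter what the other side does, even though mutual restraint beats mutual racing.

```python
# Illustrative payoffs for the guardrails dilemma. The numbers are assumptions
# chosen only to encode the ordering described above: unilateral racing pays
# best, being the lone restrained actor pays worst, and mutual restraint
# beats mutual racing.
PAYOFFS = {
    # (our choice, their choice): (our payoff, their payoff)
    ("restrain", "restrain"): (3, 3),  # coordinated, stable development
    ("restrain", "race"):     (0, 4),  # we fall irreversibly behind
    ("race",     "restrain"): (4, 0),  # we gain a decisive edge
    ("race",     "race"):     (1, 1),  # shared instability
}

def best_response(their_choice: str) -> str:
    """Return the choice that maximizes our payoff, given the other side's move."""
    return max(("restrain", "race"),
               key=lambda ours: PAYOFFS[(ours, their_choice)][0])

for theirs in ("restrain", "race"):
    print(f"If they {theirs}, our best response is: {best_response(theirs)}")
# Both lines print "race". Racing is a dominant strategy, even though
# (race, race) leaves both sides worse off than (restrain, restrain).
```

The specific numbers don't matter; any payoffs with that ordering produce the same dominant strategy, which is why the argument doesn't depend on how you quantify the stakes.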
Europe is the clearest illustration of what this looks like in practice, though it stands in for most of the world. The EU has genuine regulatory ambitions, has passed the AI Act, and has the institutional capacity to at least attempt coordinated action. And yet European governments look at the energy infrastructure, compute investment, and capital availability in the United States and China and recognize that their regulatory environment and capital markets structurally cannot match that scale. Even a few months of careful deliberation may now be disqualifying in this race. Nations with fewer institutional resources than Europe, which is most of them, don't have even that choice to make. They're largely observers of a contest whose outcome will shape their future regardless.
What makes all of this genuinely unsettling isn't just the technology. It's the timeline. We moved from debating AI's impact on employment to confronting its implications for critical infrastructure security in roughly twelve months. Democracies don't build international consensus in twelve months. Regulatory frameworks don't form in twelve months. Public understanding certainly doesn't shift that quickly, which is why a meaningful portion of the population still thinks of these systems primarily as sophisticated text generators. That perception gap matters, because the political will to address a problem depends on people first understanding what the problem actually is.
Something comparable to Mythos will emerge from another lab soon enough, perhaps developed somewhere less concerned with coordinated defense. And one final layer of sobriety worth noting: quantum computing, still a few years from practical deployment at scale, is advancing faster than most expected. Researchers are increasingly confident it will dramatically amplify AI capabilities, compressing already-short development cycles further. If the current pace feels difficult to govern, it's worth asking what governance looks like when that acceleration hits.
The question worth sitting with isn't what we do about this specific model. It's whether we're capable of building the kind of global coordination this moment requires, at the speed it requires. We've never had to do that before. A year ago, we were debating whether AI would write your emails.
What will we be debating a year from now?