"We Don't Use Anthropic." Think Carefully Before You Put That In Writing.

A friend of mine recently received a letter from a U.S. government-aligned customer directing their company to immediately cease use of all Anthropic AI tools. Their instinct was to fire off a quick reply: "We do not use Anthropic in connection with your account and will ensure no such tools are used going forward."
I urged them to stop and consider the deeper ramifications.
Anthropic Is Not Just Claude
Most people understand AI at the brand level: ChatGPT, Claude, Gemini, Perplexity, Copilot. That's the consumer-facing layer, and it's where most mental models stop. But beneath it sits an entirely different reality.
Anthropic's models are embedded across the AI ecosystem, often invisibly. Amazon Bedrock and Google Cloud Vertex AI both offer Claude as a selectable backend. Orchestration and multi-agent frameworks like LangChain and CrewAI can route tasks to Anthropic models mid-workflow, chosen at runtime rather than declared up front. Hundreds of third-party AI tools run on Anthropic's API without advertising it anywhere in their documentation.
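To make that invisibility concrete, here is roughly what it looks like when an application reaches Claude through Amazon Bedrock rather than through Anthropic directly. This is a minimal sketch assuming the boto3 SDK and Bedrock access; the model ID and prompt are illustrative. Notice that nothing in the call path mentions Anthropic except one string.

```python
import json

import boto3

# Minimal sketch: calling Claude through Amazon Bedrock with boto3.
# The caller talks only to AWS endpoints; the fact that an Anthropic
# model answers is encoded in a single model-ID string, which in a real
# deployment typically lives in a config file, not in the code.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative Claude model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarize this clause."}],
    }),
)

print(json.loads(response["body"].read())["content"][0]["text"])
```

Swap that model ID for an Amazon Titan or Meta Llama identifier and the surrounding code doesn't change at all, which is precisely why a vendor's own engineers may not know, without auditing configuration, which model family a given feature rides on.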
Consider Google. It sells Gemini as its flagship AI product, yet simultaneously offers Claude through Google Cloud Vertex AI, holds a 14% ownership stake in Anthropic, has invested over $3 billion across multiple rounds, and signed a cloud infrastructure partnership worth tens of billions of dollars. Competitor, investor, and infrastructure provider, all at once. If your technology stack touches Google Cloud in any meaningful way, you may already be closer to Anthropic than you think.
A Commitment You Can't Verify
Commit to "no Anthropic" today and you may be in violation before the ink is dry, through tools you already use and didn't build yourself. This is functionally unverifiable. You'd need to continuously monitor every third-party tool in your stack, track API dependencies several layers deep, and watch every acquisition in the space, because a tool that's Anthropic-free today may not be after the next transaction closes. More importantly, you'd be making a potentially false representation, which is far more legally dangerous than pushing back on an overreaching directive in the first place.
We've been here before. Two decades ago, enterprises issued sweeping directives to prohibit open source software, citing security concerns and lack of vendor accountability. The problem was that open source was already woven so deeply into commercial software stacks that enforcement was effectively impossible. The issue didn't get resolved so much as it quietly became irrelevant, and open source is now universally accepted infrastructure. Foundational AI providers like Anthropic are on the same trajectory, and the directives trying to wall them off are likely to age just as poorly.
You're Also Setting a Precedent
There's a broader business problem that has nothing to do with AI. When you modify your standard terms or workflow in response to a unilateral customer directive, you've established that your contracts are negotiable on demand. The next request won't feel like an exception; it will feel like standard practice, because you've made it one.
And this particular request isn't trivial to accommodate. Segregating a customer onto an Anthropic-free workflow, if that's even achievable, means building and maintaining a parallel process that diverges from how you serve everyone else. You've effectively created a custom product for one customer, with the ongoing operational and compliance verification costs that come with it, none of which were priced into the original agreement.
Then consider the litigation environment. Large enterprises with deeply embedded Anthropic-based workflows are unlikely to simply comply. Anthropic itself has both the resources and the incentive to push back legally. This situation has the structure of the recent government directives aimed at major law firms: a sweeping demand, calibrated to see who folds. The firms that complied early, hoping to avoid conflict, found themselves bound by commitments that became embarrassing liabilities when the government eventually retreated, while the firms that refused faced no consequences at all. Compliance that looks prudent in week one can look naive by month three.
The Adoption Curve Is Steepening
The AI landscape is consolidating and expanding simultaneously at a pace unlike anything we've seen in enterprise technology. New agents, orchestration layers, and vertical applications are launching weekly, and the underlying model relationships are increasingly opaque and intertwined. Locking yourself out of one major platform is not a one-time compliance decision; it's a compounding constraint that gets more expensive over time.
The market also just sent a clear signal about Anthropic's trajectory. When Anthropic refused a Pentagon demand to deploy its models for mass domestic surveillance and autonomous weapons systems, walking away from a reported $200M contract to hold that line, the public responded decisively. Claude hit number one on the U.S. App Store, overtaking ChatGPT. Daily signups quadrupled. Paid subscribers more than doubled. ChatGPT uninstalls spiked 295% in a single day. Anthropic traded a government contract for broad consumer and enterprise trust at a pivotal moment. That's not a platform you want to be contractually prohibited from using just as its adoption curve steepens.
The political pressure driving these supplier letters is real, and I don't dismiss it. But before your team fires off a quick compliance response, make sure you understand what you're actually agreeing to. A strong opinion carefully expressed is almost always better than a hasty commitment you can't keep.