
Anthropic is suing the Pentagon. Here's what it means.

Two weeks ago, the Pentagon designated Anthropic a supply chain risk — a label normally reserved for foreign adversaries. We covered what that meant for startups building on Claude the day it happened.

Since then, the situation has escalated dramatically. Anthropic filed two federal lawsuits. Over 30 OpenAI and Google DeepMind employees — including Google's chief scientist — filed an amicus brief supporting Anthropic. The Pentagon's CTO went on CNBC and said Claude would "pollute" defense systems. Palantir's CEO confirmed Claude is still active in defense tools. And Lockheed Martin told employees to stop using it.

If you build on Claude's API, use Claude Code, or rely on any Anthropic product for your business, this is the most consequential AI story of 2026. Here's what happened, what's likely to happen next, and what you should actually do about it.

What happened since the blacklist

Here's the timeline of the last two weeks, stripped of noise.

February 24: Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline — agree to "any lawful use" of Claude by the military by 5:01 PM on February 27, or face consequences. The dispute centered on two restrictions in Anthropic's existing $200 million Pentagon contract: no mass surveillance of Americans, and no fully autonomous weapons without human oversight. Both restrictions had been in place since the contract was signed in July 2025. Neither had blocked a single mission.

February 26: Anthropic publicly refused to budge.

February 27: Trump posted on Truth Social directing all federal agencies to "immediately cease" using Anthropic's technology. Hegseth designated Anthropic a supply chain risk. OpenAI signed its own Pentagon deal within hours.

March 4: The Financial Times reported Anthropic reopened talks with the Pentagon. Separately, the Washington Post reported Claude was being used in military operations against Iran — after the ban.

March 5: Anthropic confirmed it received the formal supply chain risk notification. The letter did not explain what specific security risk Claude poses.

March 9: Anthropic filed two lawsuits — one in San Francisco federal court, one in the D.C. Circuit Court of Appeals. The complaint calls the designation "unprecedented and unlawful" and alleges government retaliation for protected speech. Hours later, over 30 OpenAI and Google DeepMind employees — including Google chief scientist Jeff Dean — filed an amicus brief supporting Anthropic. They signed in their personal capacities, not on behalf of their companies.

March 12: Pentagon CTO Emil Michael went on CNBC and gave the clearest explanation to date. He said Claude would "pollute" the defense supply chain because it has "a different policy preference" baked in — a reference to Anthropic's constitutional AI training, which shapes how Claude handles ethical tradeoffs. The same day, Palantir CEO Alex Karp confirmed his company is still using Claude in its defense products. An internal Pentagon memo, reported by CBS News, said exemptions would be granted for "mission-critical activities" where "no viable alternative exists."

The contradictions at the center of this

The government's position has a problem that legal experts have been quick to flag: it contradicts itself on nearly every dimension.

The Pentagon says Claude is a supply chain risk that must be removed — but also that it's safe to keep using for six more months during a transition. The military used Claude in active operations against Iran the same week it was banned. Earlier that same week, Hegseth threatened to invoke the Defense Production Act to force Anthropic to provide Claude — on the theory that it was too essential to forgo. Days later, Trump said "We don't need it, we don't want it."

Lawfare published an analysis arguing the designation "exceeds what the statute authorizes" and that the government's public statements "may have doomed the government's litigation posture before it even begins." The core argument: you can't credibly claim a product is an acute national security threat while simultaneously relying on it for active combat operations and granting yourself six months to stop.

What the amicus brief actually says

The filing from OpenAI and Google employees is remarkable for a simple reason: competitors almost never do this. Nineteen OpenAI researchers and more than ten Google DeepMind researchers signed, including Jeff Dean — one of the most influential figures in the field.

Their central argument: if the Pentagon was unhappy with Anthropic's contract terms, it could have canceled the contract and hired another provider. Designating an American company a supply chain risk — a tool designed for foreign adversaries — was an "improper and arbitrary use of power."

They also made a point about what happens next if this stands. Right now, there are no federal laws governing AI use for surveillance or autonomous weapons. In that vacuum, the guardrails that AI companies build into their own products are the only formal protection that exists. If the government can blacklist a company for maintaining those guardrails, other companies will stop building them.

The brief specifically warned of a "chilling effect" on the entire industry — not just Anthropic.

The financial picture

Your existing Claude API access is fine. The designation applies to defense contract work, not commercial use. But the second-order effects are where it gets complicated.

Anthropic CFO Krishna Rao said in a court filing that, weighting each customer by how likely it is to act on a worst-case reading of the designation, the government's actions "could reduce Anthropic's 2026 revenue by multiple billions of dollars." The $200 million Pentagon contract itself isn't the issue — it's under 1.5 percent of the company's projected 2026 revenue of more than $14 billion. The issue is whether large enterprise customers with defense contracts get nervous and start looking for alternatives.

That concern isn't theoretical. Lockheed Martin has already told employees to stop using Claude. Other defense contractors are reportedly reviewing their Claude usage and developing contingency plans. Law firms advising government contractors have recommended inventorying all Anthropic exposure and preparing segregation plans.

On the other hand, the consumer side tells a different story. Claude became the number one AI app in the US App Store after the blacklist, surpassing ChatGPT for the first time. Anthropic said more than a million people were signing up for Claude per day at the peak of the dispute. The company's most recent fundraising round valued it at $380 billion, with over 500 customers paying at least $1 million annually.

What's likely to happen

Legal experts broadly agree the designation is on weak legal ground. The statute requires a risk assessment, notice, an opportunity to respond, and notification to Congress — requirements Anthropic says weren't followed. The government's own contradictory statements make the "necessity" finding hard to defend. And the first-of-its-kind use of a foreign-adversary designation against an American company raises constitutional questions about retaliation for protected speech.

But "likely to lose eventually" and "goes away soon" are two different things. Courts sometimes defer to national security claims in the short term. Anthropic has asked for a temporary restraining order, but it hasn't been granted yet. The six-month phaseout period gives the Pentagon time to maneuver. A full resolution could take months or longer.

The most probable outcome: Anthropic gets a preliminary injunction blocking enforcement while the case plays out, the government eventually drops or narrows the designation, and both sides negotiate a face-saving resolution that restores some form of military contract with modified terms. But none of that is guaranteed, and the timeline is uncertain.

What this means for you

If you're building on Claude for commercial products — SaaS, internal tools, customer-facing applications — the direct legal risk is zero. Your API access isn't changing. Your Claude Code workflows aren't affected. Cowork keeps working.

If you sell to enterprise customers who hold defense contracts, the risk is real but manageable. Their procurement teams may ask whether you use technology flagged as a supply chain risk. Have an answer ready. The formal designation is narrow — it applies to Pentagon contract work, not all commercial activity — but procurement officers tend toward caution.

If you're making infrastructure decisions right now about which AI provider to build on, this dispute is a data point in a larger pattern. The three biggest AI providers now have three completely different postures on military use. OpenAI accepted unrestricted access. Anthropic refused. Google is filling the gap while its chief scientist signs briefs supporting Anthropic. Building on any single provider means inheriting their political exposure. That was always true — it's just visible now.
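If you want to hedge that exposure in code, the pattern is straightforward: route every completion through one internal function and let it fail over between providers. Below is a minimal Python sketch using the official anthropic and openai SDKs. The model IDs are placeholders, and a production version would add retries, logging, and per-provider prompt tuning.

    import anthropic
    import openai

    def complete(prompt: str, max_tokens: int = 1024) -> str:
        """Try the primary provider; fail over to the secondary on any API error."""
        try:
            client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
            response = client.messages.create(
                model="claude-sonnet-4-5",  # placeholder model ID
                max_tokens=max_tokens,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.content[0].text
        except anthropic.APIError:
            # Same prompt, different vendor. A production version would also
            # log the failover and retry transient errors before switching.
            client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
            response = client.chat.completions.create(
                model="gpt-4o",  # placeholder model ID
                max_tokens=max_tokens,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content or ""

The design point is that application code calls complete() rather than a specific vendor, so changing the provider order is a configuration change, not a rewrite.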

The deeper question isn't whether Anthropic survives this dispute (it almost certainly will). It's whether you're comfortable with the vendor risk that comes with building on any single model provider in an environment where AI companies are increasingly entangled with government contracts, military operations, and political conflicts.

For what it's worth, the fact that employees of Anthropic's competitors filed a legal brief supporting it — that rivals said "this is wrong" at potential cost to themselves — suggests the AI industry understands something important: if the government can weaponize a supply chain designation against a company for holding ethical positions, nobody is safe.


FAQ

Is Claude's API still working? Yes. The supply chain risk designation applies to defense contract work only. All commercial Claude products — API, Claude Code, Pro, Max, Cowork — are unaffected.

Could Anthropic lose this lawsuit? Legal experts believe the designation is on weak ground, but courts sometimes defer to national security claims in the short term. The designation could remain in effect for months while the case plays out. Lawfare's analysis is the most detailed legal breakdown available.

Is the Pentagon still using Claude? Yes. Palantir's CEO confirmed it. The Washington Post reported it was used in operations against Iran after the ban. An internal Pentagon memo allows mission-critical exemptions. The CTO said they can't "just rip it out."

Should I stop building on Claude? Not because of this designation — it doesn't affect commercial use. But any major vendor dispute is a reminder to avoid single-provider lock-in. If Claude is your only model, consider how your stack would handle a disruption.

Did OpenAI benefit from this? OpenAI signed a Pentagon deal within hours of the blacklist. But 19 OpenAI researchers signed the amicus brief supporting Anthropic, and OpenAI's head of robotics resigned over the Pentagon contract. The company is divided internally.

How does this affect Anthropic's commercial business? The CFO said it could reduce 2026 revenue by "multiple billions" in a worst case. But consumer signups surged, Claude hit number one in the App Store, and the company is valued at $380 billion. The military contract was $200 million of a projected $14 billion in revenue.


This post updates our earlier coverage: What Anthropic's Supply Chain Risk Label Means If You Build on Claude. See also: GPT-5.4 vs Claude Opus 4.6, Why Your AI Tools Don't Talk to Each Other, and AI Executive Assistants in 2026.

Last updated: March 2026
