Anthropic Sues Pentagon Over 'Supply Chain Risk' Designation
Anthropic has filed two lawsuits against the Pentagon challenging its designation as a "supply chain risk" to national security, a label historically reserved for foreign adversaries. The legal action, filed on March 9, 2026, stems from a breakdown in negotiations over a $200 million defense contract and raises profound questions about government power, free speech, and the future of AI in military applications.
Background of the Dispute
The conflict originated when the Pentagon sought to renegotiate Anthropic's government contracts to permit "all lawful use" of its Claude AI system. CEO Dario Amodei established two specific red lines: the company would not allow Claude to be used in fully autonomous weapons systems or for mass surveillance of American citizens. When the Pentagon refused to accept these conditions and instead designated Anthropic as a supply chain risk on February 27, 2026, negotiations collapsed.
Legal Arguments
Anthropic filed lawsuits in U.S. District Court in Northern California and the D.C. Circuit Court of Appeals. The company argues that the designation violates its First Amendment rights by penalizing it for vocal advocacy on AI safety policy. Anthropic also contends the Pentagon lacks statutory authority for the designation under 10 U.S.C. § 3252, which requires employing the least restrictive means to address supply chain concerns rather than blacklisting a supplier entirely.
"The Pentagon has every right to decline to do business with us," Anthropic wrote in a public statement. "But it cannot label us a security threat because of our policy positions and our refusal to weaken safety guardrails."
Industry Support
More than 30 researchers from OpenAI and Google DeepMind, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief supporting Anthropic's lawsuit. This unusual show of solidarity from direct competitors highlights the industry's concern over the precedent set by the Pentagon's actions.
Meanwhile, Palantir CEO Alex Karp has publicly stated that the company continues to use Anthropic's Claude models despite the government designation, noting that replacing deeply integrated systems would take time.
Broader Implications
The outcome could significantly affect the AI industry and U.S. competitiveness against China. President Trump has called for all government agencies to cease using Anthropic products, and the administration has signaled that it may escalate further through an executive order.
The case is likely to become a landmark in the developing legal framework around AI governance, military applications, and the limits of government power over technology companies.