AI Haven

Anthropic Sues Pentagon Over 'Supply Chain Risk' Designation, Citing First Amendment Violation

Anthropic sued the Pentagon after being designated a "supply chain risk" for refusing to remove AI safety guardrails that block autonomous weapons and mass surveillance use.

March 14, 2026

Anthropic Takes Legal Action Against Pentagon Blacklisting

Anthropic has filed two lawsuits against the U.S. Department of Defense after the Trump administration designated the company a "supply chain risk," effectively banning government agencies and military contractors from using its Claude AI model. The lawsuits, filed in U.S. District Court for the Northern District of California on March 9, 2026, mark the first time such a designation has been applied to a U.S. company.

The conflict stems from Anthropic's refusal to remove safety guardrails in Claude that block its use for autonomous weapons and mass surveillance of American citizens. Pentagon officials demanded the company loosen these restrictions to grant unrestricted military access to Claude, which already operates on classified military networks. When Anthropic refused, the administration responded with the blacklisting.

First Amendment and Statutory Arguments

Anthropic's legal challenge makes two primary arguments. First, the company claims the designation constitutes a First Amendment violation, arguing the government is punishing it for its stated policy positions on AI safety rather than any legitimate security concern. The company contends that while the Pentagon has the right to choose alternative AI providers, it cannot blacklist a company as a national security threat based on policy disagreements.

Second, Anthropic argues the designation exceeds authority under 10 U.S.C. § 3252, which requires the Department of Defense to adopt the "least restrictive measures" when managing supply chain risks. The company seeks court orders to overturn the label, block enforcement, and void agency mandates to terminate Claude usage—without requiring Anthropic to accept government contracts.

Microsoft, Cato Institute File Briefs in Support

The case has drawn significant industry support. Microsoft, which has partnered with Anthropic through its Azure infrastructure, has filed a motion asking the court to block the Pentagon's designation. The Cato Institute, a leading policy think tank, has also filed legal briefs supporting Anthropic, warning that the blacklist sets a dangerous precedent for governments seeking to control how private technology companies develop their products.

The Pentagon had previously used Claude for military operations, including identifying targets in Iran, making the foreign-adversary-style label particularly contentious. President Trump publicly urged agencies to drop Anthropic via Truth Social, adding political pressure to the technical dispute.

Source: TechCrunch