Pentagon labels Anthropic a "supply-chain risk" as AI wars escalate
The US Department of Defense has officially designated Anthropic—the AI company behind Claude—a "supply-chain risk," escalating a high-stakes conflict over the conditions under which the military can use artificial intelligence. The designation bars military contractors from working with Anthropic and threatens the company's $200 million Pentagon contract.
The decision, first reported by The Wall Street Journal on March 6, came after weeks of failed negotiations between Anthropic and the Pentagon. At the core of the dispute: Anthropic's AI safety standards prohibit the use of its models for autonomous weapons and mass domestic surveillance—conditions the company refuses to remove.
"We do not believe this action is legally sound, and we see no choice but to challenge it in court," said Anthropic CEO Dario Amodei.
Legal experts back Anthropic's challenge
Legal analysts see strong grounds for Anthropic's lawsuit. The company can challenge the designation under the Administrative Procedure Act as "arbitrary and capricious," with direct precedent from Luokung Technology Corp. v. Department of Defense (2021) supporting relief against unsupported Pentagon designations.
Defense Secretary Pete Hegseth's public statements may significantly weaken the government's position. He accused Anthropic of "arrogance and betrayal," "duplicity," and "defective altruism"—language that legal observers say undercuts claims of objective national security analysis.
"Every layer of the government's position has serious problems," noted one legal expert. "Any single issue could be independently fatal to the designation's legal standing."
Consumer market reacts dramatically
The conflict has already sent ripples through the consumer AI market. Following OpenAI's announcement of its own Pentagon partnership on February 28, ChatGPT uninstalls surged 295% day-over-day—more than 30 times the app's typical 9% daily uninstall rate. One-star reviews jumped 775%, while five-star ratings dropped by 50%.
Meanwhile, Claude capitalized on the backlash, with downloads surging 51% on February 28 as Anthropic publicly declined to partner with the Department of Defense. By March 2, 2026, Claude reached No. 1 on the US App Store for the first time in its history, surpassing ChatGPT. The app climbed more than 20 places compared to February 22 and also hit No. 1 in six other countries, including Canada and Germany.
The shift reflected user sentiment favoring AI companies with stronger ethical commitments. According to market intelligence firm Sensor Tower, Claude's US downloads in the week following the controversy were approximately 20 times higher than January levels.
What comes next
Anthropic's lawsuit could take months or years to resolve, but the case will likely set precedent for how AI companies can negotiate with federal agencies over safety guardrails. Meanwhile, OpenAI has absorbed the Pentagon contract that Anthropic rejected—and appears to be weathering the consumer backlash, at least for now.