AI Startup Challenges Government Blacklist Designation
Anthropic submitted sworn declarations to a California federal court late Friday, pushing back on the Pentagon's assertion that the AI company poses an "unacceptable risk to national security." The company argues that the government's case rests on technical misunderstandings and on claims that were never raised during months of negotiations.
The filings reveal a striking contradiction: internal Pentagon emails show government negotiators told Anthropic the two sides were "nearly aligned" or "very close" on resolving concerns about autonomous weapons and surveillance safeguards—just one week after President Trump and Defense Secretary Pete Hegseth publicly declared the relationship over.
What Led to This Dispute
The Pentagon designated Anthropic as a supply chain risk under the Federal Acquisition Supply Chain Security Act (FASCSA) on March 4, 2026, after the AI company refused to grant unrestricted access to its Claude AI model for military use. Anthropic insists on safeguards against mass surveillance and against autonomous weapons operating without human oversight, conditions the Pentagon views as unacceptable business restrictions.
The government warns of a "significant risk" that Anthropic could subvert defense AI systems, yet granted a six-month offboarding extension despite the designation's immediate effect. The designation followed late-February statements by Trump and Hegseth cutting ties, a directive halting federal use of Claude, and the cancellation of a $200 million contract.
Industry Rallies Behind Anthropic
Major tech players and military experts have filed briefs supporting Anthropic's challenge. Microsoft urged the court to temporarily lift the designation for "reasoned discussion," citing economic harm and the unprecedented use of such powers against a U.S. company. Nearly 150 retired judges from across the political spectrum filed an amicus brief calling the action unlawful and procedurally flawed. Retired military chiefs warned of risks to ongoing operations, including activities related to Iran.
"This appears to be retaliation for Anthropic's AI safety views, which are protected speech, rather than a valid security measure," the company's legal team stated in court filings.
What's Next
A hearing is scheduled for March 24, 2026, before U.S. District Judge Rita Lin in San Francisco. Legal experts predict the designation may not hold, citing statutory limits on the government's authority and the contradiction of permitting a phased removal while claiming immediate danger.
The case tests the government's leverage over AI firms amid U.S.-China competition and could set precedents for how national security powers apply to the emerging AI industry.