AI Haven

Federal Judge Blocks Trump's Pentagon Ban on Anthropic in First Amendment Victory

Federal judge grants Anthropic preliminary injunction against Pentagon ban, calling the government's actions "Orwellian" and designed to punish protected speech.

March 28, 2026


A federal judge has temporarily blocked the Trump administration's attempt to blacklist Anthropic from government contracts, ruling the ban appeared designed to punish the AI company for its protected speech. U.S. District Judge Rita Lin issued the preliminary injunction on March 26, calling the government's actions "Orwellian" and warning they would cause irreparable harm to Anthropic's business and reputation.

The Pentagon designated Anthropic a "supply chain risk" on February 28, 2026, after the company refused to remove ethical safeguards from its Claude AI model that block mass surveillance and fully autonomous weapons use. The Department of War demanded Anthropic provide its AI for "all lawful purposes" without restrictions, effectively requiring the removal of safety measures that already complied with existing federal law and Pentagon policy.

Anthropic CEO Dario Amodei has maintained that the company's position reflects technical judgment about model reliability, not ideology. The company argued that its safety guardrails exist precisely because unconstrained AI deployment in military contexts poses genuine risks.

Hours after Anthropic's ban took effect, OpenAI secured its own Pentagon contract. Unlike Anthropic, OpenAI agreed to provide its models for defense use while maintaining what it described as stronger technical safeguards rather than policy-based restrictions. The deal drew scrutiny from some AI safety researchers, who pointed to the ethical distinctions between the two companies' approaches.

The injunction temporarily halts enforcement of both the direct ban on federal agencies using Claude and the Pentagon's supply chain risk designation. The case will proceed through the courts, with Anthropic seeking a permanent reversal of the administration's actions.

The ruling represents a significant victory for Anthropic and could establish precedent for how AI companies can maintain ethical positions while doing business with the U.S. government. The case highlights the tension between military demands for unrestricted AI access and companies that prioritize safety constraints as a matter of principle.

Source: TechCrunch