Anthropic, the Silicon Valley artificial intelligence company behind the Claude chatbot, filed suit against the Trump administration Monday after the company was blacklisted and labeled a threat to U.S. national security. The litigation marks a dramatic escalation in one of the most contentious standoffs between a private tech firm and the federal government in recent memory.
Anthropic filed two separate lawsuits — one in California federal court and another in the federal appeals court in Washington, D.C. — each challenging different aspects of the Pentagon’s actions against the company.
At the heart of the dispute is a designation that has historically been reserved for foreign enemies. The Pentagon formally designated the San Francisco tech company a supply chain risk — the first time the federal government is known to have used the designation against a U.S. company. The label effectively bars defense contractors and vendors from using Anthropic’s AI models in any work tied to the Department of Defense.
The conflict stems from a contract negotiation that broke down over two bright lines Anthropic refused to cross. The company sought assurances that its AI tool would not be used for mass surveillance of U.S. citizens or for fully autonomous weapons. The Pentagon, however, insisted on using Anthropic’s AI for “all lawful purposes,” saying it could not allow a private company to dictate how it uses its tools in a national security emergency.
President Trump weighed in personally, sharing a post on social media that directed federal agencies to “immediately cease” all use of Anthropic’s technology: “WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
The company’s complaint states: “These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here.”
The company’s legal argument rests on two pillars. First, Anthropic claims the designation punishes it for being outspoken about its views on AI policy — a violation of its First Amendment rights. Second, it challenges the statutory authority underpinning the Pentagon’s designation, arguing Congress required the department to use the least restrictive means to protect national security, not punish a supplier.
The Pentagon, for its part, frames the dispute differently. Department officials say this has always been about the military’s ability to use technology legally, without a vendor inserting itself into the chain of command and putting warfighters at risk.
The financial stakes are enormous. Most of Anthropic’s projected $14 billion in revenue this year comes from businesses and government agencies using Claude for computer coding and other tasks, and more than 500 customers are paying Anthropic at least $1 million annually. The company’s complaint warns the actions could jeopardize hundreds of millions of dollars in revenue.
The irony of the situation has not been lost on observers: Anthropic’s models have continued to support U.S. military operations in Iran even after the company was blacklisted, and the Pentagon now has six months to phase out a product deeply embedded in its classified systems.
Despite the litigation, Anthropic says it remains committed to national security work. “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” a company spokesperson said.