Artificial intelligence company Anthropic has filed a lawsuit against the United States Department of Defense, challenging the government’s decision to designate the company a risk to the US supply chain following a dispute over the use of its AI technology.
The San Francisco-based firm submitted the complaint on Monday in a federal court in California, arguing that the government’s actions were unprecedented and unlawful. The company claims the designation effectively pushed federal agencies to stop using its technology and shift their artificial intelligence work to other providers.
Anthropic said the move resembles restrictions usually imposed on companies from nations that the United States considers strategic rivals.
Company Says Government Is Punishing It for Protected Speech
In its complaint filed in the US District Court for the Northern District of California, Anthropic argued that the federal government was improperly using its authority to penalise the company.
“These actions are unprecedented and unlawful,” the company stated in the filing. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”
The legal challenge follows a formal notice issued by the Pentagon last week informing the company of the risk designation.
Anthropic Chief Executive Officer Dario Amodei said the government’s decision lacked legal justification and left the company no choice but to take legal action.
Dispute Over How the Pentagon Could Use Claude AI
The conflict between Anthropic and the Pentagon began last month when the US military sought broader access to the company’s AI systems.
The Pentagon reportedly wanted to use Claude, Anthropic’s chatbot, for any purpose allowed under US law without additional restrictions imposed by the company.
However, Anthropic insisted on certain safeguards for its technology. The company said its AI should not be used for mass surveillance of American citizens or for fully autonomous weapons systems.
These restrictions triggered tensions with the Defense Department, eventually leading to the government’s decision to bar the company from certain federal activities.
Pentagon Ordered Contractors to Cut Ties
Following the dispute, US Defense Secretary Pete Hegseth ordered the Pentagon on February 27 to bar contractors and their partners from engaging in commercial activity with Anthropic.
In a statement posted on the social media platform X, Hegseth said the department would give Anthropic six months to transfer its AI services to another provider.
The lawsuit names the Defense Department — which the Donald Trump administration refers to as the “Department of War” — along with more than a dozen other federal agencies as defendants.
The Defense Department has not yet issued an official response to the legal challenge.
Trump Criticised Anthropic’s Position
On the same day the Pentagon announced its decision, President Donald Trump sharply criticised the AI company.
Posting on his social media platform Truth Social, Trump accused the firm of attempting to pressure the government.
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War,” Trump wrote, adding that federal agencies should stop using the company’s AI tools.
Public Support Boosts Demand for Claude
Despite the government’s actions, Anthropic said its chatbot Claude saw a surge in popularity in the days following the risk designation.
According to the company, consumers and businesses drove “unprecedented demand” for the chatbot as a show of support for the company’s position on AI safeguards.
Anthropic was founded in 2021 by former employees of OpenAI and has quickly emerged as one of the leading competitors to the maker of ChatGPT.
Today, the company serves more than 300,000 business customers, with many using its AI systems for workplace automation and software development.
Its coding assistant Claude Code has become particularly popular among developers and businesses seeking AI tools to streamline programming tasks.
OpenAI Expands Pentagon Partnership
While Anthropic battles the government in court, rival AI company OpenAI has moved closer to the US military.
OpenAI recently announced an agreement allowing the Pentagon to deploy its AI models within classified government networks.
OpenAI CEO Sam Altman later said the company was working with the Defense Department to introduce additional safeguards around surveillance uses of AI technology.
Legal Case
The case, Anthropic v. US Department of War (26-cv-01996), is being heard in the US District Court for the Northern District of California in San Francisco.
The lawsuit could become a landmark legal battle over how governments can regulate or control private AI companies, especially when national security and technological safeguards collide.