
Anthropic vs. Trump: Lawsuit that could redraw rules of military AI

Angled close up of Anthropic name on a mobile screen with blurred American flag background, Mar. 2, 2026 (Adobe Stock Photo)
March 10, 2026 12:16 PM GMT+03:00

Artificial intelligence company Anthropic filed two federal lawsuits Monday against the Trump administration, challenging a Pentagon decision to label it a "supply chain risk." The designation prevents defense contractors from using Anthropic's technology and, according to the company, could threaten "hundreds of millions of dollars" in near-term revenue.

The lawsuits, filed in the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the District of Columbia, argue the actions are "unprecedented and unlawful." Anthropic also contends that the federal government retaliated against it for its views on artificial intelligence (AI) safety, a position the company argues is protected speech under the First Amendment.

"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic said in the filing. "Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."

The company is also seeking a temporary restraining order to continue selling to the government while the case proceeds.

How it came to this

The designation was formally issued by Defense Secretary Pete Hegseth last week, after negotiations with Anthropic over usage restrictions on its AI model, Claude, collapsed on Feb. 27.

The core dispute centers on two restrictions Anthropic sought to maintain: a ban on using Claude for fully autonomous weapons without human oversight, and a prohibition on its use for mass surveillance of Americans. The company said its AI tools are not yet reliable enough for these high-risk applications.

The U.S. Department of War rejected these conditions, insisting it must have the right to make "all lawful use" of the technology. Officials said private companies cannot decide how the government defends the country.

President Donald Trump escalated the dispute on social media by ordering all federal agencies to "immediately cease" using Anthropic's products. More than a dozen federal agencies, including the Treasury and State departments and the General Services Administration, are listed as defendants in the lawsuit.

The White House said the military must retain full flexibility over its AI tools and that no private company should be able to constrain its operations. A Department of War spokesperson declined to comment on the litigation.

The financial stakes

The supply chain risk label has historically been used to block foreign adversaries, particularly Chinese and Russian vendors, from U.S. military systems. Legal experts say applying it to a domestic American company is highly unusual and without clear precedent.

The company had signed a $200 million contract with the U.S. Department of War in July and was the first AI lab to deploy its technology across the department's classified networks. Anthropic said Claude remains the only AI model currently approved for classified use, although the Department of War has since cleared other platforms, including OpenAI's ChatGPT and Elon Musk's xAI, for use in classified systems.

OpenAI struck a deal with the Department of War shortly after the decision against Anthropic was announced, agreeing to measures to ensure its technology would not be used for mass domestic surveillance or to direct autonomous weapons.

Tech giants push back

The legal fight could set a significant precedent for how AI companies negotiate with the federal government. A coalition of major technology industry groups, including representatives of Apple, Google, Nvidia, Microsoft, Meta, IBM, Salesforce, and Oracle, urged the Trump administration to reconsider the designation, warning it could have a "chilling effect" on U.S. innovation by treating a domestic company as an adversary.

Despite the legal challenge, Anthropic said it does not want to fight the U.S. government and that a settlement remains possible. CEO Dario Amodei said last week that "productive conversations" with the Department of War were ongoing, though a Department of War official said there were no active negotiations.

Anthropic clarified that the designation has "a narrow scope," affecting only contractors using Claude directly in work for the Department of War. The company said the "vast majority" of its customers, more than 500 of whom pay at least $1 million a year for Claude, would be unaffected. Major cloud partners Amazon, Microsoft, and Google confirmed they would continue offering Anthropic's tools for non-defense work.

The company projects $14 billion in total revenue, with most coming from businesses and government agencies using Claude for tasks such as coding and data analysis. Anthropic was most recently valued at $380 billion.

U.S. military officials have mostly used Claude through third-party tools, such as Palantir's Maven Smart System, for tasks such as drafting documents and identifying targets, according to sources. If the designation is upheld, defense contractors like Palantir would be required to replace Claude with alternatives, a process Hegseth said could take up to six months.
