The Pentagon has threatened to designate artificial intelligence (AI) company Anthropic as a "supply chain risk" and sever ties with the firm if it refuses to lift restrictions on military use of its AI model Claude, according to multiple reports.
U.S. Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday for what sources described as an ultimatum over the terms governing Claude's use by the U.S. military, Axios reported on Monday.
"Anthropic knows this is not a get-to-know-you meeting," a senior defense official told Axios, noting that, "This is not a friendly meeting. This is a sh*t-or-get-off-the-pot meeting."
Claude is currently the only AI model authorized for use within the U.S. Defense Department's classified systems and is considered the most capable model for sensitive defense and intelligence work, according to the reports.
The Pentagon wants Claude to be available to the military for "all lawful uses" and says it is unduly restrictive to have to clear individual uses with the company.
Anthropic is willing to loosen some existing usage restrictions but has drawn firm limits on two areas: mass surveillance of Americans and the development of weapons capable of firing without human involvement.
"The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight," Pentagon spokesperson Sean Parnell said in a statement.
"Ultimately, this is about our troops and the safety of the American people," he added.
An Anthropic spokesperson said the company "is having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right."
The company said it "is committed to using frontier AI in support of US national security."
The crisis intensified after it emerged that Claude was used via Anthropic's partner Palantir Technologies in the operation to capture Venezuelan President Nicolas Maduro, The Wall Street Journal reported.
The report, citing anonymous sources, said Claude was accessed through Anthropic's partnership with Palantir, a contractor for U.S. defense and federal law enforcement agencies.
Anthropic became the first known AI developer whose technology was used in a classified operation by the U.S. Department of Defense, though it remains unclear how the tool was deployed.
The Pentagon was incensed when an Anthropic employee asked a counterpart at Palantir how Claude had been used in the raid; Anthropic said the questioning was part of a routine discussion.
"Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance," an Anthropic spokesperson told The Wall Street Journal.
If the Pentagon follows through on its threat to label Anthropic a "supply chain risk"—a designation normally reserved for foreign actors like Huawei—all Defense Department contractors would be required to either stop doing business with Anthropic or end their ties with the Pentagon.
"It will be an enormous pain in the * to disentangle, and we are going to make sure they pay a price for forcing our hand like this," an anonymous Defense Department official told Axios.
It would be a massive task to offboard Anthropic, which is deeply entrenched in classified systems, and replace it with another AI lab that currently has inferior capabilities, according to the report.
The Pentagon is seeking assurances that it can use software from Anthropic and three other major technology firms (OpenAI, Google and xAI) for "all lawful purposes."
OpenAI, Google and xAI have agreed to lift some internal safeguards if the Pentagon opts to use their AI models, though only for unclassified activities, according to the reports.
Officials described a culture clash between Hegseth's Pentagon and the Silicon Valley firm. Amodei has been vocal about the risks of AI and has positioned Anthropic as a safety-first AI leader.
"The problem with Dario is, with him, it's ideological. We know who we're dealing with," the senior Pentagon official said.
The dispute raises deeper questions about who should ultimately control AI—its creators, its users or the government—a debate not likely to be resolved anytime soon.