
How the Pentagon turned Claude into America's most downloaded app

Claude by Anthropic App Store listing on iPhone highlights, Stafford, United Kingdom, Aug 7, 2024 (Adobe Stock Photo)
March 01, 2026 05:22 PM GMT+03:00

The Trump administration has ordered all federal agencies to stop using technology from Anthropic, the artificial intelligence company behind the Claude AI assistant, following a weeks-long dispute over the terms of a U.S. Defense Department contract.

The directive, issued by President Donald Trump on Friday, requires a six-month phaseout of Anthropic’s products across the federal government. Hours later, rival company OpenAI confirmed it had reached a separate agreement with the Pentagon to deploy its own AI systems in classified environments.

The conflict stems from a $200 million contract Anthropic signed with the Department of Defense in July, which made the company the first AI laboratory to integrate its models into mission workflows on classified networks.

Negotiations broke down over two conditions Anthropic sought to include in the agreement: a prohibition on using its models to power fully autonomous weapons and a ban on deploying them for the mass domestic surveillance of American citizens.

The Pentagon demanded that Anthropic agree to allow its models to be used for “all lawful purposes” without restriction. Defense Secretary Pete Hegseth set a 5:01 p.m. ET deadline on Friday for Anthropic to comply.

When the company refused, Hegseth announced that Anthropic would be formally designated a “supply-chain risk to national security,” a classification typically reserved for companies from adversarial nations. The designation effectively bars U.S. military contractors and suppliers from working with the company.

Anthropic refuses to back down

Anthropic CEO Dario Amodei said the company “cannot in good conscience” accept the Pentagon’s terms.

In a public statement, he explained two main concerns: current AI technology is not reliable enough to safely run fully autonomous weapons, and mass domestic surveillance would violate fundamental rights. He also said that, to the company's knowledge, its usage restrictions have not interfered with any government mission so far.

“These threats do not change our position,” Amodei wrote. “We have tried in good faith to reach an agreement with the Pentagon over months of negotiations.”

Anthropic said it plans to challenge the supply chain risk label in court, calling it “legally unsound” and warning that it sets “a dangerous precedent for any American company that negotiates with the government.”

The company also argued that, under federal law, the designation should only apply to its work on Department of War contracts and should not limit how defense contractors use Claude for other clients.

Independent experts say this standoff is rare in Pentagon contracting. Jerry McGinn, director of the Center for the Industrial Base at the Center for Strategic and International Studies, pointed out that defense contractors usually do not negotiate with the military on how the military uses their products. “This is a very unusual, very public fight,” he said. “I think it’s reflective of the nature of AI.”

OpenAI steps in with its own Pentagon deal

OpenAI quickly stepped in after Anthropic was excluded. CEO Sam Altman announced on Friday evening that OpenAI had reached a deal with the Pentagon to provide advanced AI systems for classified use and shared the details publicly.

The agreement includes many of the same safeguards Anthropic wanted, such as bans on domestic mass surveillance and requirements for human oversight in decisions about autonomous weapons.

OpenAI said it asked the government to offer these terms to all AI companies and hopes Anthropic will be reinstated. “No, we do not think Anthropic should be designated a supply-chain risk,” the company wrote in a public FAQ.

OpenAI described its deployment as cloud-only, with cleared company personnel involved in operations, and said it retains full control over its safety systems.

The company added that its contract explicitly references existing surveillance and autonomous weapons laws, locking in current legal standards even if federal policy changes in the future.

Claude climbs to No. 1

The dispute has had a notable effect on public sentiment. Following the ban, Anthropic’s Claude app climbed to the top of Apple’s App Store rankings in the United States, surpassing OpenAI’s ChatGPT, which had held the top spot for most of February.

According to analytics firm Sensor Tower, the Claude iOS app had ranked as low as 131st in the U.S. as recently as Jan. 30. Anthropic said its free user base has grown by more than 60 percent since January, daily sign-ups have tripled since November, and paying subscribers have more than doubled over the same period.

The tech industry also reacted publicly to the news, with over 330 employees from Google and OpenAI signing an open letter in support of Anthropic’s stance.

Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology, warned that the administration’s confrontational approach could have long-term consequences for the Pentagon’s relationship with the private sector.

“I’m truly worried that private companies will say it’s not worth their time to work with the defense sector moving forward,” she said.

The outcome of Anthropic’s planned legal challenge, as well as any prospects for further negotiations between the company and the federal government, remains uncertain.
