OpenAI robotics chief quits over Pentagon AI deal

Close-Up of OpenAI Logo on Website, Aug 14th, 2025 (Adobe Stock Photo)
March 08, 2026 04:36 PM GMT+03:00

OpenAI’s head of robotics resigned on Saturday, citing concerns that the company’s agreement with the United States Department of War was rushed and lacked necessary safeguards for deploying artificial intelligence in military settings.

Caitlin Kalinowski, who joined OpenAI in November 2024 after leading augmented reality development at Meta, announced her departure on X.

She oversaw OpenAI’s robotics division and consumer hardware initiatives, including a project to train a robotic arm for household tasks at the San Francisco lab.

“AI has an important role in national security,” Kalinowski wrote. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

“This was about principle, not people,” she wrote, emphasizing in a follow-up post that her primary concern was procedural.

“To be clear, my issue is that the announcement was rushed without the guardrails defined. It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.”

The Pentagon deal and its origins

OpenAI’s contract with the Department of War, announced in late February, grants the military access to its AI models on classified cloud networks. The agreement enables the technology to support cybersecurity, intelligence analysis, and logistics.

The agreement followed Anthropic’s refusal to accept similar terms just hours earlier. Anthropic requested two conditions: a ban on using its models for fully autonomous weapons and a prohibition on mass domestic surveillance of American citizens.

The Pentagon required unconditional use of the technology for “all lawful purposes.”

When Anthropic declined, Defense Secretary Pete Hegseth designated the company a supply-chain risk to national security, a classification usually reserved for firms from adversarial nations.

OpenAI acted quickly to secure the agreement. CEO Sam Altman confirmed the deal and stated that technical safeguards and contract language were included to prevent domestic mass surveillance and the development of autonomous weapons. The company also requested that the government apply similar terms to all AI companies.

Altman later admitted the rollout was poorly managed. He told employees the announcement appeared “opportunistic” and that the company should not have rushed to publicize it.

Dissent inside and outside OpenAI

Kalinowski’s departure is the most senior public act of dissent since the deal’s announcement, but not the only one. Some OpenAI employees expressed concern internally about accepting terms that Anthropic had rejected. Additionally, more than 330 employees from OpenAI and Google signed an open letter in support of Anthropic’s position.

Public reaction has been significant. Downloads of Anthropic’s Claude app increased by 295% in the days after the announcement, and the app rose to No. 1 on Apple’s App Store in the United States, replacing ChatGPT, which had led for most of February.

Anthropic reported its free user base grew by over 60% since January, daily sign-ups tripled since November, and paying subscribers more than doubled in the same period.

What is at stake?

The dispute raises important questions beyond corporate rivalry. Central to the debate is whether AI companies should set ethical limits on military use of their technology and who should have the authority to define those limits.

Kalinowski’s concerns highlight two risks that AI governance experts have long identified. The first is lethal autonomy, where AI systems could make life-or-death decisions in combat without meaningful human oversight. The second is domestic surveillance, which involves using AI to monitor civilians at scale without judicial authorization.

OpenAI says both are prohibited under its current contract. The company reiterated on Saturday that its “red lines” remain in place.

“We recognize that people have strong views about these issues, and we will continue to engage in discussion with employees, government, civil society, and communities around the world,” a company spokesperson said.

Whether these assurances are sufficient and whether the supporting governance structures are robust remain the central questions raised by Kalinowski’s resignation.
