
Google signs secret AI deal with US military

An AI-generated composite illustration shows the Pentagon and a Google office building. (Photo generated by Gemini)
April 28, 2026 03:38 PM GMT+03:00

Google has signed a classified artificial intelligence agreement with the U.S. Department of War, joining a growing number of technology companies that supply AI models for sensitive military operations.

The agreement lets the Pentagon use Google's AI for "any lawful government purpose," according to media reports. OpenAI and Elon Musk's xAI have made similar deals to provide AI models for classified use.

Classified networks are used for many sensitive tasks, such as mission planning and weapons targeting.

The Pentagon in Washington, U.S., is seen from aboard Air Force One, March 29, 2018. (Reuters Photo)

Pentagon contracts and AI expansion

The Department of War had already signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI, and Google.

The Pentagon says it does not want to use AI for mass surveillance of Americans or to create weapons that work without human involvement. However, it wants to allow "any lawful use" of AI on both classified and unclassified networks.

Pentagon Chief Technology Officer Emil Michael told tech executives at a White House event in February that the military wants to make AI models available on both unclassified and classified domains, according to two people familiar with the matter.

"We believe that providing application programming interface (API) access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security," a Google spokesperson told Reuters.

A technician uses a tablet while inspecting server racks at a data center. (Adobe Stock Photo)

Safety language and limits on oversight

Google's agreement requires the company to adjust its AI safety settings and filters at the government's request.

The contract includes language stating that the AI system "is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control," the reports stated.

However, the agreement also states that Google does not have the right to control or veto lawful government decisions regarding operations.

Google said it works with government agencies on both classified and non-classified projects.

A company spokesperson said Google is still committed to the idea that AI should not be used for domestic mass surveillance or autonomous weapons without proper human oversight.

The Department of War declined to comment on the matter.

The Google logo is displayed on the exterior of a company building. (Adobe Stock Photo)

Employee pushback

The deal has faced internal opposition. More than 600 Google employees, many from its DeepMind AI lab, signed an open letter to Chief Executive Sundar Pichai, asking him to refuse any Pentagon agreement that allows classified use of the company's AI.

"We want to see AI benefit humanity, not used in inhumane or extremely harmful ways," the employees wrote in the letter, citing concerns over lethal autonomous weapons and mass surveillance.

The employees argued that classified deals would prevent Google from knowing how its technology is used, and said that refusing such work is the only way to ensure the technology is not linked to harm.

Google did not respond to a request for comment on the letter.

The Claude logo is displayed on a smartphone screen with the Anthropic wordmark in the background. (Adobe Stock Photo)

Anthropic precedent

The dispute at Google follows a more contentious standoff involving Anthropic, the company behind the Claude AI chatbot.

The Pentagon labeled the startup a supply-chain risk in March after it refused to remove restrictions on autonomous weapons and domestic surveillance from its contract.

U.S. President Donald Trump ordered all federal agencies to halt the use of Anthropic's technology in February. Anthropic subsequently filed two lawsuits against the U.S. government, alleging unlawful retaliation over its AI safety positions.

In 2025, Google changed its AI use policies and removed earlier limits on using AI for weapons and surveillance.

The company has been pursuing more military contracts in recent years, a reversal of its 2018 position, when it declined to renew a Pentagon contract after employees protested work on drone imagery recognition.
