Inside the Letter Shaking Google's AI Division
A significant internal revolt is brewing at Google. More than 600 employees — including over 20 principals, directors, and vice presidents — have signed an open letter to CEO Sundar Pichai demanding that the company refuse to allow the Pentagon to use its AI models for classified military applications.
First reported by The Washington Post, the letter warns that allowing classified military workloads would create a situation where harmful uses could occur "without our knowledge or the power to stop them." Its signatories argue that the only way Google can guarantee it isn't implicated in such harms is to reject classified contracts outright.
The Stakes for AI Ethics in Silicon Valley
The push comes at a pivotal moment in Big Tech's relationship with the US military. Google famously declined to renew its Project Maven contract in 2018 after a similar employee revolt; that project used AI to analyse drone footage for the Pentagon. The company later updated its AI principles to restrict weapons applications, but critics say those guardrails have gradually eroded as defence contracts have become more lucrative.
The current letter reflects a renewed anxiety among researchers and engineers who fear that AI capabilities have advanced so dramatically that the potential for harm is orders of magnitude greater than it was even a few years ago. Large language models and multimodal AI systems, unlike narrow image-classification tools, can be adapted to a vast range of tasks — including ones their original developers never intended or sanctioned.
Anthropic Also Caught in the Crossfire
Google isn't alone in facing scrutiny over military AI. Anthropic — maker of the Claude AI assistant — is currently involved in a legal dispute with the Pentagon over similar questions about the scope of classified use. The legal battle underscores how the entire AI industry is being pulled toward national security applications even as many of the engineers who build these systems express discomfort with that direction.
What Google Has to Say
At the time of reporting, Google had not publicly responded to the letter's demands. The company has previously defended its work with government clients as consistent with its AI principles, arguing that it applies careful review processes before entering defence-related agreements.
But for the hundreds of employees who signed on, that review process clearly does not feel sufficient. Many of them work at DeepMind, the AI research lab Google acquired in 2014, which has historically maintained a culture of academic openness and ethical caution. Seeing the lab's resources potentially funnelled into classified, weapons-adjacent work is, for a significant number of researchers, a line they are unwilling to cross quietly.
A Broader Question for the Industry
The Google letter adds to a growing body of evidence that the AI industry's internal ethics debates are far from settled — and that employee voice still carries weight, at least enough to make headlines. Whether Pichai and Google's leadership will take meaningful action remains to be seen, but the pushback signals that the tension between Silicon Valley's commercial ambitions and its workforce's values is very much alive.
As governments race to integrate AI into defence and intelligence infrastructure, expect this debate to intensify — inside tech companies, in courtrooms, and in public discourse.
Source: The Verge / The Washington Post
