AI for Business

Google Staff Push Back on Pentagon AI Deal, Citing Ethical Risks


More than 600 Google employees have signed a letter to CEO Sundar Pichai, urging the company to decline classified military contracts involving its AI models. The petition, first reported by The Washington Post, draws heavily from staff at Google’s DeepMind lab and includes over 20 principals, directors, and vice presidents.

The letter follows a report from The Information that Google is negotiating with the Pentagon to deploy its Gemini AI in classified settings. Employees argue that accepting such work would make it impossible to prevent harmful uses of the technology. “The only way to guarantee that Google does not become associated with such harms is to reject any classified workloads,” the letter states. “Otherwise, such uses may occur without our knowledge or the power to stop them.”

The internal pushback mirrors a broader industry tension. Anthropic is currently locked in a legal dispute with the Pentagon over being labeled a supply chain risk after it refused to loosen AI safety guardrails for military use—a stance that drew support from Google staff and others. Meanwhile, Microsoft already provides AI services in classified environments, and OpenAI revised its own Pentagon agreement in February.

For Google, the decision carries reputational weight. The employee letter signals that a significant faction within the company sees classified military AI as a red line—one that could define the company’s ethical posture for years to come.

Source: The Verge
