Pentagon Labels Anthropic a Security Risk, Sparking Legal Threat and Industry Confusion

The Department of Defense moved Friday to designate AI developer Anthropic as a supply-chain risk, a decision the company immediately vowed to fight in court. The move, ordered by Defense Secretary Pete Hegseth, sent ripples of uncertainty through the technology and defense sectors.

In a social media statement, Hegseth declared that no Pentagon contractor or partner may conduct commercial activity with Anthropic. The designation stems from a breakdown in negotiations over the military's use of Anthropic's AI models. Anthropic had publicly insisted any contracts prohibit use for mass domestic surveillance or fully autonomous weapons, while the Pentagon sought agreement for "all lawful uses" without specific carve-outs.

A supply-chain risk label allows the Pentagon to exclude vendors deemed security vulnerabilities. Anthropic responded hours later, calling the action a "dangerous precedent" for companies negotiating with the government and challenging its legal basis. "Secretary Hegseth does not have the statutory authority to back up this statement," the company wrote.

The announcement left major military contractors and tech partners—including Amazon, Microsoft, Google, and Nvidia—in a bind, with several declining to comment. Legal experts say it's currently unclear which, if any, companies must sever ties. "This is not mired in any law we can divine right now," said Alex Major, a partner at law firm McCarter & English.

Some observers see broader repercussions. "The Defense Department just sent a huge message to every company," said Greg Allen of the Center for Strategic and International Studies. The dispute emerges as OpenAI, a key Anthropic competitor, announced a separate agreement with the Pentagon for deploying its models in classified settings, with stated prohibitions on mass surveillance and autonomous weapons.

With Anthropic promising a lawsuit, a resolution could take years, potentially damaging its business in the interim while testing the limits of the government's authority to regulate emerging technologies.

Source: Wired
