Anthropic PBC has been formally notified by the Pentagon that its AI products pose a risk to the U.S. supply chain, according to a senior defense official who spoke to Bloomberg.
The official confirmed that the company and its products have been classified as a supply chain risk, effective immediately, though the specifics of the notification's timing and method remain unclear.
This marks a sharp escalation in the ongoing clash between the Trump administration and Anthropic. The company has been under fire over the Pentagon's use of its AI models, and tensions, which have been building for months, stem from national security concerns and the Trump Defense Department's desire for unrestricted military use of AI.
Dario Amodei, CEO of Anthropic, spent weeks in discussions with Emil Michael, the undersecretary of defense for research and engineering, to establish a contract detailing how the Pentagon could use Anthropic's technology.
"We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government. No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court," Amodei wrote in a press release on February 27.
It is unclear what the next steps may be for both the company and the military.
U.S. Central Command reportedly used Anthropic's Claude AI during the Trump administration's major air operation against Iran, hours after the president ordered federal agencies to stop using Anthropic's technology.
The military also used Claude in the mission that captured Venezuelan President Nicolas Maduro.
Anthropic's Claude Gov platform was the only AI system capable of running in the Pentagon's classified cloud, and it has been gaining popularity in the Defense Department thanks to its user-friendly design.
Anthropic did not respond to a request for comment.
