By: Emily Nixon
On March 9, Anthropic, an artificial intelligence (AI) company, filed a lawsuit against multiple branches of the United States (U.S.) government after the Pentagon labeled the company a “supply chain risk,” according to an X post by the Secretary of Defense (War), Pete Hegseth.
Anthropic has worked with government departments since a 2024 partnership deal between Anthropic and Palantir, a data analytics company, which was restructured into a direct contract with Anthropic in mid-2025. Anthropic was also the first AI company to hold a federal contract in the U.S.
“Cloaked in the sanctimonious rhetoric of ‘effective altruism,’ [Anthropic and its CEO] have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives,” stated the post from Hegseth. “Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military.”
The phrase “strong-arm” in the X post refers to Anthropic’s refusal to remove two safeguards from its previously agreed-upon contract with the U.S. government: a prohibition on mass domestic surveillance and a prohibition on fully autonomous weaponry using its systems.
“Using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties,” stated Anthropic in a press release on Feb. 26. “Frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk […] In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”
In addition to the label, the Pentagon has effectively blacklisted the company and begun a six-month transition period to remove Anthropic’s AI system, Claude, from its systems, according to Hegseth’s post.
“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security,” stated the post. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
U.S. President Donald Trump released multiple posts on Truth Social condemning the company for its refusal to give in to the Department of Defense’s demands.
“The Left-wing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” stated one post.
“Anthropic better get their act together, and be helpful during this phase-out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow. WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
The designation is projected to be a substantial blow to the company’s finances, with both immediate and future effects, according to Reuters.
“Expect the immediate loss of more than $150 million in annual recurring revenue tied to existing and expected Defense Department contracts. […] If defense contractors cut ties, the firm’s expected public sector annual recurring revenue of more than half a billion dollars in 2026 could ‘shrink substantially or disappear altogether,’” Anthropic’s Head of Public Sector Thiyagu Ramasamy told Reuters.
“The government’s actions immediately and irreparably harm Anthropic. The designation also impugns Anthropic’s integrity and reputation as a trusted partner, having a real but incalculable effect on sales to non-governmental customers.”
Anthropic is the first U.S. company to be designated a supply-chain risk by the Pentagon. The label is usually reserved for U.S. adversaries, such as multiple Chinese companies dealing in technology, manufacturing, and shipping, among other industries.
The designation comes from a “narrow” statute, 10 U.S.C. § 3252, meant to “protect the government rather than to punish a supplier,” according to the press release from Anthropic on March 5.
The Department of War has stated it will only contract with AI companies that accede to “any lawful use” of their models and remove safeguards in the cases mentioned above. “They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal,” stated a press release on Feb. 26 from Anthropic. “These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Anthropic has stated that the move is meant to intimidate the company, as well as other companies vying for federal contracts in the future.
“We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government. No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court,” stated the company’s press release from Feb. 27.
However the case is settled, the lawsuit will have historic consequences, according to Reuters.
“The fight is seen as a test of the administration’s power over business and whether the government or companies that make AI have the last word on its use.”