New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput

Connie Loizos
6:40 PM PDT · March 20, 2026
Anthropic submitted two sworn declarations to a California federal court late Friday afternoon, pushing back on the Pentagon’s assertion that the AI company poses an “unacceptable risk to national security” and arguing that the government’s case relies on technical misunderstandings and claims that were never actually raised during the months of negotiations that preceded the dispute.
The declarations were filed alongside Anthropic’s reply brief in its lawsuit against the Department of Defense and come ahead of a hearing this coming Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute traces back to late February, when President Trump and Defense Secretary Pete Hegseth publicly declared they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology.
The two people who submitted the declarations are Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector.
Heck is a former National Security Council official who worked at the White House under the Obama administration before moving to Stripe and then Anthropic, where she runs the company’s government relationships and policy work. She was personally present at the February 24 meeting where CEO Dario Amodei sat down with Defense Secretary Hegseth and the Pentagon’s Under Secretary Emil Michael.
In her declaration, Heck calls out what she describes as a central falsehood in the government’s filings: that Anthropic demanded some kind of approval role over military operations. That claim, she says, simply isn’t true. “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role,” she wrote.
She also claims that the Pentagon’s concern about Anthropic potentially disabling or altering its technology mid-operation was never raised during negotiations. Instead, she says, it appeared for the first time in the government’s court filings, which gave Anthropic no opportunity to respond.
Another detail in Heck’s declaration sure to draw attention is that on March 4 — the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic — Under Secretary Michael emailed Amodei to say the two sides were “very close” on the two issues the government now cites as evidence that Anthropic is a national security threat: its positions on autonomous weapons and mass surveillance of Americans.
The email, which Heck attaches as an exhibit to her declaration, is worth reading alongside what Michael said publicly in the days afterward. On March 5, Amodei published a statement saying the company had been having “productive conversations” with the Pentagon. The day after that, Michael posted on X that “there is no active Department of War negotiation with Anthropic.” A week after that, he told CNBC there was “no chance” of renewed talks.
Heck’s point appears to be: If Anthropic’s stance on those two issues is what makes it a national security threat, why was the Pentagon’s own official saying the two sides were nearly aligned on exactly those issues right after the designation was finalized? (She stops short of saying the government used the designation as a bargaining chip, but the timeline she lays out leaves the question hanging.)
Ramasamy brings a different kind of expertise to the case. Before joining Anthropic in 2025, he spent six years at Amazon Web Services managing AI deployments for government customers, including classified environments. At Anthropic, he’s credited with building the team that brought its Claude models into national security and defense settings, including the $200 million contract with the Pentagon announced last summer.
His declaration takes on the government’s claim that Anthropic could theoretically interfere with military operations by disabling the technology or otherwise altering how it behaves, which Ramasamy says isn’t technically possible. Per his telling, once Claude is deployed inside a government-secured, “air-gapped” system operated by a third-party contractor, Anthropic has no access to it; there is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any kind of “operational veto” is a fiction, he suggests, explaining that a change to the model would require the Pentagon’s explicit approval and action to install.
Anthropic, he says, can’t even see what government users are typing into the system, let alone extract that data.
Ramasamy also disputes the government’s claim that Anthropic’s hiring of foreign nationals makes the company a security risk. He notes that Anthropic employees have undergone U.S. government security clearance vetting — the same background check process required for access to classified information — adding in his declaration that “to my knowledge,” Anthropic is the only AI company where cleared personnel actually built the AI models designed to run in classified environments.
Anthropic’s lawsuit argues that the supply-chain risk designation — the first ever applied to an American company — amounts to government retaliation for the company’s publicly stated views on AI safety, in violation of the First Amendment.
The government, in a 40-page filing earlier this week, rejected that framing entirely, saying that Anthropic’s refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call and not punishment for the company’s views.