The Pentagon’s AI Ultimatum: What It Means for Privacy


HYDRA Has a Server Farm: AI, the Pentagon, and What We Should All Be Watching

There is a scene near the end of Captain America: The Winter Soldier where Steve Rogers stands in front of S.H.I.E.L.D. headquarters and tells a room full of agents that the organization they have served has been secretly building a system designed to eliminate threats before they happen: not crimes that were committed, but people identified by an algorithm as likely future problems. Three massive helicarriers, locked and loaded, pointed at citizens. The twist, of course, is that HYDRA had been inside S.H.I.E.L.D. the whole time, slowly reshaping it from within.

I thought about that movie a lot this week.

If you have not been following the story unfolding between the Trump administration and Anthropic (the company that makes Claude, the AI tool many of us use in our classrooms), here is what happened, and why educators who care about constitutional rights and the future our students are inheriting should pay close attention.

The Background

In July 2025, the Pentagon awarded contracts worth up to $200 million each to four AI companies: Anthropic, OpenAI, Google DeepMind, and Elon Musk’s xAI. The goal was to accelerate what the Department of Defense was calling an “AI-first” military transformation. Of those four companies, Anthropic’s Claude was the only AI model cleared for use on classified military networks, making it the most deeply embedded in sensitive national security operations. That distinction would become the center of everything that followed.

Then, on January 3, 2026, U.S. special operations forces conducted a raid in Caracas, Venezuela, capturing President Nicolás Maduro. It was a stunning and controversial operation. Reports later confirmed that Claude was used during that mission, deployed through Anthropic's partnership with the data-analytics firm Palantir Technologies. Here is the part that tells you something important: Anthropic found out from the press. An employee reportedly reached out to a Palantir contact to ask how their model had been used. The government had not told them.

The Ultimatum

Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday, February 24, and delivered an ultimatum: give the Defense Department full, unrestricted access to Claude for “all lawful purposes” by 5:01 PM Friday, or face consequences. The Pentagon wanted two specific guardrails removed: the prohibition on using Claude in fully autonomous lethal weapons, and the prohibition on using it for mass domestic surveillance of American citizens.

Amodei said no.

His statement was direct: “We cannot in good conscience accede to their request.” He added that the company had “tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above.”

The Pentagon’s response escalated immediately. Hegseth accused Anthropic of trying to “seize veto power over the operational decisions of the United States military” and said the company had “delivered a master class in arrogance and betrayal.” Trump posted on Truth Social calling Anthropic “radical left, woke” and ordered every federal agency to immediately cease all use of Anthropic’s technology. Hegseth then designated Anthropic a “supply chain risk to national security,” a label previously reserved for companies considered extensions of foreign adversaries, and announced that any defense contractor doing business with the U.S. military was now barred from working with Anthropic.

As Gene Hackman's paranoid ex-NSA operative in Enemy of the State might put it: when the government decides you are a threat, the full weight of the apparatus comes down fast.


Interview with CBS News, February 27, 2026

The Fourth Amendment Problem Nobody Is Talking About Enough

Here is where the story gets less like a political skirmish and more like the kind of constitutional question we should be teaching our students to sit with seriously.

Amodei's concern about surveillance is not abstract hand-wringing. It is grounded in a specific and troubling gap in how American law was written. Under current legal frameworks, the government can purchase enormous amounts of data about American citizens from commercial sources without a warrant: browsing history, location data, purchasing records, social associations. Individually, each piece of that data is considered innocuous and does not trigger Fourth Amendment protections under the "third-party doctrine," the legal principle that says you have no reasonable expectation of privacy in information you have voluntarily shared with others.

The problem is that AI changes the math entirely. As Amodei wrote in Anthropic’s official statement, powerful AI makes it possible to “assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life, automatically and at massive scale.” What was once practically impossible (building a comprehensive profile of every American) becomes trivially easy. The Fourth Amendment, written when a “search” meant a constable physically rifling through your papers, simply did not anticipate this. Amodei has argued that the law has not caught up with what AI can now do and that new legislation, or possibly a constitutional amendment, may be necessary to close that gap before the infrastructure for a surveillance state is built and normalized.
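To make the aggregation point concrete, here is a toy sketch. Everything in it is invented for illustration (the record fields, the sources, the identifier), but it shows the mechanism: records that are each innocuous on their own collapse into a single dossier the moment they share an identifier, and the code required to do it is almost nothing.

```python
from collections import defaultdict

# Each "source" stands in for a commercially purchasable dataset:
# individually innocuous records keyed by a shared identifier
# (here, a hypothetical phone number).
location_pings = [("555-0100", "loc", "gym, 6:12 AM"),
                  ("555-0100", "loc", "clinic, 2:40 PM")]
purchases      = [("555-0100", "buy", "pregnancy test")]
browsing       = [("555-0100", "web", "unionorganizing.org")]

def build_profiles(*sources):
    """Merge any number of record streams into per-person dossiers."""
    profiles = defaultdict(list)
    for source in sources:
        for person_id, kind, detail in source:
            profiles[person_id].append((kind, detail))
    return dict(profiles)

profiles = build_profiles(location_pings, purchases, browsing)
print(profiles["555-0100"])
```

The unsettling part is how unsophisticated this is: a few lines of joins produce a profile. What AI changes, as the statement above argues, is the last practical barrier, running this kind of assembly for every identifier at once, and interpreting the results, rather than for one suspect a human analyst has already flagged.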

This is not a science fiction scenario. In Enemy of the State, Will Smith's character spends the entire film trying to outrun a surveillance apparatus that can track him through cameras, financial records, phone signals, and satellite imagery. When that film came out in 1998, it felt like a paranoid thriller. In 2026, the technology it depicted is not only real but routine. And AI means it scales to every person simultaneously, not just to targets the government has already identified.

The Winter Soldier parallel is equally apt. HYDRA’s Project Insight was not just about killing bad people; it was about identifying and eliminating anyone whose future actions might pose a threat to order, as determined by an algorithm trained on the government’s own definitions of danger. The horror of the film is not the helicarriers. It is that the system had already decided who deserved to die before anyone pulled a trigger.

The Precision of the Two Red Lines

It is worth being precise about what Amodei actually refused, because the administration’s framing obscured it. His objection to autonomous weapons is not categorical. He told CBS News clearly: “We are not categorically against fully autonomous weapons. We simply believe that the reliability is not there yet, and that we need to have a conversation about oversight.” He offered to work with the Pentagon to prototype and test these systems in a controlled environment. The Pentagon declined, insisting on unrestricted access from the start.

His objection to mass domestic surveillance, by contrast, is categorical, and he frames it in explicitly constitutional terms. “Domestic mass surveillance does not help the U.S. catch up with its adversaries,” he said. “Domestic mass surveillance is an abuse of the government’s authority, even where it is technically legal.”

The Pentagon’s position, as stated publicly by spokesman Sean Parnell, is that the department has “no interest in mass surveillance of Americans” and that “legality is the Pentagon’s responsibility as the end user.” In other words: trust us, we will only do legal things. The disagreement is fundamentally about whether a private company has any standing to build restrictions into a government contract, or whether the government’s self-certification of lawfulness is sufficient protection for citizens.


Interview with 60 Minutes, November 16, 2025

What the Other Companies Actually Did

Here is a correction to a lot of the social media reporting circulating this week: the framing that “every other AI company gave the Pentagon what it wanted” is misleading. Hours after Trump’s announcement, OpenAI CEO Sam Altman announced a new deal with the Pentagon for classified network access; but he also stated explicitly that the same two red lines Anthropic held were enshrined in OpenAI’s agreement. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote, adding that the Department of Defense “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

This raises an obvious and uncomfortable question: if OpenAI negotiated the same protections and the Pentagon accepted them, why did Anthropic face a federal blacklist for asking for the same thing? Senator Mark Warner, vice chair of the Senate Select Committee on Intelligence, raised this directly, noting that the administration’s actions raise “serious concerns about whether national security decisions are being driven by careful analysis or political considerations,” and suggesting the dispute may amount to steering contracts toward a preferred vendor.

That is a serious allegation. It deserves the same careful attention we would ask our students to bring to any claim about government motivation.

The Defense Production Act and an Unprecedented Threat

The Pentagon’s most dramatic threat was invoking the Defense Production Act, a Korean War-era law designed to compel factories to shift production during national emergencies. Legal experts across the political spectrum noted that using it to force a software company to delete safety restrictions from its code would be unprecedented and likely unconstitutional. Senators Elizabeth Warren and Andy Kim argued the move would “shatter the bipartisan consensus in support of a strong DPA,” weakening American manufacturing competitiveness. In the end, the administration settled for the supply chain risk designation and the government-wide ban rather than invoking the DPA directly; but the threat alone sent a clear message to every AI company watching.

What This Means for Classrooms

I want to be transparent: I use Claude regularly, for lesson planning, curriculum development, and occasionally as a model for students learning to work with AI tools responsibly. The consumer product we use is not affected by this dispute; these negotiations concern classified military and intelligence applications, not educational platforms.

But the underlying questions are absolutely relevant to what we teach. When we cover the Fourth Amendment, we are now teaching a provision that was written for a world where surveillance required enormous resources and human effort to execute. AI makes mass surveillance cheap, fast, and scalable in ways the framers could not have imagined. When we cover separation of powers, we can point students to a live, real-time case where a congressional mandate (the FY2026 NDAA requires ethical standards frameworks for DoD AI) is being tested against executive branch pressure to move faster than those frameworks allow. When we cover corporate power and government authority, this story gives students a genuine, unresolved case study about who gets to set limits on what the most powerful tools in human history can do.

The most useful lesson from The Winter Soldier is not that HYDRA existed. It is that the structures designed to protect people can be hollowed out gradually, with each individual compromise seeming defensible in isolation. Steve Rogers does not defeat HYDRA by being stronger. He does it by refusing to accept the premise that the mission justifies removing the guardrails.

Our students will inherit whatever precedent gets set here. They should understand what is being argued, what is at stake, and why the people drawing these lines believe those lines matter.

That is exactly the kind of conversation worth having in any classroom that takes civics seriously.

