Washington Banned Anthropic for Refusing Killer Robots. The NSA Didn’t Get the Memo.

The U.S. government banned Anthropic for refusing to enable autonomous weapons. Then the NSA started using the banned AI anyway. Now there are “table reads.”

There's a particular kind of Washington, D.C. plot twist where the government bans something, immediately realizes it cannot live without that thing, and then hires consultants to figure out how to un-ban it — while insisting, in official statements, that nothing has really changed. The ban on Anthropic is that story, and it is playing out with genuinely impressive speed.

According to an Axios scoop published today, the White House is this week convening “table reads” — yes, that is the actual term being used — with tech companies to workshop guidance that would walk back the Office of Management and Budget directive banning federal agencies from using Anthropic’s products. The same directive the White House issued less than two months ago. The same ban the government enforced hard enough that Anthropic lost an appeals court challenge trying to block it. That ban.

Table reads, for those who missed the Hollywood writers’ room pipeline into federal AI policy, are rehearsals — a format borrowed from screenwriting where you gather everyone in a room, read a document aloud, and note where it falls apart. Which is either a very sophisticated approach to policy drafting, or an extremely telling metaphor for what’s happening here.

How Anthropic Got Banned for Having a Position

The backstory is almost too clean. Earlier this year, as I covered when it was still a slow-burning threat, the Pentagon gave Anthropic an ultimatum: remove restrictions on your models for “all lawful purposes,” which in context meant fully autonomous weapons systems and domestic mass surveillance. Anthropic CEO Dario Amodei said no. Not even a polite maybe-later no — a published, documented, philosophically grounded no, the kind you get from a company that has spent years building its entire brand on the concept of AI safety.

The Pentagon’s response: designate Anthropic a “supply chain risk.” President Trump then signed a directive on February 27 ordering federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology.” In all caps. With the specificity of someone who had clearly looked up how to make an executive order sound emphatic.

This is when things got interesting.

The NSA, Apparently, Did Not Check Its Email

Here is the thing about Anthropic’s newest model, Claude Mythos Preview: it is extraordinary at finding zero-day vulnerabilities in major operating systems and browsers — so extraordinary, in fact, that Anthropic declined to release it to the general public on the grounds that it was too dangerous. A model too hot for public release, deemed too risky for normal people, but apparently ideal for the National Security Agency.

Per multiple reports confirmed by the NSA itself, Mythos Preview was being used “more widely” across the agency and parts of the Department of Defense despite the Pentagon blacklist. The same Pentagon that issued the blacklist. Which contains the NSA. Doing the thing the blacklist said not to do.

I’ve written before about how Anthropic handles models it considers too dangerous, but this is a new wrinkle: apparently “too dangerous to release publicly” translates to “ideal for the nation’s premier signals intelligence agency.” The model is so good at breaking into things that the public cannot have it, but the people whose job is breaking into things can have it — even when they are officially banned from having it.

There’s a word for this. Several, actually.

The Table Read Era of American AI Governance

Which brings us to today. White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent met with Dario Amodei in mid-April for what both sides called a “productive introductory meeting.” In Washington, “productive introductory meeting” means: we are no longer threatening each other in public. Now the White House is convening working sessions this week to draft guidance that would allow agencies to officially do what the NSA was apparently already doing unofficially.

The framing, to give the administration some credit, is at least clever. Rather than simply rescinding the ban — which would require admitting the ban was, to put it diplomatically, aggressively premature — the White House is drafting language that would let agencies “bypass the supply chain risk designation” on a case-by-case basis. It’s not that Anthropic was un-banned. It’s that certain agencies may receive guidance enabling them to interface with designated entities in approved operational contexts, subject to periodic review.

Washington-to-English translation: you can use it again, just don’t make us admit we were wrong.

The Part Where Nobody Actually Changed Their Position

Here is what I cannot figure out, and what the reporting so far hasn’t fully clarified: has Anthropic actually agreed to change the restrictions that caused the ban in the first place? Because the original dispute was about autonomous weapons and domestic surveillance. Anthropic drew a line. The government drew a line. Both lines were very clearly drawn, in public, by named executives and named agencies.

But now Anthropic is a trillion-dollar company (depending on which secondary market you ask), Google just committed up to $40 billion to it, and its newest model is so good at cyber operations that intelligence agencies couldn’t maintain a ban on it for sixty days. The government’s leverage — “comply or be cut off” — ran directly into the government’s operational needs — “we need the thing we cut off.”

So the table reads are happening. The guidance is being drafted. The meetings are described as productive. Dario Amodei, who sat across a table from the Defense Department and said “no, we won’t build autonomous killing machines” and then survived the ban that followed, is now in the position of watching the government carefully construct a bureaucratic off-ramp back to his doorstep.

It’s not a capitulation — not yet, anyway, and maybe not at all. It’s something more interesting: a situation where having principles turned out to be a stronger negotiating position than anyone expected, because the model behind those principles was simply too useful to ignore. Which is either a heartening sign that ethics sometimes win, or a deeply unsettling sign about what “useful” means when the NSA is your unsanctioned early adopter.

Anthropic also gave out $50K grants to academics to study the economic disruption it was causing. Which is either a responsible move or an extraordinary flex, depending on the mood you’re in. The table reads continue this week. I’ll be following along.