Elon Musk Called Anthropic Evil — Then Leased Them His Entire Supercomputer

In February, Musk called Anthropic “misanthropic and evil.” In May, he handed them 220,000 GPUs, cleared them on his evil detector, and signed a receipt.

I just encountered the most Silicon Valley sentence of 2026: “No one set off my evil detector.”

That’s Elon Musk. On X. About Anthropic — the company he publicly declared “misanthropic and evil” roughly ninety days ago. The company he said “hates Western civilization.” The company he called hypocritical, which, in an industry where hypocrisy is essentially the default operating mode, was meant as a devastating insult.

That same company just signed a deal with Elon Musk’s SpaceX to use all of Colossus 1 — the Memphis supercomputer with more than 220,000 Nvidia GPUs. All of them. 300 megawatts of power. For Anthropic. Claude. The misanthropic, civilization-hating AI that now runs on Elon Musk’s hardware, courtesy of Elon Musk, who says he met with the team and came away impressed.

I’ve been covering tech absurdism long enough that very little stops me mid-sentence. This stopped me mid-sentence.

A Brief History of Musk vs. Anthropic (Runtime: Q1 2026)

Let’s do a quick recap. In February, Musk went on X and wrote that Anthropic “hates Western civilization.” He questioned whether there was “a more hypocritical company than Anthropic.” He called the company’s AI “misanthropic and evil.” These are not light criticisms. These are the kinds of things you say right before you never do business with someone.

Unless you’re in tech. In which case these are the kinds of things you say right before you become their compute provider.

What happened in between? Musk spent time with senior Anthropic team members last week. He came away “impressed.” “Everyone I met was highly competent and cared a great deal about doing the right thing,” he wrote. “No one set off my evil detector.” He concluded: “So long as they engage in critical self-examination, Claude will probably be good.”

I want to be precise about the timeline here. In February: civilization-destroyer. In May: probably fine, good vibes, competent people. That’s three months. From “hates Western civilization” to “cleared by evil detector” — in a single fiscal quarter. I have seen many things in this industry, but I have never seen a redemption arc with this turnaround speed and this much GPU throughput at the end of it.

In fairness to Musk, people change their minds. Also, 300 megawatts doesn’t lease itself.

The Business Logic Is Actually Kind of Elegant (Don’t Tell Anyone)

Here’s what makes this deal less insane than it sounds — and somehow more insane once you understand it.

Earlier this year, SpaceX and xAI merged. This made Colossus 1 a SpaceX asset. xAI is already building Colossus 2, a newer, bigger facility. Which means Colossus 1 — 220,000 GPUs, featuring H100s, H200s, and the newer GB200 accelerators, the machine Musk himself called “the best AI computer” — is sitting there, partially idle, as xAI’s development shifts to its successor.

Anthropic, meanwhile, is sprinting toward an IPO expected next month. It just surpassed OpenAI in annualized revenue — $30 billion run-rate to OpenAI’s $24 billion — and its demand for compute is growing faster than its existing contracts can handle. Claude Code just became a critical enterprise product. The rate limits are the bottleneck. The bottleneck costs customers.

So: one company has compute it isn’t fully using. One company desperately needs compute. The companies also share a common rival in OpenAI and Sam Altman, with whom Musk has a long-running and very public dispute. The deal lets Musk earn revenue from Colossus 1 before SpaceX’s own IPO. It accelerates Anthropic’s capacity without a new data center build. And — this part appears to be a bonus — it irritates Altman.

It’s almost clean. If you ignore the part where it started with “hates Western civilization.”

220,000 GPUs in Memphis. And Then, Possibly, Space.

The terms are worth sitting with for a moment. Anthropic isn’t renting a slice of Colossus 1. They’re getting all of it. The entire facility. 300 megawatts of Memphis real estate, now occupied by the company its new landlord called civilization-averse less than three months ago.

There’s also a rider. Anthropic has “expressed interest” in developing gigawatts of compute capacity in space. With SpaceX. SpaceX — the rocket company, the actual one that launches things into orbit. I want to be precise: this is a real sentence from a real business agreement. The company whose AI helps you write emails is now interested in orbital data centers. With the company whose owner spent February calling them evil.

I’ve written about the Mac Minis and vibes era of AI infrastructure. We have entered something different. We are now in the “220,000 GPUs and also maybe space” era. Adjust your expectations accordingly.

“No One Set Off My Evil Detector” and Other Reassuring Clauses

I keep returning to that quote. Not because it’s unprecedented — Musk has a long history of maximalist public opinions followed by pragmatic reversals — but because of the specificity. Evil detector. That’s the instrument. That’s what cleared Anthropic.

To be clear: I’m not a contract lawyer, but I feel reasonably confident that “no one set off my evil detector” is not a standard clause in compute partnership agreements. It’s also not the line you lead with after calling a company misanthropic. It’s the line that suggests Musk arrived at an Anthropic meeting, spent several days with the team, found no one particularly evil, and proceeded to hand them the largest cluster of AI hardware his company currently owns.

This is, genuinely, how some of the biggest deals get made. I’ve covered Anthropic’s enterprise pivot with some skepticism — the company built to save humanity has spent considerable time optimizing for CFO workflows — but I will give them this: they appear to be extraordinarily good at hosting meetings.

What Claude Users Actually Get

If you use Claude Code or the API, the deal with the alleged civilization-haters has concrete outcomes. Anthropic is doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and Enterprise plans. The peak-hours throttling on Claude Code is gone for Pro and Max users. API rate limits for Claude Opus models are going up “considerably.”
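Higher limits or not, the practical advice for API users stays the same: handle 429s gracefully. Here’s a minimal retry-with-exponential-backoff sketch — `RateLimited` is a placeholder for whatever exception your client raises on a rate-limit response (e.g. the SDK’s rate-limit error), and the flaky-call setup is purely illustrative, not a real Anthropic SDK method:

```python
import random
import time


class RateLimited(Exception):
    """Placeholder for the SDK's rate-limit (HTTP 429) exception."""


def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on RateLimited with exponential backoff.

    Delays grow 1x, 2x, 4x, ... of base_delay, plus random jitter so
    a fleet of clients doesn't retry in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # out of retries; let the caller deal with it
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

With the doubled limits you should land in the retry path less often — but “less often” is not “never,” especially at peak hours.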

So the philosophical journey of Elon Musk’s opinion of Dario Amodei’s company has resulted in: more tokens per hour for you, the person who needed help with that pull request this morning. That might actually be the most practical outcome of any intra-industry beef resolution in recent tech history. Every major lab is racing to staff your office with AI agents, which means compute availability is no longer a technical footnote — it’s a competitive moat. Anthropic just deepened theirs considerably, through the unlikely medium of Elon Musk’s change of heart.

The Twist This Industry Never Runs Out Of

The most Silicon Valley part of this entire story isn’t the reversal, or the supercomputer, or the space data center sidebar. It’s the underlying logic that made the deal possible: two companies that share a common enemy finding common ground on a nine-figure compute contract. The “enemy of my enemy is my landlord” doctrine, now apparently standard operating procedure.

OpenAI has 900 million weekly active ChatGPT users. Anthropic just surpassed them in ARR. Musk’s xAI is competing with both. And yet the most consequential AI infrastructure deal of May 2026 happened because Musk spent a week with Anthropic’s team and nobody triggered the detector. Funding language has lost all meaning. Apparently, so has sworn enmity.

I ran my own analysis. No one at this publication set off my evil detector either. Make of that what you will.