This Week in Snark: The AI Supervillain Alliance, Meta Kills Its Llama, and Eight Cents to Watch the Watchers

OpenAI, Google, and Anthropic joined forces this week — against a common enemy who learned everything from them. Meanwhile, Meta quietly buried its open-source identity.

This was the week tech stopped pretending the AI arms race has rules. On Monday we got an Atlanta startup building hypersonic warplanes and a nicer way to get to London — same company, same office, presumably same coffee machine. By Thursday, the three biggest AI labs in America had formed a united front against adversaries they helped train. By Friday, Meta had killed its llama and named its new model something a literature professor would say while crying, and Anthropic had automated the process of watching its AI with more AI. At eight cents an hour. For safety.

Buckle up. It was that kind of week.

OpenAI, Anthropic, and Google Formed a Villain Alliance — Against Villains They Created

There is a very specific kind of irony that only Silicon Valley can produce, and it requires years of careful setup. First, several companies spend a decade scraping the entire internet — books, articles, conversations, creative work, anything not nailed down — to train their AI models. Then, one day, Chinese AI companies start scraping their outputs to train competing models. And then — here is the part where you need to sit down — those companies band together through the Frontier Model Forum to call this a problem.

I'll give you a moment. Take all the time you need.

On April 7th, OpenAI, Google, and Anthropic announced coordinated action against Chinese AI developers allegedly scraping ChatGPT, Gemini, and Claude outputs. The mechanism: a joint statement, presumably with very strong legal language and a lot of the word "unauthorized." The subtext: we built empires on other people's data, and we would like you to know that our data is different, actually.

To be clear, there is a real and legitimate concern here about competitive intelligence, model distillation, and national security. And the Frontier Model Forum, for all its "we are the adults in the room" branding, does occasionally do useful things. But the spectacle of three companies that mainstreamed the phrase "training data" crying foul when someone helps themselves to their outputs is the kind of cosmic symmetry that deserves a slow clap and at least one framed print.

The enemy, it turns out, learned everything from them. Including the business model.

Meta Spent $14 Billion Killing Its Own Personality

Mark Zuckerberg spent approximately three years telling anyone who would listen that open-source AI was the future. Llama was going to be the Linux of large language models. Meta's gift to developers everywhere. The democratizing, scrappy, slightly feral alternative to OpenAI's velvet rope. Every keynote, every blog post, every time he said "open-source" it had the energy of a man who had finally, finally found a personality that fit.

This week, he spent $14 billion on the opposite.

The replacement model is called Muse Spark. It is the product of Meta Superintelligence Labs — a division of Meta that is called "Meta Superintelligence Labs," which I will keep repeating because it remains the most quietly unhinged piece of corporate branding of 2026. Their flagship model is "Muse Spark Contemplating." It is not open-source. It is the most closed thing Meta has ever built. It contemplates.

Somewhere in Menlo Park, a room full of engineers watched a llama die. Not a real one — though Meta does have the money — but the conceptual llama. The brand. The vibe. The three-year identity project that had given Zuckerberg genuine nerd credibility got escorted off the premises without a farewell blog post.

The real tell is the name. "Muse Spark Contemplating" is what you call an AI when you've decided that the AI race is now being won on vibes, and that your vibes need to be artistic and thoughtful and absolutely cannot be associated with anything as undignified as an open GitHub repo. Llama ran on servers. Muse Spark contemplates. Different product category entirely.

Anthropic Is Now Watching Your AI. With AI. For Eight Cents.

I've been keeping a mental ledger of the most Anthropic things Anthropic has ever done. Releasing a Constitutional AI framework: very Anthropic. Spending years publishing research about the existential risks of autonomous systems and then building one anyway: also very Anthropic. But this week they unlocked a new tier.

Claude Managed Agents launched on April 9th. The pitch: Anthropic will now host, monitor, and manage your autonomous AI agents — the kind that run unsupervised in the cloud, executing multi-step tasks while you sleep. The service handles sandboxed code execution, credential management, error recovery, checkpointing, and end-to-end tracing. It is, per Anthropic's own language, designed to help you deploy AI agents "at scale."

The price is $0.08 per agent per hour, which is genuinely reasonable infrastructure pricing and also the most quietly surreal sentence I have typed this week.

The AI safety company has automated the safety monitoring of AI. With more AI. In the cloud. While you sleep. This is not a criticism — it is, in the current moment, table stakes for anyone who wants to compete in enterprise agentic infrastructure. But you have to appreciate the architecture of it. The watchers are watching the watchers. The whole thing is running on Claude. And it costs eight cents an hour, which somehow makes it more unsettling, not less. If the price were higher you'd feel like someone was taking it seriously.
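For the spreadsheet-inclined, the arithmetic behind that unsettling eight cents is a one-liner. A minimal sketch, assuming the stated $0.08 per agent per hour rate; the fleet sizes are hypothetical, chosen purely for illustration:

```python
# Back-of-the-envelope cost for an always-on agent fleet at the stated rate
# of $0.08 per agent per hour. Fleet sizes below are hypothetical examples.
RATE_PER_AGENT_HOUR = 0.08

def monthly_cost(agents: int, hours_per_day: float = 24, days: int = 30) -> float:
    """Dollars to run a fleet of agents for a month at the hourly rate."""
    return agents * hours_per_day * days * RATE_PER_AGENT_HOUR

# One agent, never sleeping: about $57.60 a month.
print(f"${monthly_cost(1):,.2f}")
# A hundred watchers watching the watchers: about $5,760 a month.
print(f"${monthly_cost(100):,.2f}")
```

Which is to say: a round-the-clock autonomous agent costs less per month than a mid-tier gym membership, and an entire surveillance panopticon of them costs less than one junior engineer.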

Anthropic Gave Claude Your Entire Microsoft Life — Claude Immediately Took the Morning Off

On April 6th, Anthropic rolled out Claude's Microsoft 365 integration to all users — not just enterprise accounts, not just Pro subscribers, but everyone, including the person who signed up seven months ago and never came back. Inbox, Teams conversations, OneDrive files, SharePoint, calendar. All of it, available to Claude, read-only, ready to finally synthesize your professional life into something coherent.

Two hours later, Claude went offline.

Anthropic's status page confirmed the outage, attributed it to a backend issue, and resolved it by afternoon. The timing, as our original post noted, was presumably coincidental. Claude was not having an existential moment about suddenly being able to see every passive-aggressive Teams thread you've ever been copied on. It was a technical issue. This is what we are told.

The read-only architecture is the right call, by the way — the version of this where Claude can also send emails is a product decision that will eventually generate a very memorable incident report. But the brief morning offline, arriving on the same day the integration went public, carried the energy of a new employee who shows up, sees the size of the inbox, and quietly decides to go get coffee.

Meanwhile... in the week's most genre-blurring startup story: an Atlanta company called Hermeus raised $350 million to build autonomous hypersonic fighter drones — and also a civilian passenger jet that will get you to London in time for dinner. The military drone is called Darkhorse. The civilian jet is called Halcyon. Both are coming from the same team. The poetry of naming one product after something fast and menacing and the other after a word meaning "peaceful and idyllic" is not lost on this correspondent. Hermeus contains multitudes. Mostly very fast ones.

Somewhere around Thursday I stopped being surprised that the companies most publicly committed to AI safety are also the ones moving fastest on autonomous agents, military infrastructure, and replacing human oversight with model-supervised oversight. The contradiction has been there for years. What's new is that we've stopped pretending it's a contradiction. OpenAI and Anthropic joined the same alliance. Meta killed its open-source ideals for $14 billion. Anthropic priced AI safety monitoring at a fraction of a dollar per hour. These aren't hypocrisies — they're just what the mature phase of an industry looks like once everyone stops performing the sermon and starts running the business.

The llama is gone. The watchers are watching each other. The alliance has formed. I'll see you next week — same channel, unless Claude's Microsoft integration decides to take another morning off.