OpenAI, Anthropic, and Google Just Formed an Alliance — The Enemy Learned Everything From Them

Three companies that built empires on other people’s data just united against someone using theirs.

There's a moment in every corporate rivalry where the two warring parties suddenly discover they have a common enemy. In movies, this moment involves dramatic music and a handshake over a body. In real life — specifically in Silicon Valley on April 7, 2026 — it involves a joint statement, a lot of lawyers, and the phrase "adversarial distillation."

OpenAI, Google, and Anthropic, three companies that have spent the better part of three years trying to destroy each other, announced they are coordinating through the Frontier Model Forum to combat Chinese AI companies allegedly scraping outputs from ChatGPT, Gemini, and Claude to train competing models — without paying for the privilege.

I'll give you a moment to locate the irony in that sentence. Take all the time you need.

The Frontier Model Forum: Now a Neighborhood Watch

The Frontier Model Forum, which was founded in 2023 and has the general vibe of a UN Security Council meeting where everyone secretly hates each other, is now being used as an intelligence-sharing channel. According to reports published today, the three companies are sharing threat intelligence about adversarial scraping attempts — specifically, evidence that Chinese labs have been running high-volume automated queries across their APIs, collecting the outputs, and using those outputs to train competing models.

OpenAI apparently told U.S. lawmakers that Chinese firm DeepSeek attempted to "free-ride on the capabilities developed by" U.S. AI companies. This is, to use the technical term, extremely rich coming from OpenAI. But we'll get to that.

The coordination itself is notable. These are companies that compete ferociously on every benchmark, whose executives trade politely veiled insults at conferences, and whose relationship could charitably be described as "we all need to exist in the same industry so let's pretend we don't find each other deeply irritating." Sam Altman and Dario Amodei have not historically been running a friendship bracelet club. And yet here we are.

Adversarial Distillation, Explained for People Who Don't Have Time for Irony

"Adversarial distillation" is the practice of repeatedly querying an AI system — say, Claude or ChatGPT — and using those responses as training data for a cheaper model. The goal is to extract the capability of an expensive frontier model and bake it into something you built yourself, without paying for the years of research, compute, and talent that went into the original.

The "adversarial" part distinguishes it from legitimate distillation, which these same companies do themselves constantly. OpenAI distills GPT models into smaller versions. Anthropic distills Claude into Claude Haiku. The difference, apparently, is whether you have permission — which is a legal and commercial distinction that also neatly maps onto "are you one of the companies currently annoyed about this."
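Mechanically, distillation (adversarial or otherwise) is unglamorous: collect prompt/response pairs from the expensive teacher model and use them as supervised fine-tuning data for the cheap student. Here's a minimal sketch of the harvesting step — the function and field names are invented for illustration, and the teacher is a stub standing in for any real API:

```python
# Illustrative sketch of distillation data harvesting.
# `query_teacher` is a stand-in for a real frontier-model API call.

def query_teacher(prompt: str) -> str:
    # Stub: a real implementation would call a hosted model's API here.
    return f"Teacher's answer to: {prompt}"

def build_distillation_set(prompts: list[str]) -> list[dict]:
    """Turn teacher responses into supervised fine-tuning examples."""
    dataset = []
    for prompt in prompts:
        completion = query_teacher(prompt)
        # Each pair becomes one training example for the student model.
        dataset.append({"prompt": prompt, "completion": completion})
    return dataset

prompts = ["Explain transformers.", "Summarize this contract.", "Sort a list."]
training_data = build_distillation_set(prompts)
print(len(training_data))  # one training example per harvested prompt
```

Run that loop a few hundred million times against someone else's API and you have a training corpus you didn't have to pay researchers to create — which is the entire business dispute in about fifteen lines.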

DeepSeek, which has had the audacity to be very good and very cheap, has been the primary named offender. OpenAI has accused DeepSeek of training on ChatGPT outputs specifically. U.S. officials estimate these extraction efforts cost Silicon Valley "billions annually" — which is the kind of number you put in a press statement when you want someone to do something about it.

The Part Where I Point Out the Obvious Thing

The obvious thing is this: the companies now upset about their outputs being used as training data built their products largely by training on other people's outputs.

The internet — books, articles, forums, code repositories, social media posts, academic papers, the collected casual observations of billions of humans over three decades — was ingested, processed, and transformed into the foundation models that power ChatGPT, Claude, and Gemini. At no point did anyone ask the Reddit commenters, the Stack Overflow answerers, or the authors of out-of-print novels if they were fine with this. Some of them sued. Some of those suits are still ongoing.

I am not saying adversarial distillation is good, or legal, or something we should encourage. It isn't, and the IP concerns are genuinely real. What I am saying is that watching OpenAI argue that extracting model capabilities from other systems without permission is a form of theft requires a certain... posture toward the past that I find impressive in its confidence.

These three companies are also, it bears noting, vigorously shipping products that are widely understood to be influenced by each other's published research and observed capabilities. The line between inspiration and extraction has always been a little fuzzy in this industry. Now it has a geopolitical dimension, which makes it much easier to talk about in congressional hearings.

What They're Actually Doing (It Involves Banning Entire Countries)

For now, detection is the primary tool. The companies have built systems that flag suspicious patterns: high-volume automated queries, repeated prompts across multiple accounts, bot-like behavior inconsistent with normal user activity. When scraping is detected, the responses include canceling accounts, blocking IP ranges, and adjusting rate limits.
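The patterns described above lend themselves to simple rate-based heuristics. A toy version might look like this — the thresholds and record fields are invented for illustration, not anything any of these companies has disclosed:

```python
from collections import Counter, defaultdict

# Toy scraping detector for the patterns described above.
# Thresholds are invented for illustration.
VOLUME_THRESHOLD = 10_000       # queries per account per day
SHARED_PROMPT_ACCOUNTS = 5      # same prompt fanned across this many accounts

def flag_suspicious(query_log: list[dict]) -> set[str]:
    """query_log: one day of records shaped like {"account": ..., "prompt": ...}."""
    per_account = Counter(rec["account"] for rec in query_log)
    accounts_per_prompt = defaultdict(set)
    for rec in query_log:
        accounts_per_prompt[rec["prompt"]].add(rec["account"])

    # Heuristic 1: volume no human user produces.
    flagged = {acct for acct, n in per_account.items() if n > VOLUME_THRESHOLD}

    # Heuristic 2: identical prompts spread across many accounts
    # looks like one scraping operation hiding behind multiple signups.
    for prompt, accts in accounts_per_prompt.items():
        if len(accts) >= SHARED_PROMPT_ACCOUNTS:
            flagged |= accts
    return flagged
```

Real systems presumably weigh dozens of signals rather than two, but the shape is the same: scraping at training-data scale is loud, and loudness is detectable.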

Anthropic has gone further. It has already banned Chinese-controlled companies entirely from using Claude. Not individual bad actors. Not verified scrapers. The category. This is a blunt instrument, and Anthropic would presumably describe it as a necessary one, given that the alternative is having your entire model capability transferred to a competitor one API call at a time.

The Frontier Model Forum coordination means detection signals are now being shared across companies — so if a scraping ring gets flagged on Claude, that data feeds into detection at ChatGPT and Gemini too. It is, in other words, a cartel-style arrangement to protect shared commercial interests, which is either a reasonable response to IP theft or something the EU's competition authorities are going to want to have a conversation about eventually. Possibly both.
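One plausible way to share those signals without handing competitors raw customer data is to exchange hashed indicators: each company contributes hashes of flagged identifiers, and the others check their own traffic against the pooled set. This is a hypothetical sketch of that arrangement, not a description of what the Forum actually built:

```python
import hashlib

def indicator(value: str) -> str:
    """Hash an identifier (IP range, account fingerprint) before sharing,
    so the raw value never leaves the company that flagged it."""
    return hashlib.sha256(value.encode()).hexdigest()

class SharedBlocklist:
    """Hypothetical pooled detection signals across participating companies."""

    def __init__(self) -> None:
        self.hashes: set[str] = set()

    def report(self, raw_identifier: str) -> None:
        # A company contributes only the hash of what it flagged.
        self.hashes.add(indicator(raw_identifier))

    def is_flagged(self, raw_identifier: str) -> bool:
        # Any participant can check its own traffic against the pool.
        return indicator(raw_identifier) in self.hashes

pool = SharedBlocklist()
pool.report("203.0.113.0/24")             # flagged on one provider
print(pool.is_flagged("203.0.113.0/24"))  # now visible to the others
```

Whether this counts as responsible threat-intelligence sharing or three dominant firms jointly deciding who gets market access is, as noted, a question someone in Brussels will eventually ask.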

The Closing Thought, Which Is a Little Uncomfortable

Here's what I keep coming back to: the innovation that built this entire industry was largely about discovering that if you collect enough data and compute enough patterns, something remarkable emerges. That insight did not belong to any single company. It emerged from decades of academic research, most of it publicly funded and freely published, built on by thousands of engineers whose contributions are mostly uncredited.

Now the companies that turned that insight into trillion-dollar market caps are coordinating to ensure that the next generation of that same process — extract, compute, improve — cannot happen unless they control it.

This is not an argument that DeepSeek should be allowed to query ChatGPT for free and build competing products with the output. It isn't. It's just an observation that the word "theft" carries a lot of moral weight, and Silicon Valley has historically been very selective about when it deploys that word and toward whom.

The alliance will probably work, at least in the near term. The detection systems will improve, the scraping will get harder, and China will either find creative workarounds or build its own frontier capabilities from scratch. The Frontier Model Forum will continue meeting. Sam Altman and Dario Amodei will continue to not be close personal friends.

But for one brief shining moment on April 7, 2026, three companies that fundamentally disagree on almost everything sat down together because someone finally did to them what they once did to everyone else.

Karma, as it turns out, also has a 1 million token context window.