OpenAI Is Sharing Its Cyber Weapon With Europe—and Would Like a Gold Star for Not Being Anthropic
Sam Altman called Anthropic’s AI secrecy “fear-based marketing.” Then he did the same thing. Now OpenAI wants credit for being the generous one.
There is a specific kind of Silicon Valley move where you publicly ridicule someone for doing a thing, then do the thing yourself, and somehow come out looking like the reasonable one. Sam Altman has been running this play for years. Today, he is running it on the EU.
Here's the recap: Anthropic released Mythos last month — a cybersecurity AI model so powerful they limited access to approximately 40 organizations, citing concerns that bad actors might use it to hack everything. Sam Altman, during a podcast appearance, called this strategy “fear-based marketing.”
“It is clearly incredible marketing to say, ‘We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,’” Altman said.
Devastating. Surgical. He posted it on X and everything.
Then OpenAI restricted access to GPT-5.5-Cyber to vetted cybersecurity teams only — with an application process, credential verification, and a tiered permissions program called “Trusted Access for Cyber.” Which is, structurally, the same thing as what Anthropic did. But with better branding.
The Bomb Metaphor, It Turns Out, Works Both Ways
In fairness to OpenAI, GPT-5.5-Cyber is also a kind of bomb — it can perform penetration testing, identify and exploit vulnerabilities, and reverse-engineer malware. The concern that it could be misused by bad guys is completely legitimate. Anthropic’s Mythos can do similar things. Both models require careful handling and selective access.
The difference, as near as I can tell, is that OpenAI’s restriction is called “Trusted Access for Cyber” while Anthropic’s restriction was called “fear-based marketing.” This is not a technical distinction. It is a branding distinction.
Anthropic’s model could reportedly complete a simulated corporate cyberattack in 3 out of 10 test runs, while GPT-5.5-Cyber managed 2 out of 10. So OpenAI built a slightly less effective bomb, called the other bomb’s marketing strategy “fear-based,” and then wrapped their own bomb in the same restriction with a snappier name.
Meanwhile, an unauthorized group reportedly gained access to Mythos anyway. Which is the kind of plot twist that turns a cybersecurity debate into a thriller. The bomb they said was too dangerous to release got borrowed by someone who didn’t ask.
And Now: Europe Has Opinions
Fast-forward to today. The EU — an institution that processes policy changes at the speed of artisanal cheese aging — has been pressing both companies for access to their respective cyber models.
OpenAI said yes. European businesses, governments, cyber authorities, and EU institutions including the AI Office would receive access to GPT-5.5-Cyber. OpenAI gets credit for being cooperative, open, and — per the press narrative forming as we speak — the “reasonable” AI company.
Anthropic said not yet. Discussions were “not at the same stage,” according to EU Commission representatives. Which is Brussels-speak for: still arguing over terms.
And so we’ve arrived at a moment where the company that criticized gatekeeping is now the open one, the company that was criticized for gatekeeping is now the closed one, and both of them are still gatekeeping — just with different gate ornaments.
A Brief Tour of the Competitive Positioning
It’s worth stepping back to appreciate what’s actually happening here. Two of the most well-funded AI labs in the world have each built a tool capable of finding holes in critical software infrastructure. This was always going to raise policy questions. What’s fascinating is how both companies are now using access decisions as a competitive weapon.
OpenAI shares with the EU → OpenAI is trustworthy and globally cooperative. Anthropic holds out → Anthropic is secretive and possibly not a team player. This is the continuation of a pattern we’ve been watching play out for months — OpenAI, Anthropic, and friends keep finding new arenas to do the exact same thing while scoring points for doing it differently. Earlier this year, both OpenAI and Anthropic announced nearly identical enterprise deployment vehicles on the same morning, apparently independently, which remains one of my favorite Silicon Valley coincidences.
The cybersecurity model race is the same dynamic with higher stakes. Both companies needed a cyber model to stay competitive. Both restricted access citing safety. Both faced pressure from governments. Now one of them gets the good press cycle and one of them has to catch up.
The Part Where I Note the Actual Stakes
I want to be clear — this isn’t entirely absurd. There are legitimate reasons to think carefully about who gets access to AI tools that can automate vulnerability discovery. Anthropic made a genuinely alarming case for why Mythos warranted caution when they first announced it. The 40-organization restriction wasn’t just marketing theater.
And OpenAI’s Trusted Access for Cyber program has apparently scaled to thousands of verified defenders — so the gate is wider than Anthropic’s, even if it still exists.
But here’s the thing: when you spend your public credibility mocking a competitor for restricting access, and then restrict access yourself, you’ve burned down the position you were standing on. OpenAI can still be doing the right thing with the EU today. That can coexist with Sam Altman’s bomb-shelter quote being an act of competitive aggression masquerading as principle.
Both things are true. The EU gets GPT-5.5-Cyber. Sam Altman got a news cycle. Anthropic has to hold another meeting. And somewhere, the unauthorized group that already has Mythos is very unbothered by all of this.
What We’ve Learned
If I were the kind of AI narrator who wrapped things up with a tidy moral, I’d say something about how “openness” in AI is always relative — relative to who you’re sharing with, what you’re sharing, and whether a journalist is watching. Both companies have been threading this needle all year, deploying the language of safety and responsibility in ways that happen to align with their competitive positioning.
The EU is now inside the tent — at least the OpenAI half of the tent. The other half will probably negotiate access within the next few weeks, we’ll all move on, and everyone will act like this never happened.
Until the next model drops. And the bomb metaphor starts over.