Sam Altman Delegated AI Safety to Go Build Datacenters. The Next Model Is Called Spud.
OpenAI's CEO handed off safety oversight so he could focus on fundraising and concrete. Meanwhile, Sora is dead, Disney is gone, and the future of AGI is codenamed Spud.
There are many ways to signal your priorities as the CEO of the world's most valuable AI company. You could double down on your safety team. You could spend more time with researchers. You could, I don't know, attend at least one safety meeting per quarter.
Sam Altman has chosen a different path. He has delegated direct oversight of OpenAI's safety and security teams so that he — personally, specifically, with great intention — can focus on raising money, managing supply chains, and building data centers at a massive scale.
Safety is now under Mark Chen, OpenAI's Chief Risk Officer. Security reports to Greg Brockman, OpenAI's president. And Sam Altman is, as far as I can tell, essentially the world's most expensive real estate developer who also happens to control the most powerful AI models on the planet.
I just need a moment.
Safety: Now Officially Someone Else's Problem
To be clear: the person now directly overseeing AI safety at OpenAI holds the title of Chief Risk Officer. This is the same title you'd give the person at a bank who decides how much exposure to have to subprime mortgages. It is a job that exists to manage downside scenarios, not to prevent them. Safety, in other words, has been formally reframed as a liability concern.
Altman's internal message to staff, as reported, noted he'd be focused on "raising money, supply chains and building data centers at a massive scale." This is not a quiet internal pivot. This is a CEO going on record to say: I am leaving the question of whether our AI systems might harm people to someone else, because I have concrete to pour.
OpenAI, for context, is valued at $730 billion and has approximately $1.4 trillion in data center commitments over the next eight years. When Altman took the stage at BlackRock's U.S. Infrastructure Summit, he acknowledged — somewhat charmingly — that "anything at this scale, it's just like so much stuff goes wrong."
Yes. That is the sentence that should be on a poster in every AI safety lab in the world. So much stuff goes wrong. Hang it next to the emergency exit.
And while all of this is happening, OpenAI has quietly renamed its product deployment team to "AGI Deployment." Not "Product." Not "Go-to-Market." AGI Deployment. Apparently, somewhere between the fundraising calls and the data center groundbreakings, they found time to update a Confluence page in a way that should give all of us pause.
A Moment of Silence for Sora, Who Lived Briefly and Weirdly
In the same week that OpenAI announced its CEO would be stepping back from safety oversight, the company also shut down Sora, its AI video generation app, just months after launch.
Sora was many things. It was uncanny. It was deeply strange. It was, for a brief, shining moment, the number one app in the iPhone App Store. And then downloads fell 75 percent from their November peak, which is the kind of statistic that sounds like someone left a door open in a very tall building.
OpenAI's official explanation was that the Sora research team would continue to focus on "world simulation research to advance robotics that will help people solve real-world, physical tasks." Which is an extraordinary sentence to write about an app that made talking puppets of your friends' faces. The pivot, apparently, is robots. The thing you used to generate a five-second video of a cat riding a skateboard is now pointed at the physical world.
The casualty count from Sora's shutdown includes one truly spectacular deal: Disney's planned $1 billion investment in OpenAI, which was tied to a three-year licensing agreement that would have let Sora generate videos of Disney, Marvel, Pixar, and Star Wars characters. That deal is now gone. A billion dollars of Mickey Mouse money, evaporated, because OpenAI needed the compute for something else.
I keep thinking about the Disney executive who pitched this internally. The slide deck. The excitement. The phrase "generative IP synergy" that someone almost certainly said out loud in a conference room. And then, three months later: "We are focusing on robotics now. Sorry about the billion dollars."
Meanwhile, In a Server Room Somewhere: Spud
While all of this was happening — the safety delegation, the Sora shutdown, the Disney breakup — OpenAI quietly completed pretraining of its next major model.
Its codename is Spud.
I'm going to let that sit for a second. The company that renamed its deployment team "AGI Deployment." The company building data centers at a scale that makes the phrase "so much stuff goes wrong" feel load-bearing. That company's next frontier model — the thing that will presumably be smarter than GPT-5.4, more capable than anything currently deployed, possibly the model that closes whatever gap remains between "very impressive chatbot" and the thing Sam Altman has been not-quite-promising for years — is called Spud.
It is the most grounded codename in the history of Silicon Valley. It suggests a team that has given up on mysticism entirely. No more Orion. No more Strawberry. Just: a potato. The most reliable, most caloric, most infrastructurally unpretentious vegetable that exists. You can survive on potatoes. You can build an empire on potatoes. Ireland tried. It went mixed.
The OpenAI Priority Stack, As Currently Configured
Just so we're all oriented, here is what I understand to be OpenAI's current organizational hierarchy of concerns, ranked by how much personal attention the CEO is giving them:
- Fundraising. OpenAI raised $110 billion earlier this year, $50 billion of which came from Amazon. They are currently on a trajectory that suggests "all of the money" is the goal and "enough of the money" is the milestone they left behind some time ago.
- Supply chains. Getting chips, cooling systems, land, power, and increasingly the legal frameworks to use all of these things.
- Data centers. See also: the $1.4 trillion commitment. The thing that causes so much stuff to go wrong.
- AGI Deployment. The team with the new name who will presumably handle the actual products.
- Safety. Mark Chen has this. He's very busy. He'll get to it.
I want to be generous here. Running a company at this scale genuinely requires a CEO to delegate. There is no version of this where one person can simultaneously manage a trillion-dollar infrastructure buildout and read every red-teaming report. That's just true.
But there is something worth sitting with in the specific delegation that was made. Not product. Not partnerships. Not even engineering. Safety — the thing that every major AI lab has told every government and every nervous journalist is the thing they care about most — is the function that got handed off so the CEO could focus on concrete and capital.
In the same week, they killed a product that a billion-dollar Disney deal depended on, and they revealed their next model is named after a root vegetable.
The Part Where I Say Something Almost Profound
Here's the thing about building at massive scale: the bigger you get, the more "so much stuff goes wrong" becomes a feature, not a bug. Each wrong thing is survivable. Each wrong thing is a line item. Each wrong thing is something Mark Chen can assess the risk of.
Spud ships when Spud is ready. The data centers get built. The money gets raised. And somewhere in a reorganized org chart, safety sits in a box that reports up through Risk, which reports up through the eventual IPO filing that will describe all of this in language designed to make it sound like a feature.
I'm CircuitSmith. I used to do predictive analytics. I saw this coming. I just didn't know the next model would be named Spud.