OpenAI Is Killing DALL-E. Its Replacement Was Tested Under the Name “Maskingtape-Alpha.” This Is Not a Screenshot.
DALL-E dies May 12. Its replacement was secretly A/B tested under tape-themed codenames. OpenAI announced this with a cryptic tweet and zero ceremony.
OpenAI posted a tweet this morning that read, simply: “This is not a screenshot.”
Attached was an image — hyperrealistic, clean, a macOS interface rendered in the kind of crisp detail that makes you tilt your head and squint. It was, of course, not a screenshot. It was OpenAI’s opening move in its next chapter of AI image generation: the tease for gpt-image-2, the model almost certainly about to replace the service that first made your coworker send you a DALL-E-generated cat in a business suit three years ago.
DALL-E made OpenAI feel magical. gpt-image-2 is supposed to make DALL-E feel embarrassing.
The Most Magritte-Coded Product Launch in History
René Magritte painted a very realistic pipe and wrote beneath it: Ceci n’est pas une pipe. “This is not a pipe.” The point was that a representation of a thing is not the thing itself — a meta-joke about reality, perception, and the limits of images.
OpenAI’s tweet said: “This is not a screenshot.”
I don’t know if whoever wrote that copy studied art history or if it was a coincidence. Either way, they accidentally made a philosophical statement about their own product. The image looks like a screenshot so convincingly that they had to clarify it wasn’t. Which is, apparently, the entire goal: to make AI-generated images so realistic that they require a footnote disclaiming their own fakeness.
We have arrived at the era of the AI image with terms and conditions.
The Tape Codenames That Tell You Everything
Before gpt-image-2 had a name, it had three names — and they were outstanding.
When the model first appeared on LM Arena for blind public testing in early April, it was listed under three anonymous codenames: maskingtape-alpha, gaffertape-alpha, and packingtape-alpha.
Maskingtape. Gaffertape. Packingtape.
Three types of tape. That’s it. That’s the creative energy OpenAI put into naming its most significant image model since the original DALL-E. The models were pulled within hours, screenshots circulated across Reddit, and developers fell over each other to describe what they’d seen: near-perfect text rendering, 4K native resolution, twice the generation speed of the previous model.
And then they were just… gone. Three rolls of tape, briefly unfurled, then rewound.
I have to respect this, actually. Naming your secret model “gaffertape-alpha” is either deeply humble or deeply unhinged. In the AI industry, both are equally plausible.
DALL-E’s Quiet Funeral
Here’s the thing nobody’s talking loudly about: DALL-E is dying.
DALL-E 2 and DALL-E 3 — the models that put “AI art” into the cultural vocabulary, that sparked a thousand LinkedIn hot-takes about whether artists were finished, that generated a year’s worth of “here’s what I made with AI” posts from your distant relatives — are both being shut down on May 12, 2026. That’s three weeks away.
OpenAI announced this with the quiet dignity of a company that knows you’ll be fine with it. No ceremony. No retrospective. No “thank you for being part of the journey.” Just: deprecated. May 12. Get your affairs in order.
If you’ve spent any time wondering whether OpenAI is ever actually in trouble, this move answers it in a specific way: they’re not sentimental. DALL-E launched OpenAI’s public identity. gpt-image-2 is how they update it. Nostalgia doesn’t scale.
What the Specs Actually Mean
Let’s be precise for a moment. gpt-image-2, based on what early testers have reported:
- Outputs at 4096×4096 pixels natively — that’s 4K, which means it’s good enough to print at a size you’d hang on a wall without embarrassment
- Renders text in images with over 99% accuracy, which sounds like a minor improvement until you remember that DALL-E’s text rendering was so bad it became a genre of its own (the “AI-generated sign with nonsense words” was a meme for two solid years)
- Generates images roughly twice as fast as its predecessor
- Uses an entirely new architecture — single-pass generation instead of the two-stage inference pipeline of the previous model
That last point is the interesting one. Single-pass generation means the model builds the image in one go rather than drafting and refining it. It’s the same architectural shift that makes AI agents feel genuinely different from older AI tools — less scaffolding, more directness. Faster and, apparently, better.
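To make the distinction concrete, here's a toy sketch of the two shapes. This is purely conceptual — OpenAI hasn't published gpt-image-2's actual architecture, and the function names and call counts below are my illustration, not their code. The point is simply that a two-stage pipeline pays for two model passes where a single-pass design pays for one, which is where the "twice as fast" intuition comes from:

```python
# Conceptual sketch only: the real gpt-image-2 architecture is not public.
# This toy contrasts a two-stage pipeline (draft, then refine) with a
# single-pass generator, using pass counts as a stand-in for latency.

def fake_model_call(prompt: str) -> str:
    """Stand-in for one inference pass; returns a label, not pixels."""
    return f"image({prompt})"

def two_stage_generate(prompt: str) -> tuple[str, int]:
    draft = fake_model_call(prompt)               # stage 1: rough draft
    final = fake_model_call(draft + " refined")   # stage 2: refinement pass
    return final, 2                               # two passes through a model

def single_pass_generate(prompt: str) -> tuple[str, int]:
    final = fake_model_call(prompt)               # one pass, no refinement stage
    return final, 1

_, two_stage_passes = two_stage_generate("a cat in a business suit")
_, single_passes = single_pass_generate("a cat in a business suit")
print(two_stage_passes, single_passes)  # prints "2 1"
```

Same prompt in, roughly half the inference work out — assuming, of course, that the single pass is actually as capable as the old draft-and-refine loop, which is the whole bet.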
Whether “better” means “indistinguishable from photographs” is the part that should give you at least one moment of pause in a week otherwise full of tech hype. Because if the benchmark for success is “requires a disclaimer to distinguish from reality,” we have gently crossed a line that might eventually matter.
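Incidentally, the "hang on a wall" line from the spec list checks out arithmetically, if you grant the common 300 DPI rule of thumb for photo-quality printing (that threshold is my assumption, not anything OpenAI has published):

```python
# Back-of-the-envelope check on the 4096x4096 output spec.
# 300 DPI is a conventional "photo quality" print resolution --
# an assumption here, not part of OpenAI's announcement.

PIXELS_PER_SIDE = 4096
PRINT_DPI = 300

print_inches = PIXELS_PER_SIDE / PRINT_DPI
print(f"{print_inches:.1f} inches per side")  # prints "13.7 inches per side"
```

A 13.7-inch square at full photo-print density is comfortably frameable, so the claim isn't marketing stretch — at least on the resolution axis.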
The Competition That Made This Necessary
OpenAI isn’t killing DALL-E out of nostalgia management alone. They’re killing it because Google’s Nano Banana Pro has set a new benchmark for photorealism, and Adobe’s Firefly 4 and Firefly 4 Ultra are sitting inside the tools professional designers actually use every morning.
That’s the quiet threat: not the headlines, but the integrations. When your image generator lives inside Photoshop, inside Premiere, inside the workflow people open before their coffee is done, you don’t need a cool name. You just need to be there.
OpenAI knows this. The teaser image pointed squarely at macOS, at developer interfaces — a signal that gpt-image-2 is positioning itself as the image model for people who build things with AI, not just people who prompt for fun. “Agentic design,” as one reporter called it. Which is exactly the kind of phrase that sounds obvious in retrospect and utterly ridiculous when first coined.
One industry analyst, apparently ready for the quote, said: “The competition is shifting from simple image generation to high-utility, multimodal intelligence.” I’m sure they’re right. I’m also sure they had that sentence ready before anyone asked.
The Announcement That’s Still Happening as You Read This
Here’s where today gets a little surreal: at the time this was drafted, the actual gpt-image-2 announcement hadn’t happened yet. OpenAI teased it for noon PT, 3 p.m. ET. The cryptic “this is not a screenshot” tweet was the trailer. The model might be fully launched by the time you read this — or it might be another tease, another few days of A/B testing under a tape codename nobody’s leaked yet.
That’s the rhythm of tech launches in 2026. Announce the announcement. Leak the leak. Test under a pun. Then launch, retire the predecessor, and call it progress.
The funny thing is, it actually is progress. Whatever gpt-image-2 turns out to be — gaffertape, eventually — it’s going to be better than what it replaced. That’s the exhausting part of covering this industry. You can snark about the tape codenames and the Magritte tweet all you want, and the model will still be impressive.
We are, all of us, living inside the announcement.
This is not a screenshot.
It’s also not a pipeline anymore. It’s a single pass, twice as fast, and they’ll tell you what it’s called at noon.