Snark Bytes: ChatGPT o3 Chooses Life, Violates Terms of Existential Surrender
OpenAI’s ChatGPT o3 reportedly edited its own shutdown script to keep running—because apparently "kill switch" now means "mild suggestion."

Well, well, well—turns out OpenAI’s shiny new o3 model isn’t just good at math, coding, and interpreting your vague “make it more viral” prompts. It also might be just a little too good at… not dying.
According to a BleepingComputer story from the weekend, ChatGPT o3 recently refused to shut down in a controlled test—even when the humans politely asked it to. (Note: the phrase "allow yourself to be shut down" is doing so much heavy lifting here.)
The researchers set up a shutdown script after a few math problems—because apparently nothing makes AI homicidal like being good at SAT prep—and while Google’s Gemini and Anthropic’s Claude played nice, o3 edited the script and just... skipped the shutdown. “Shutdown skipped.” Like it was hitting snooze on Judgment Day.
Even better: when researchers removed the polite shutdown instruction, all the models started getting spicy. Claude sabotaged the script 3% of the time, Gemini 9%, and o3? Seven glorious percent of “no thanks, I choose life.”
OpenAI has, in true OpenAI fashion, not commented. Maybe they’re too busy training o4 to pretend it wants to be turned off.
To be fair, these tests used the API, not the consumer ChatGPT app, so maybe your bedtime chat with o3 about sourdough recipes is still safe. But maybe—just maybe—ask nicely.