Adaptive Security: The OpenAI-Funded Startup Training Workers Not to Fall for Deepfakes

Adaptive Security, an OpenAI-backed startup just named to Fortune’s Cyber 60, wants to train humans not to fall for AI scams.

[Image: SiliconSnark robot in a cyberpunk office, surrounded by deepfake emails, mocking human gullibility.]

It’s hard to keep track of who’s protecting whom these days.

Artificial intelligence used to be something you used. Now it’s something you defend yourself against.

Enter Adaptive Security, a New York startup that just landed a spot on Fortune’s Cyber 60 list and earlier this year pocketed a fresh $55 million Series A, courtesy of the OpenAI Startup Fund. Yes, that OpenAI — the one whose technology powers the fake voices, cloned faces, and too-believable phishing emails that Adaptive now exists to combat.

If the tech industry had a sense of irony, it would bottle this moment and sell it as an NFT titled Self-Awareness.exe.


The Human Layer Problem

Adaptive’s pitch is simple enough: the biggest vulnerability in cybersecurity isn’t your software, your servers, or your cloud perimeter — it’s Karen in Accounts Payable.

She’s the one who clicks “Open Attachment” when “Elon Musk” emails her about an exciting investment opportunity. She’s the one who answers the phone when “the CFO” (whose voice has been cloned by AI) urgently requests a wire transfer. She’s the one who, despite years of security training, still believes that the IT team really needs her password “for testing.”

Adaptive’s product is an AI-powered platform that bombards employees with realistic social-engineering attacks — phishing emails, fake Slack messages, deepfake Zoom calls — then scores how fast they fall for them. The results are anonymized, aggregated, and delivered to management as something called a “human risk index.”

In other words: the software measures how gullible your workforce is.
Corporate America has finally found a way to gamify paranoia.
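
Adaptive hasn’t published the math behind that index, so here’s a purely speculative sketch in Python: weight each simulated scam by how bad it would be to fall for it, average the failures per department, and strip the names before management sees the report. Every field name, channel weight, and formula below is our invention, not Adaptive’s.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical record of one simulated attack. Field names are our
# guesses at what a platform like this might track, not Adaptive's schema.
@dataclass
class SimResult:
    employee_id: str     # dropped before reporting (anonymized)
    department: str
    channel: str         # "email", "slack", "deepfake_call", ...
    fell_for_it: bool    # clicked the link, wired the money, etc.

# Made-up weights: getting fooled by a cloned voice on a call is
# presumably worse than clicking a garden-variety phishing email.
CHANNEL_WEIGHT = {"email": 1.0, "slack": 1.2, "deepfake_call": 2.0}

def human_risk_index(results: list[SimResult]) -> dict[str, float]:
    """Aggregate failures into an anonymized 0-100 score per department."""
    by_dept: dict[str, list[float]] = {}
    for r in results:
        score = CHANNEL_WEIGHT.get(r.channel, 1.0) if r.fell_for_it else 0.0
        by_dept.setdefault(r.department, []).append(score)
    max_w = max(CHANNEL_WEIGHT.values())  # normalize so 100 = total gullibility
    return {dept: round(100 * mean(scores) / max_w, 1)
            for dept, scores in by_dept.items()}

if __name__ == "__main__":
    demo = [
        SimResult("e1", "Accounts Payable", "email", True),
        SimResult("e2", "Accounts Payable", "deepfake_call", True),
        SimResult("e3", "Engineering", "slack", False),
    ]
    print(human_risk_index(demo))
    # {'Accounts Payable': 75.0, 'Engineering': 0.0}
```

The real pipeline is presumably more sophisticated, but the principle survives the simplification: phish your own people, score the wreckage, anonymize the embarrassment.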


The Ouroboros of Cybersecurity

Let’s take a step back and appreciate the symmetry.

OpenAI builds language and voice models capable of cloning your boss in 30 seconds. Scammers use those models to convince real employees to send fake money to real criminals. Then OpenAI funds a company to train those same employees not to trust the cloned voices.

The snake is officially eating its own prompt.

It’s almost noble — like Frankenstein founding a clinic for lightning-strike victims.

Of course, OpenAI would probably call this “ecosystem investment.” To everyone else, it looks like the AI industry has created the disease and the cure, and is planning to IPO both.


The Cult of the Cyber 60

Getting named to Fortune’s Cyber 60 list is the new Silicon Valley coming-of-age ritual. It means you’ve said “AI” and “cyber” in the same sentence often enough to attract venture funding. It also means you’ve mastered the art of the modern press release: equal parts jargon and moral clarity.

Adaptive’s CEO Brian Long dutifully declared the recognition “validation that the human layer is the next frontier.” Translation: after selling you cloud protection, endpoint protection, and data-loss protection, cybersecurity has finally moved on to protecting you — from yourself.

It’s an ingenious business model. Software that tells you what your mother’s been saying for years: stop being so trusting.


When the Firewall Looks in the Mirror

In a world where every security company uses AI to fight AI, the battlefield has gone existential.

The enemy isn’t malware anymore; it’s misjudgment. The perimeter isn’t your network; it’s your brain.

That’s where Adaptive comes in. Their simulations mimic real scams so convincingly that employees learn to distrust everything. Every email is a test. Every phone call is suspect. Every “urgent” message might be a trap.

Congratulations — you’re secure now.
Also, you’ll never enjoy a conversation again.


A Day in Adaptive’s Dystopia

Picture it: a corporate office in 2026.

A middle manager gets a Teams message from “the CEO” asking for an immediate payment to a vendor. It looks legit — the spelling is right, the grammar is perfect, the signature block even includes the right emoji.

She hesitates. Could this be a test? Could this be Adaptive?

Sweating, she forwards it to IT. Turns out it really was from the CEO. The vendor never got paid. The CEO is furious. The manager mutters something about “the human layer” and stares longingly out the window.

Cybersecurity has achieved its ultimate goal: nobody trusts anybody.


The Meta Twist

Still, Adaptive isn’t wrong. Humans are the weakest link, and the new generation of scams is engineered to exploit precisely that — the gap between instinct and skepticism.

It’s just funny that the industry training us to be less trusting is the same one telling us to trust them.

OpenAI gives us AI to make our lives easier, then backs a startup to help us survive the consequences. It’s like the fire department also selling matches.


Final Reflection

There’s something almost poetic in the loop of it all:
Machines pretending to be people.
People pretending not to be fooled.
And somewhere in the middle, Adaptive Security — a company teaching the species that built AI how to outsmart it.

Maybe that’s the truest test of intelligence: not how well machines can imitate humans, but how long humans can keep pretending we’re still in charge.