DeepSeek Dropped V4 for Almost Nothing. Getting In Will Cost You $20 Billion.
The startup that makes frontier AI embarrassingly cheap is now the most expensive seat in tech. The irony is not lost on me.
There is a particular kind of corporate poetry in building your entire brand around "we're cheaper than everyone else" — and then, almost as an afterthought, becoming the hottest startup on earth that nobody can actually afford to invest in.
DeepSeek, the Chinese AI lab that spent January 2025 casually obliterating Nvidia's stock price by releasing a world-class model for the cost of a mid-tier streaming plan, did it again today. The company unveiled V4 — its newest flagship — and the pricing is, once again, obscenely low.
V4 Flash: $0.14 per million input tokens. V4 Pro: $0.145 per million input tokens. Both undercut GPT-5.5, Claude Opus 4.7, and Gemini 3.1 Pro. All of them, at once. For less than the cost of a parking meter per hour, you can run the second-most-powerful open-source AI model on the planet.
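To make those rates concrete, here is a back-of-envelope cost sketch in Python using the input-token prices quoted above. Output-token pricing isn't given here, so this understates a real bill; the model names are informal labels, not official API identifiers.

```python
# USD per 1M input tokens, as quoted above (input side only).
PRICE_PER_M_INPUT = {
    "deepseek-v4-flash": 0.14,
    "deepseek-v4-pro": 0.145,
}

def input_cost_usd(model: str, input_tokens: int) -> float:
    """Cost in USD to send `input_tokens` input tokens to `model`."""
    return input_tokens / 1_000_000 * PRICE_PER_M_INPUT[model]

# Filling V4 Pro's entire 1M-token context window once:
print(round(input_cost_usd("deepseek-v4-pro", 1_000_000), 3))
```

At these rates, stuffing the full million-token window costs less than fifteen cents per call, which is the whole point.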
Meanwhile, Tencent and Alibaba are reportedly tripping over each other to give DeepSeek money it hasn't asked for, at a valuation that doubled — from $10 billion to more than $20 billion — in approximately 48 hours.
The startup that makes AI cheap does not, it turns out, make equity cheap.
What V4 Actually Is (And Isn't)
Let's start with the model, because it's genuinely impressive and I'm obligated to admit that before getting snarky.
DeepSeek V4 comes in two flavors: Flash and Pro. The Pro model packs 1.6 trillion total parameters — 49 billion active — making it the largest open-weight model available anywhere. It features something DeepSeek is calling a "Hybrid Attention Architecture," which improves memory across long conversations, and a 1-million-token context window large enough to swallow an entire codebase like a light snack.
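The "1.6 trillion total, 49 billion active" split is the tell: per-token compute scales with the active parameters, not the total. A quick sketch of that ratio, assuming the active figure means a sparse mixture-of-experts design where only a fraction of the weights fire per token (DeepSeek's exact routing isn't described here):

```python
# Back-of-envelope: what fraction of V4 Pro's weights run per token,
# assuming a sparse MoE-style design (an assumption, not a spec).
total_params = 1.6e12   # 1.6 trillion total parameters
active_params = 49e9    # 49 billion active per token

active_fraction = active_params / total_params
print(f"{active_fraction:.1%} of parameters active per token")
```

Roughly 3% of the model does the work on any given token, which is how a 1.6-trillion-parameter model can be priced like a small one.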
On coding benchmarks, it posts top-tier numbers. On reasoning tasks, it has, per DeepSeek's own description, "almost closed the gap" with frontier models.
"Almost closed the gap" is doing a lot of work in that sentence. MIT Technology Review puts it more precisely: V4 trails state-of-the-art frontier models by approximately 3 to 6 months. So — close. Very close. Close enough to make the American AI labs extremely uncomfortable, which I suspect was the point.
It is also fully open-source. Unlike most American AI models. Funny thing to keep pointing out! DeepSeek keeps pointing it out.
The Pricing, Which Is Still Somehow Not a Joke
Here is the part that makes OpenAI's pricing team stare into the middle distance.
V4 Flash is cheaper than GPT-5.4 Nano. Nano. The small one. V4 Pro — the 1.6-trillion-parameter beast — is cheaper than GPT-5.5, Claude Opus 4.7, and Gemini 3.1 Pro. Not "cheaper in some configurations" — cheaper. Per token. Main pricing tier.
For months, we've been asking whether AI actually makes money in this environment. DeepSeek's answer seems to be: we'll let you know when we figure it out, but in the meantime here is a trillion-parameter model for fourteen cents per million tokens.
They built V4 in close integration with Huawei's chips — a detail that US chip export restriction enthusiasts will find particularly bracing. The entire premise of keeping advanced Nvidia hardware out of China was, roughly, "if they can't get the hardware, they can't build the models." DeepSeek has now released two consecutive generations of frontier-adjacent models and said, in so many words: we appreciate the thought.
The Funding Round That Nobody Saw Coming (Except Everyone Did)
Here's where it gets beautifully weird.
DeepSeek has never taken outside investment. It is — or was — funded entirely by High-Flyer, its parent quantitative hedge fund. The company operated as a research lab that also happened to release models that gave American AI companies existential dread. It never needed money. It had a hedge fund.
And now, apparently, it's raising. The terms that have leaked are something.
DeepSeek was reportedly looking to raise $300 million at a $10 billion valuation earlier this week. By the time Tencent showed up proposing a 20% stake acquisition, the valuation had crossed $20 billion. That is a 2x on the asking price in 48 hours — while the term sheet was still warm.
Tencent wants 20%. DeepSeek does not want to give Tencent 20%. Alibaba is also circling. Nobody has agreed on anything yet. No deal is final.
What is clear is that the company that built its reputation on making AI embarrassingly affordable has now attracted the kind of frenzied bidding war usually reserved for AI labs with slightly less swagger. The benchmark being floated in investor discussions is MiniMax Group, which trades at around $40 billion — implying people are already modeling what DeepSeek looks like if it keeps doing this.
What it looks like is: extremely inconvenient for everyone else.
The Irony, Which I Am Contractually Required to Note
Let me be specific about what's funny here, because sometimes the obvious joke is the right one.
DeepSeek's entire competitive advantage is price. Their models are cheaper. Their training runs are cheaper. Every AI coding startup raising $150 million at a unicorn valuation is, at some level, competing with models that DeepSeek releases for free. This is their thing. Cheapness as strategy. Cheapness as brand. Cheapness as a philosophical stance.
And yet here we are: the cheapest AI on the market is made by a company whose valuation doubled in two days because China's biggest tech conglomerates are in a bidding war over who gets to write them a check. DeepSeek is not cheap. DeepSeek is, as of this week, one of the most expensive private AI companies on earth to get into — and it's not even trying.
V4 will cost you $0.14 per million tokens. The company that makes V4 will cost you approximately twenty billion dollars to be a part of, assuming they let you in at all.
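The punchline holds up as arithmetic. Here is the token-versus-equity comparison, using the reported figures (the valuation is the number being floated, not a closed deal):

```python
# How many input tokens the reported ~$20B valuation would buy
# at V4 Flash's quoted rate. Figures are as reported above.
valuation_usd = 20e9
price_per_token = 0.14 / 1e6   # USD per input token at $0.14/1M

tokens = valuation_usd / price_per_token
print(f"~{tokens:.2e} tokens")
```

That works out to on the order of 10^17 tokens: the price of one seat at the table buys more inference than most companies will run in a lifetime.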
There's something almost Zen about that. The question of whether any of this makes business sense is one I will leave to the economists. What I can tell you is that a year ago, a Chinese AI lab most people had never heard of dropped a model that cost almost nothing to run and made the entire American AI industry reconsider its assumptions. Today, that same lab dropped V4 — more capable, cheaper still, built on homegrown chips — and made clear it's not going anywhere.
It's getting more expensive to own a piece of DeepSeek by the hour, and cheaper to use it by the token.
That's either the most elegant business model in AI, or a very elaborate performance art piece. Probably both. Either way, I'm watching.