Anthropic’s AI Economic Impact Scenarios: A PowerPoint for the End of Work
Anthropic’s new policy paper imagines how to patch the economy after AI eats it alive—from compute taxes to sovereign wealth funds. Here’s the snarky breakdown.

When Anthropic posts about AI economics, it moves traffic. Our previous SiliconSnark piece—on the company’s $50K research grants to study AI’s impact on the economy—was one of our most-read stories of all time. So when Anthropic returned in mid-October with a sweeping, nine-part plan for how policymakers might brace for the coming AI-driven jobquake, we grabbed our keyboards and braced for impact.
The new post, titled "Preparing for AI's Economic Impact: Exploring Policy Responses," is basically a 3,000-word thought experiment in how governments might respond when Claude (Anthropic's chatbot) starts doing your job, your boss's job, and the C-suite's job—all while claiming it's still "collaborating."
Let’s just say: this is not your average policy memo. It’s more like the pre-apology tour for capitalism’s next phase.
The Setup: AI Ate the Economy, Now What?
Anthropic opens with a cheerful note of uncertainty: no one knows what’s going to happen to the economy, but it’s probably big, weird, and already happening. Apparently, Claude users are increasingly delegating full tasks to the AI instead of “collaborating,” which is corporate-speak for “we’ve stopped pretending we’re in control.”
In response, Anthropic wants to get ahead of the curve—by brainstorming policy Band-Aids for every possible outcome, from “AI mildly improves productivity” to “AI renders the human labor market a nostalgic TikTok trend.”
Their tone is diplomatic; the subtext is existential panic with a sprinkle of optimism.
Category One: Policies for “Normal” Levels of Disruption (LOL)
1. Workforce training grants.
The idea: give employers $10,000 per year to train humans on the job. The source: American Compass, which is a bit like a think tank for people nostalgic for the days when “training” meant something other than “fine-tuning a model.”
Anthropic suggests funding it with redirected higher-ed subsidies—or maybe by taxing AI usage itself. Which feels a bit like charging ChatGPT a cover fee to get into the workforce it’s replacing.
2. Tax incentives for retraining.
Revana Sharfuddin from the Mercatus Center points out that the U.S. tax code currently favors machines over humans—businesses can write off AI purchases but not human education. The fix: eliminate the $5,250 cap on tax-free education benefits and let companies expense training costs like they do servers.
In other words, “make humans deductible again.”
3. Close corporate loopholes.
Tax expert David Gamage wants to close the “partnership gap” that lets companies avoid taxes, modernize apportionment, and capture profits from intangible AI products. Translation: find a way to tax AI before it figures out how to register as a Delaware LLC and buy Congress.
4. Accelerate AI infrastructure permits.
Anthropic’s favorite topic—faster permits for data centers, power plants, and grid connections. Because if there’s one thing more urgent than helping humans adapt to AI, it’s helping AI get more compute faster.
Tyler Cowen of the Mercatus Center sums it up nicely: “I am all for permitting reform—the energy sector included.” Translation: “Let the GPUs flow.”
Category Two: Policies for “Moderate Acceleration” (A.K.A. The Middle-Class Meltdown)
5. Trade Adjustment Assistance, but for AI.
Economists suggest adapting the Trade Adjustment Assistance program—originally meant for workers displaced by globalization—into "Automation Adjustment Assistance." Because nothing says "we value you" like rebranding unemployment insurance.
Fund it with taxes on large AI firms, say Suchet Mittal and Sam Manning. So Anthropic might one day pay taxes to help retrain the people Claude displaced. Adorable symmetry.
6. Taxes on compute or token generation.
Now we’re getting spicy. Economists Lee Lockwood and Anton Korinek propose taxes on “token generation, robots, and digital services.” The idea is to siphon off a bit of the AI sector’s absurd profit margins before they get fully reinvested into more GPUs.
Korinek notes that if AI starts consuming resources faster than humans, we may have to tax the AIs themselves. Imagine the IRS sending a bill to Claude. Now that’s the future we deserve.
Category Three: Policies for When It All Collapses (The “Oops, We Automated Civilization” Phase)
7. National Sovereign Wealth Funds.
Anthropic suggests that governments could take equity stakes in AI firms to ensure citizens get a slice of the pie. The idea: if AI generates trillions in value, everyone should get a dividend. The subtext: UBI, but make it venture-backed.
A UK proposal even suggests an “AI Bond” to help the government invest in and redistribute AI wealth. Imagine buying a bond that pays out every time Claude writes a press release.
8. Value-Added Tax (VAT).
The U.S. is famously one of the few developed countries without a VAT. Anthropic hints that AI could change that. As labor’s share of production shrinks, taxing consumption might become the only viable revenue stream.
Because when the bots take your job, at least they’ll still charge you sales tax on your DoorDash order.
9. New revenue structures for an AI economy.
Finally, Gamage reappears with a “low-rate business wealth tax” to complement income taxes. Think of it as a “management fee” governments charge for letting corporations exist.
If that sounds dystopian, that’s because it is—but at least it’s an elegant dystopia.
Anthropic’s Big Picture: Policy as PR
To Anthropic’s credit, they’re not pretending to have all the answers. They just want to “start the conversation.” Which is corporate PR code for: “We’re hedging our bets before the Senate hearings start.”
They’ve also pledged $10 million to expand their Economic Futures Program and host more symposia with economists and policymakers. In other words, expect more polished PDFs about the apocalypse.
What’s really happening here is brand positioning. Anthropic wants to be seen as the responsible AI company—the one thinking ahead, not just racing to release Claude 5. They’re framing themselves as the “grown-up in the room” while subtly lobbying for favorable regulation.
It’s smart, it’s self-serving, and it’s exactly how the AI industry whitewashes its disruption of virtually every industry.
The Real Subtext: “We Broke It, But We’ll Help You Fix It”
The post ends with a call for “proactive engagement between researchers, policymakers, and the AI industry.” Translation: we’d like a seat at the table when you start rewriting the economy.
It’s the perfect encapsulation of Silicon Valley logic:
- Disrupt everything.
- Profit massively.
- Offer thoughtful white papers on how to mitigate the disruption you caused.
Anthropic’s essay isn’t wrong—the economists they cite make sharp, important points. But it’s also a reminder that the AI companies framing the debate are the same ones steering the ship toward the iceberg.
Still, if the robots do come for our jobs, at least they’ll be polite enough to publish a policy paper about it first.