AMD Unleashes Buzzword Barrage in Attempt to Out-AI Nvidia
AMD promises 10x performance and a 276x reduction in rack count, powered by buzzwords, benchmarks, and sheer desperation to dethrone Nvidia.

At AMD’s Advancing AI 2025 event, the company finally did it—they created a press release so densely packed with acronyms, adjectives, and hopeful claims that you can actually train a model on it. It’s less a product announcement and more a Mad Libs for corporate engineers who wish they were Nvidia.
Let’s start with the headline: “AMD Unveils Vision for an Open AI Ecosystem.” Translation: “Please love us, hyperscalers.” From there, it’s a who’s-who of Alphabet Soup: MI350, MI355X, ROCm 7, EPYC Venice CPUs, Vulcano NICs, Pollara NICs, and the ever-glorious “Mixture of Experts,” which sounds less like a model architecture and more like a failed startup reality show.
AMD’s central pitch is this: only they can deliver an “open, rack-scale AI infrastructure” using a combination of custom GPUs, CPUs, NICs, and the power of pure hope. Yes, their Instinct MI350X allegedly offers a 4x generational increase in compute and a 35x increase in inference performance, which is totally not made up if you squint at footnote seven, interpret FP4 benchmarks like sacred scrolls, and ignore the small print that says, “server manufacturers may vary configurations, yielding different results.”
In other words: “Trust us, this time it’s different.”
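For the curious, here is one hypothetical way a headline multiplier like 35x can decompose; the split below is illustrative back-of-envelope math, not AMD’s published accounting. Halving numeric precision roughly doubles theoretical throughput on modern GPU datapaths, so dropping from FP16 to FP4 banks a 4x gain before the new silicon does anything at all:

$$
\underbrace{4\times}_{\text{FP16}\,\rightarrow\,\text{FP4 datapath}} \;\times\; \underbrace{8.75\times}_{\text{everything else}} \;=\; 35\times
$$

Multiply a precision swap by a generation gap, and suddenly everyone is a 35x company.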
To prove their growing influence in AI, AMD trotted out an all-star lineup of partner cameos, each delivering the corporate equivalent of a thumbs-up emoji:
- Meta: “We use AMD for inference. Like… a lot. Seriously. Llama, llama, more llama.”
- OpenAI: “We totally use AMD. For real. Look, here’s Sam Altman. He said the word ‘holistic.’”
- Oracle Cloud: “We’re installing 131,072 GPUs, give or take a decimal. Try to keep up, Jensen.”
- Microsoft, Cohere, Red Hat, and HUMAIN: “We are very excited. Please cite us in your investor deck.”
And of course, to really drive home that “we too are changing the world,” AMD introduced Helios, their next-gen AI rack, which we’re told will deliver 10x performance (on Mixture of Experts inference, per the fine print) and possibly enlightenment. Coming soon to a datacenter near you, just as soon as it survives the appropriate thermal simulations, procurement hell, and driver updates.
Meanwhile, AMD’s open-source ROCm 7 stack will “dramatically improve developer experience,” which is tech PR for “we think this one might actually install without errors.”
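If you want to check whether the dream has shipped to your machine, a minimal smoke test looks something like the sketch below, assuming a ROCm build of PyTorch. (ROCm devices surface through PyTorch’s `cuda` API, which is its own kind of poetry.)

```python
# Smoke test for a ROCm build of PyTorch: does the stack see a GPU at all?
import torch

print(torch.__version__)          # a ROCm wheel reports something like "2.x.x+rocmX.Y"
print(torch.version.hip)          # HIP version string on ROCm builds; None on CUDA builds
print(torch.cuda.is_available())  # yes, "cuda": ROCm devices ride the CUDA API shim

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    # one matmul without a crash = "dramatically improved developer experience"
    print((x @ x).sum().item())
```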
All of this culminates in a grand vision: by 2030, AMD claims, you’ll be able to train a trillion-parameter model in a single rack using 95% less electricity, presumably located in a carbon-neutral utopia powered by unicorn farts. Compared to 2024, that’s a 276x reduction in racks, which frankly sounds less like a projection and more like an aspirational hallucination.
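The arithmetic behind the aspiration does at least check out as arithmetic. Take the 2024 baseline the claim implies (roughly 276 racks for one trillion-parameter training run; that baseline comes from AMD’s own framing, not independent measurement) and the two headline numbers are just each other in disguise:

$$
\frac{276\ \text{racks (2024)}}{1\ \text{rack (2030)}} = 276\times,
\qquad
95\%\ \text{less electricity} \;\Rightarrow\; \frac{1}{1 - 0.95} = 20\times\ \text{energy efficiency}
$$

Whether physics, fabs, and the 2030 power grid cooperate is left as an exercise for the reader.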