TECHnalysis Research Blog

June 3, 2024
Computex Chronicles Part 2: AMD Leaps into Copilot+ PCs and Outlines Infrastructure GPU Roadmap

By Bob O'Donnell

The second in this year’s series of Computex CEO speeches, and the show’s official opening keynote, was delivered by AMD CEO Dr. Lisa Su. AMD is in the unique position of being the primary competitor to Nvidia on GPUs (both for infrastructure-focused AI acceleration and in PCs) and to Intel on CPUs (both for servers and PCs), so their efforts are getting more attention than they ever have.

Given the company’s wide product portfolio, it’s probably not surprising that Dr. Su’s keynote covered a remarkably broad range of topics—and even a few that didn’t have to do with GenAI! Key to the news was a new CPU architecture called Zen 5, as well as a new NPU architecture called XDNA 2. What’s particularly interesting about XDNA 2 is that it supports a data type called Block Floating Point (Block FP16), which offers the speed of 8-bit integer performance with the accuracy of 16-bit floating point. According to AMD, it’s a new industry standard—meaning it’s something existing models can leverage—and AMD’s implementation is the first to be done in hardware.
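
To make the idea a little more concrete, here is a minimal, hypothetical sketch of block floating point quantization in Python. The block size and bit widths are illustrative assumptions, not AMD’s actual Block FP16 specification: a group of values shares a single exponent, and each value stores only a small integer mantissa, so hardware can move and multiply 8-bit quantities while retaining much of 16-bit floating point’s dynamic range.

# Illustrative block floating point (BFP) quantization sketch.
# Assumptions (not AMD's actual Block FP16 spec): blocks of 8 values
# share one exponent, and each value keeps an 8-bit signed mantissa.
import numpy as np

BLOCK_SIZE = 8
MANTISSA_BITS = 8  # signed mantissas: representable range is -128..127

def bfp_quantize(block: np.ndarray):
    """Quantize a block of floats to a shared exponent plus int8 mantissas."""
    max_abs = float(np.max(np.abs(block)))
    if max_abs == 0.0:
        return np.zeros_like(block, dtype=np.int8), 0
    # Choose the shared exponent so the largest value fits in the mantissa range.
    exponent = int(np.ceil(np.log2(max_abs / (2 ** (MANTISSA_BITS - 1) - 1))))
    mantissas = np.clip(np.round(block / 2.0 ** exponent),
                        -(2 ** (MANTISSA_BITS - 1)),
                        2 ** (MANTISSA_BITS - 1) - 1).astype(np.int8)
    return mantissas, exponent

def bfp_dequantize(mantissas: np.ndarray, exponent: int) -> np.ndarray:
    """Reconstruct approximate float values from mantissas and the shared exponent."""
    return mantissas.astype(np.float32) * 2.0 ** exponent

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(scale=0.5, size=BLOCK_SIZE).astype(np.float32)
    m, e = bfp_quantize(block)
    approx = bfp_dequantize(m, e)
    print("original: ", block)
    print("recovered:", approx)
    print("max abs error:", np.max(np.abs(block - approx)))

Because the exponent is amortized across the whole block, the storage and compute cost per value stays close to that of 8-bit integers, which is where the performance claim comes from.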

The Computex show has a long history of being a critical launching point for traditional PC components, and AMD started things off with their next-generation desktop PC parts—the Ryzen 9000 series—which don’t have a built-in NPU. What they do have, however, is the kind of performance that gamers, content creators and other DIY PC system builders are constantly in search of for traditional PC applications—and let’s not forget, that’s still important, even in the era of AI PCs.

Of course, AMD also had new offerings in the AI PC space—a new series of mobile-focused parts for the new Copilot+ PCs that Microsoft and its partners announced just a few weeks ago. AMD chose to brand them as the Ryzen AI 300 series to reflect the fact that they are the third generation of laptop chips built by AMD with an integrated NPU. A little-known fact is that AMD’s Ryzen 7040—announced in January of 2023—was the first x86 chip with built-in AI acceleration, and it was followed by the Ryzen 8040 at the end of last year.

The fact that a new chip was coming wasn’t a big surprise—AMD had even said so at the 8040 launch—but what was unexpected is how much new technology AMD has integrated into the AI 300 (which was codenamed Strix Point). It features the new Zen 5 CPU core, an upgraded GPU architecture they’re calling RDNA 3.5, and a new NPU built on the XDNA 2 architecture that offers an impressive 50 TOPS of performance.

What’s also surprising is how quickly laptops with Ryzen AI 300 chips are coming to market. Systems are expected in July of this year, just a few weeks after the first Qualcomm Snapdragon X-powered Copilot+ PCs are set to ship. One big challenge, however, is that the x86 CPU and AMD-specific NPU versions of the Copilot+ software won’t be ready when these AMD-powered PCs first ship. Apparently, Microsoft didn’t expect x86 vendors like AMD and Intel to be done so soon and prioritized their work for the Arm-based Qualcomm devices. As a result, these will be Copilot+ “Ready” systems, meaning they’ll need a software upgrade—likely in the early fall—to make them full-blown next-generation AI PCs.

Still, this vastly sped-up timeframe—which Intel is also widely expected to announce for their new chips at their keynote this week—has been incredibly interesting and impressive to watch. Early on in the development of AI PCs, the common thought was that Qualcomm would have roughly a 12- to 18-month lead over both AMD and Intel in developing a part that met Microsoft’s 40+ TOPS NPU performance spec. The strong competitive threat from Qualcomm, however, inspired the two PC semiconductor stalwarts to accelerate their schedules, and it looks like they’ve succeeded. It’s one of the many reasons why the AI PC market has already proven to be an exciting (and inspiring) development to watch.

On the datacenter side of things, AMD previewed both their latest 5th Generation Epyc CPUs (codenamed Turin) and their Instinct MI300 series GPU accelerators. As with the PC chips, AMD’s latest server CPU products are built around the new Zen 5 core architecture, with performance on certain AI workloads that is as much as 5x faster than Intel equivalents. For GPU accelerators, AMD announced the Instinct MI325, which offers twice the HBM3E memory of any card on the market. More importantly, as Nvidia did last night, AMD also unveiled an annual cadence of improvements for their GPU accelerator line and offered details through 2026. Next year’s MI350, which will be based on a new CDNA 4 GPU compute architecture, will leverage both the increased memory capacity and the new architecture to deliver an impressive 35x improvement versus current cards. For perspective, AMD believes that will give them a performance lead over Nvidia’s latest-generation products.

AMD is one of the few companies that has been able to gain any traction against Nvidia in large-scale AI acceleration, so any enhancements to this line of products are bound to be well-received by anyone looking for an Nvidia alternative—both large cloud computing providers and enterprise data centers.

Taken as a whole, the AMD story continues to advance and impress. It’s kind of amazing to see how far the company has come in the last 10 years and it’s clear they continue to be a driving force in the computing and semiconductor world.

Here’s a link to the original article: https://www.linkedin.com/pulse/computex-chronicles-part-2-amd-leaps-copilot-pcs-gpu-bob-o-donnell-rm6jc/

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.