TECHnalysis Research
 
Previous Blogs

September 24, 2025
Qualcomm Focuses on Agentic AI with Latest Chips

September 10, 2025
Arm Lumex Platform Lifts Smartphone AI

August 26, 2025
Nvidia Brings Blackwell to Robotics

July 17, 2025
AWS Puts Agent-Focused Platform Center Stage

July 9, 2025
Samsung’s Latest Foldables Stretch Limits

June 24, 2025
HPE’s GreenLake Intelligence Brings Agentic AI to IT Operations

June 18, 2025
AWS Enhances Security Offerings

June 12, 2025
AMD Drives System Level AI Advances

June 10, 2025
Cisco Highlights Promise and Potential of On-Prem Agents and AI

June 4, 2025
Arm Brings Compute Platform Designs to Automotive Market

May 20, 2025
Dell Showcases Silicon Diversity in AI Server and PC

May 19, 2025
Microsoft Brings AI Agents to Life

May 14, 2025
Google Ups Privacy and Intelligence Ante with Latest Android Updates

April 30, 2025
Intel Pushes Foundry Business Forward

April 29, 2025
Chip Design Hits AI Crossover Point

April 24, 2025
Adobe Broadens Firefly’s Creative AI Reach

April 9, 2025
Google Sets the Stage for Hybrid AI with Cloud Next Announcements

April 1, 2025
New Intel CEO Lays out Company Vision

March 21, 2025
Nvidia Positions Itself as AI Infrastructure Provider

March 13, 2025
Enterprise AI Will Go Nowhere Without Training

February 18, 2025
The Rapid Rise of On-Device AI

February 12, 2025
Adobe Reimagines Generative Video with Latest Firefly

January 22, 2025
Samsung Cracks the AI Puzzle with Galaxy S25

January 8, 2025
Nvidia Brings GenAI to the Physical World with Cosmos

TECHnalysis Research Blogs
TECHnalysis Research president Bob O'Donnell publishes commentary on current tech industry trends every week at LinkedIn.com in the TECHnalysis Research Insights Newsletter, and those blog entries are reposted here as well. Those columns are also reprinted on Techspot and SeekingAlpha.

He also writes a regular column in the Tech section of USAToday.com and those columns are posted here. Some of the USAToday columns are also published on partner sites, such as MSN.

He also writes a 5G-focused column for Forbes that can be found here and that is archived here.

He also occasionally writes guest columns in various publications, including RCR Wireless, Fast Company, and engadget. Those columns are reprinted here.

October 9, 2025
Intel’s Latest Chips Push Innovation Forward

By Bob O'Donnell

Given some of the challenges Intel has faced recently, there’s been even more interest than usual in the company’s next-generation chips: codenamed Panther Lake for PCs and Clearwater Forest for servers. Not only are they set to be the first chips manufactured with Intel Foundry’s latest 18A process, but they also arrive at a time when much of the tech industry’s (and even the semiconductor industry’s) attention has been focused away from Intel and onto companies like Nvidia and AMD (because of their strength in GPUs for AI acceleration). That, in turn, has led to questions about whether Intel can regain its status as a critical player in the chip industry.

Based on the specs and early performance numbers the company just unveiled for the two new chip families, Intel made it emphatically clear that it and its products (and manufacturing technology) remain a force to be reckoned with. The new Panther Lake mobile SOC (system on chip), officially called Core Ultra 3, in particular builds on the architectural, technological, and power-efficiency enhancements the company first made with its most recent Core Ultra 2 processors (codenamed Lunar Lake) and takes those improvements even further. At the same time, Intel addressed some of the structural limitations of Lunar Lake, including limited CPU and GPU configuration choices and the fixed amount of on-package system memory, so its PC OEM customers now have much more flexibility in designing and configuring systems powered by Core Ultra 3. Plus, based on initial numbers provided by Intel, the new series carries over many of the performance improvements from the Core Ultra 200 series chips (codenamed Arrow Lake). Finally, it’s also interesting to note that Intel is building the software tools necessary to make Core Ultra 3 an important new option for the budding robotics and edge market, so it’s good to see the company is looking ahead as well.

The new Clearwater Forest server part, which Intel is calling Xeon 6+, also builds on previous designs and adds key architectural enhancements. Notably, both the Xeon 6+ and the Core Ultra 3 take advantage of the company’s latest efficiency-oriented E-core design, codenamed Darkmont. This design includes large increases in cache sizes and numerous in-depth architectural improvements, both of which contribute to a 17% increase in the number of instructions per clock (IPC) it can handle. In the case of the Xeon 6+, which Intel said will come to market in mid-2026, the chip includes only E-cores, but there are up to 288 of them per socket, twice as many as in the previous Intel Xeon 6. (As with previous-generation Xeon product families, Intel will likely introduce a new series of server processors with the latest performance-oriented P-cores, but nothing has been announced just yet.)
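
To put those two claims together, here’s a minimal back-of-the-envelope sketch in Python. It assumes equal clock speeds and perfectly linear scaling across cores, neither of which real workloads will achieve, so treat the result as an upper bound rather than a benchmark.

```python
# Rough upper-bound math on the Xeon 6+ changes described above: twice the
# E-cores per socket plus a ~17% IPC gain from the Darkmont design.
# Assumes equal clocks and ideal scaling -- simplifications, not Intel's numbers.

prev_cores = 144      # E-cores per socket in the prior Xeon 6 (half of 288, per the column)
new_cores = 288       # E-cores per socket in Xeon 6+
ipc_uplift = 1.17     # ~17% more instructions per clock

relative_throughput = (new_cores / prev_cores) * ipc_uplift
print(f"Theoretical per-socket throughput uplift: {relative_throughput:.2f}x")  # ~2.34x
```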

On the Core Ultra 3, which is scheduled to ship later this year, one of the biggest improvements is the availability of three different base configurations with different numbers of CPU and GPU cores. The baseline model features 8 CPU cores (4 E-cores and 4 P-cores), while the other two offer 16 cores (8 E-cores, 4 additional lower-power E-cores, and 4 P-cores). One of the two 16-core models includes 4 of the newly enhanced Xe3 GPU cores (offering up to 50% better performance than the previous Xe2 GPU, according to Intel) and is expected to be paired with an external GPU (from Nvidia or AMD) for things like gaming PCs. The other includes 12 Xe3 GPU cores for more all-around graphics performance in applications like content creation tools. The baseline 8-core configuration incorporates 4 Xe3 GPU cores.
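
For reference, here’s a small Python summary of the three base configurations as described above. The labels and field names are illustrative placeholders, not Intel’s SKU names, and the core breakdowns simply mirror the counts cited in this column.

```python
# Core Ultra 3 (Panther Lake) base configurations, as described in the column.
# Labels are illustrative placeholders, not official Intel SKU names.
core_ultra_3_configs = {
    "8-core baseline":       {"p_cores": 4, "e_cores": 4, "lp_e_cores": 0, "xe3_gpu_cores": 4},
    "16-core, discrete GPU": {"p_cores": 4, "e_cores": 8, "lp_e_cores": 4, "xe3_gpu_cores": 4},
    "16-core, 12 Xe3":       {"p_cores": 4, "e_cores": 8, "lp_e_cores": 4, "xe3_gpu_cores": 12},
}

for label, cfg in core_ultra_3_configs.items():
    cpu_total = cfg["p_cores"] + cfg["e_cores"] + cfg["lp_e_cores"]
    print(f"{label}: {cpu_total} CPU cores, {cfg['xe3_gpu_cores']} Xe3 GPU cores")
```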

All the new Core Ultra 3 chips also include an enhanced NPU (NPU5) for AI acceleration tasks. At 50 TOPS, the new NPU offers only a modest increase over Intel’s previous 48 TOPS design in Core Ultra 2, but the company emphasized that the new design takes up significantly less die area and is more power efficient. Key to better AI performance is faster access to larger amounts of memory, and Intel addressed both of those issues in Core Ultra 3 as well. First, the company moved memory off the chip package (where it had been integrated in Lunar Lake/Core Ultra 2) and now allows OEMs to put up to either 128 GB of DDR5 SO-DIMMs or 96 GB of LPDDR5 memory in their designs. In addition, the company improved and tiered the memory speeds, ranging from 6400 MT/s (megatransfers per second) on the low-end configuration with SO-DIMMs up to 9600 MT/s with LPDDR5 on the 16-CPU-core/12-GPU-core option.
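
As a rough illustration of what those transfer rates mean, the sketch below converts them into theoretical peak bandwidth. The 128-bit memory interface width is my assumption for the calculation, not a figure from Intel, so the absolute numbers are indicative only.

```python
# Theoretical peak memory bandwidth = transfer rate (MT/s) x bus width (bytes).
# A 128-bit (16-byte) interface is assumed here purely for illustration.
ASSUMED_BUS_WIDTH_BYTES = 128 // 8

for label, mt_per_s in [("DDR5 SO-DIMM config", 6_400), ("Top LPDDR5 config", 9_600)]:
    peak_gb_s = mt_per_s * 1_000_000 * ASSUMED_BUS_WIDTH_BYTES / 1e9
    print(f"{label}: {mt_per_s} MT/s -> ~{peak_gb_s:.1f} GB/s theoretical peak")
# DDR5 at 6400 MT/s -> ~102.4 GB/s; LPDDR5 at 9600 MT/s -> ~153.6 GB/s
```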

Finally, on the manufacturing front, both the Core Ultra 3 and the Xeon 6+ incorporate several unique Intel Foundry capabilities. First, as mentioned, each of them includes elements (specifically the CPU cores) built on Intel’s own 18A process technology. Not only are the individual transistors smaller on the 18A process than on previous manufacturing nodes (think of the company’s previous lead process technology, Intel 3, as essentially 30A), but the process also incorporates two key technologies, RibbonFET and PowerVia, that allow the new chips to be more efficient and performant than previous designs. RibbonFET is Intel’s version of gate-all-around technology, which improves control of current flow versus previous FinFET designs, and PowerVia moves power delivery to the backside of the die, enabling more power-efficient designs.

To be clear, the Core Ultra 3 and Xeon 6+ are not built entirely on 18A; only portions of them are. In fact, some of the elements in both chips are still built by third-party foundries. But equally important to the new process technology is that both chips benefit from Intel Foundry’s leading chip-packaging technologies. These packaging techniques enable complex chiplet designs that incorporate multiple elements (such as the CPU, GPU, and base tiles) to be put together in a single SOC package. In particular, these two chips leverage the company’s Foveros chip-stacking technology as well as high-speed EMIB connections. The key takeaway is that these manufacturing technologies play a key part in driving the kind of performance and efficiency improvements seen in Intel’s latest chips and provide a solid example of the kind of manufacturing advantages Intel Foundry could offer other third-party chipmakers.

Ultimately, what these announcements demonstrate is that Intel is not only able to execute on important technology advancements in both chip design and manufacturing, but that the company is listening to the demands of its customers and making the kinds of adjustments it needs to make to stay competitive. Despite their maturity, both the PC market and the server market continue to see new entrants, and for Intel to stay in any kind of leadership role, it needs to deliver the kind of capabilities that both Core Ultra 3 and Xeon 6+ look to offer.

Here's a link to the original column: https://www.linkedin.com/pulse/intels-latest-chips-push-innovation-forward-bob-o-donnell-83nhc

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.
