
TECHnalysis Research Blog

April 28, 2021
Arm Brings New Compute Options from the Cloud to the Edge

By Bob O'Donnell

If you had told me even just a few years back that semiconductor IP innovator Arm would be creating designs that could compete in high performance computing (HPC) and other supremely demanding applications, I probably wouldn’t have believed you. After all, Arm is primarily known for the power efficiency of its designs—hence its enormous success in smartphones and other battery-powered devices. Sure, the performance of Arm-powered smartphone chips, such as Apple’s A series for iPhones and Qualcomm’s Snapdragon line for Android devices, has been improving dramatically over the past few years, but there’s a big gap between smartphones and HPC.

However, things started to change when Arm unveiled its Neoverse N1 platform for cloud and datacenter-focused applications back in 2019. With the debut of the N1 CPU design, the company signaled strongly to its partners and the computing world overall that it was serious about making the move into the server market.

The effort notched a number of notable (and ongoing) design wins, including AWS’ Graviton processor, which the cloud provider is now using (along with its successor, the Graviton2) for an increasingly broad range of workloads. Still, much of the early effort and success focused on the power efficiency of the Arm-based designs for cloud data centers—an important, but often overlooked, factor in those environments.

Earlier this week, however, Arm further extended its Neoverse family with the launch of the V1 platform, which is specifically targeted at high-performance applications. The company says the V1 offers an impressive 50% improvement in instructions per clock (IPC), a 2x increase in vector performance, and a 4x jump in machine learning performance versus the original N1. Part of the way Arm is achieving these gains is through the addition of Scalable Vector Extensions (SVE), which let CPU designs incorporate custom instructions that can handle data blocks of varying lengths. More importantly, SVE allows code written for one vector length to run on hardware that may have a different vector length, thereby improving the flexibility and portability of the software involved.

In addition to the V1, Arm also unveiled the second-generation N2, which is based on the same performance-per-watt-focused design as the original N1, but with a large range of microarchitectural enhancements. Notably, the N2 is also the first Arm CPU design to leverage the company’s v9 architecture (see “Arm Lays Out Vision for Next Decade of Chips” for more). And because of that new underlying architecture, it adds support for SVE2, the second generation of Arm’s Scalable Vector Extensions.

Practically speaking, all of this translates into a claimed 40% improvement in performance versus the original N1, while maintaining the N1’s lower power consumption and thermals. Equally important, it provides proof of Arm’s intention to continue building out and advancing its range of server-focused chip designs. With the two new architectures, Ampere and other Arm partners can choose to create datacenter-focused SoCs that prioritize either performance (at the cost of more power) or energy efficiency, while still delivering improved speeds.

Another critical but easy-to-overlook part of Arm’s announcements is the debut of its CoreLink CMN-700 Coherent Mesh Network for use in conjunction with these new CPUs. Built to enable more flexible chiplet-style designs, the CMN-700 provides the critical high-speed data connections between components on an SoC. For example, semiconductor makers who want more options for connecting custom accelerators, as well as advanced forms of memory and storage, can leverage Arm’s mesh network technology to do so. The CoreLink CMN-700 now includes support for the CXL (Compute Express Link) and CCIX (Cache Coherent Interconnect for Accelerators) standards, as well as gateways that can link these buses to Arm’s own AMBA 5 CPU interconnect bus. The bottom line is a significantly more flexible range of design options that will let chip designers more easily piece together specialized parts using Lego block-like chunks of functionality.

Practically speaking, these capabilities allow companies like Marvell to use the new Arm technologies in products like its next-generation DPUs (Data Processing Units) for 5G and other high-speed networking applications, and Oracle to use them in its Oracle Cloud infrastructure.

Because of the nature of Arm’s design process and the role the company plays in the industry, many of these new innovations won’t appear in real-world applications until 2022 and beyond. Still, it’s good to see the company pushing the boundaries of cloud and edge computing performance beyond its original goals, and it will be interesting to see where and how far its partners take these capabilities.


Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
