TECHnalysis Research Blog

April 8, 2019
Intel Helps Drive Data Center Advancements

By Bob O'Donnell

At last week’s Intel Data-Centric launch event, the company made a host of announcements focused on new products and technologies designed for the data center and the edge. Given that it’s Intel, it’s no surprise that a large percentage of those product launches focused on CPUs designed for servers—specifically, the second generation of the company’s Xeon Scalable CPUs, formerly codenamed “Cascade Lake.” However, as I’ll get to in a bit, the largest long-term impact is likely to come from something else entirely.

Similar to the first-generation launch of Xeon Scalable back in July of 2017, Intel focused on a very wide range of specific applications, workloads, and industries with these second-generation parts, highlighting the very specialized demands now facing both cloud service providers (CSPs) and enterprise data centers. In fact, the company offers over 50 different SKUs of Xeon Scalable CPUs for those different markets. It even added a dedicated new line of CPUs specifically focused on telecom networks and their needs for NFV (network function virtualization) and other compute-intensive tasks that are expected to be a critical part of 5G networks.

A key new feature of these second-generation Xeon Scalable CPUs is the addition of a capability called DL Boost, which is specifically designed to speed up Deep Learning and other AI-focused workloads. As the company pointed out, most AI inferencing is still done on CPUs. Intel is hoping to maintain that lead through the addition of new vector neural network instructions (VNNI) to the chip, as well as additional software optimizations it’s doing in conjunction with popular AI frameworks such as TensorFlow, Caffe, PyTorch, etc.
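To make the idea concrete: the core operation that VNNI accelerates is the INT8 multiply-accumulate at the heart of quantized inference, fusing what previously took several instructions into one. A minimal sketch of that pattern in plain Python (purely illustrative—the function name and values here are my own, not Intel's):

```python
def int8_dot_accumulate(weights, activations, acc=0):
    """Dot product of two int8 vectors accumulated into a 32-bit total,
    the inner loop of quantized (INT8) deep-learning inference.
    VNNI fuses many of these multiply-adds into a single instruction."""
    for w, a in zip(weights, activations):
        acc += w * a  # multiply-accumulate: the operation VNNI speeds up
    return acc

# Example: a tiny quantized "neuron" with int8 weights and activations
weights = [12, -3, 45, 7]
activations = [2, 9, -1, 4]
print(int8_dot_accumulate(weights, activations))  # 24 - 27 - 45 + 28 = -20
```

The performance win comes from doing this at full SIMD width on narrow integers rather than 32-bit floats, which is why frameworks like TensorFlow and PyTorch ship INT8-quantized inference paths.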

Despite all the CPU focus, however, the sleeper hit of the entire event, in my mind, was the release of Optane DC Persistent Memory, which works in conjunction with (and only with) the new Xeon Scalable CPUs. Based on a technology that Intel has been working on for 10 years and publicly discussing for about a year, Optane DC (short for Data Center) Persistent Memory is essentially a low-cost complement to traditional DRAM that allows companies to build servers with significantly more memory (and at a much lower cost) than would otherwise be possible. Available in 128, 256, and 512 GB modules (which fit into standard DDR4 DIMM slots), this new memory type adds an entirely new layer of storage and access hierarchy to existing server architectures by offering near DRAM-like speeds but with the larger capacities, lower costs, and persistence more similar to SSDs and other types of traditional storage.

In real-world terms, this means that memory-dependent large-scale datacenter applications, like AI, in-memory databases, content delivery networks (CDNs), large SAP HANA installations, and more, can see significant performance gains. In fact, at several different sessions with Intel customers who were early users of the technology, there was a tangible sense of excitement surrounding this new memory type and the benefits it provides. Quite a few discussed using Optane Persistent Memory with some of their toughest workloads and being pleasantly surprised with the outcome. As they pointed out, many of the most challenging AI workloads are more memory-starved than compute-starved, so opening up 6 TB of active memory in a two-socket server can make a very noticeable (and otherwise unattainable) impact on performance.
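The 6 TB figure is consistent with fully populating both sockets with the largest modules—the per-socket slot count below is my assumption about the configuration, not something stated in the piece:

```python
# Hedged back-of-envelope check of the 6 TB two-socket figure.
sockets = 2
optane_modules_per_socket = 6   # assumed DIMM slots used for Optane per CPU
module_gb = 512                 # largest Optane DC Persistent Memory module

total_tb = sockets * optane_modules_per_socket * module_gb / 1024
print(total_tb)  # 6.0 TB of persistent memory, before counting any DRAM
```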

Optane Persistent Memory is also the first hardware-encrypted memory on the market, thanks to onboard intelligence Intel designed for the device. Intel provides two modes for the Persistent Memory to operate: the first, called Memory Mode, is a compatibility mode that lets all existing software run without any modification, and the second, called App Direct Mode, provides greater performance to applications that are adjusted to specifically work with the new memory type.
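The practical difference between the two modes is who does the work: Memory Mode hides the new tier behind DRAM acting as a cache, while App Direct Mode lets software memory-map a persistent region and issue loads and stores directly. A minimal sketch of that App Direct idea, using Python's mmap—on real hardware the file would live on a DAX-enabled filesystem backed by persistent memory (something like /mnt/pmem0), but a temporary file stands in here so the sketch is runnable anywhere:

```python
import mmap
import os
import tempfile

# Stand-in for a file on a persistent-memory (DAX) filesystem.
path = os.path.join(tempfile.gettempdir(), "pmem_stand_in.bin")
with open(path, "wb") as f:
    f.truncate(4096)  # size the region like a small persistent heap

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)  # direct load/store access, no read()/write()
    region[0:5] = b"hello"                # a "store" straight into the mapped region
    region.flush()                        # force durability (msync under the hood)
    data = bytes(region[0:5])             # a "load" back out of the same bytes
    region.close()

print(data)  # b'hello' survives even if the process dies after the flush
```

This is what "adjusted to specifically work with the new memory type" means in practice: applications trade the convenience of Memory Mode's transparency for explicit control over what is persistent and when it is flushed.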

In addition to the Xeon Scalable and Optane announcements, Intel also discussed new intelligent Ethernet controllers designed for data center applications, and some of their first 10nm chips: the new Agilex line of FPGAs (Field Programmable Gate Arrays—essentially reprogrammable chips). Though they are typically only used for a limited set of applications, FPGAs actually have a great deal of potential as accelerators for AI and network-focused applications, among others, and it will be interesting to see how Intel continues to flesh out their wider array of non-CPU accelerators.

All told, it’s clear that Intel is now thinking about more comprehensive sets of solutions for data centers, CSPs, and other institutions with high-performance computing demands. It is a bit surprising that it took the company as long as it did to start telling these more all-encompassing stories, but there’s little doubt that it will be a key focus for them over the next several years. Yes, CPUs will continue to be important, but the reinvention of computing, memory, and storage architectures will undoubtedly yield some of the most interesting developments to come.

Here's a link to the column: https://techpinions.com/intel-helps-drive-data-center-advancements/54973

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
