TECHnalysis Research Blog

March 21, 2017
Chip Magic

By Bob O'Donnell

Sometimes, it just takes a challenge.

After years of predictable and arguably modest advances, we’re beginning to witness an explosion of exciting and important new developments in the sometimes obscure world of semiconductors—commonly known as chips.

Thanks both to a range of demanding new applications, such as Artificial Intelligence (AI) and Natural Language Processing (NLP), and to a perceived threat to Moore’s Law (which has “guided” the semiconductor industry for over 50 years to a state of staggering capability and complexity), we’re starting to see an impressive range of new output from today’s silicon designers.
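As a back-of-the-envelope illustration of why that 50-year run matters, here is a minimal sketch of the compounding that Moore’s Law describes. The two-year doubling period is the commonly cited simplification, not a figure from this column:

```python
# Moore's Law is often paraphrased as transistor counts doubling
# roughly every two years. Compounded over decades, even that
# simple rule implies staggering growth.

def moore_factor(years, doubling_period_years=2):
    """Growth factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# 50 years of two-year doublings: 2**25, roughly a
# 33-million-fold increase in transistor count.
fifty_year_growth = moore_factor(50)
```

Real process scaling has never been this smooth, of course; the point is only the scale of compounding the industry has come to depend on.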

Entirely new chip designs, architectures and capabilities are coming from a wide array of key component players across the tech industry, including Intel, AMD, nVidia, Qualcomm, Micron and ARM, as well as internal efforts from companies like Apple, Samsung, Huawei, Google and Microsoft.

It’s a digital revival that many thought would never come. In fact, just a few years ago, many were predicting the death, or at least a serious weakening, of most major semiconductor players. Growth in many major hardware markets had started to slow, and there was a sense that improvements in semiconductor performance were reaching a point of diminishing returns, particularly in CPUs (central processing units), the most well-known type of chip.

The problem is, most people didn’t realize that hardware architectures were evolving and that many other components could take on tasks that were previously limited to CPUs. In addition, the overall system design of devices was being re-evaluated, with a particular focus on how to address bottlenecks between different components.

Today, the result is an entirely fresh perspective on how to design products and tackle challenging new applications through multi-part hybrid designs. These new designs leverage a variety of different types of semiconductor computing elements, including CPUs, GPUs (graphics processing units), FPGAs (field programmable gate arrays), DSPs (digital signal processors) and other specialized “accelerators” that are optimized to do specific tasks well. Not only are these new combinations proving to be powerful, we’re also starting to see important new improvements within the elements themselves.

For example, even in the traditional CPU world, AMD’s new Ryzen line underwent significant architectural design changes, resulting in large speed improvements over the company’s previous chips. In fact, they’re now back in direct performance competition with Intel—a position AMD has not been in for over a decade. AMD started with the enthusiast-focused R7 line of desktop chips, but just announced the sub-$300 R5, which will be available for use in mainstream desktop and all-in-one PCs starting in April.

nVidia has done a very impressive job of showing how much more than graphics its GPUs can do. From work on deep neural networks in data centers, through autonomous driving in cars, the unique ability of GPUs to perform enormous numbers of relatively simple calculations simultaneously is making them essential to a number of important new applications. One of nVidia’s latest developments is the Jetson TX2 board, which leverages one of their GPU cores, but is focused on doing data analysis and AI in embedded devices, such as robots, medical equipment, drones and more.
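To make the “enormous numbers of relatively simple calculations” point concrete, here is a purely illustrative Python sketch of SAXPY, a classic data-parallel kernel of the kind GPUs excel at. This is plain Python, not GPU code; on real hardware, each independent output element would be handed to a separate GPU thread:

```python
# Illustrative only -- plain Python, not actual GPU code.
# SAXPY (a*x + y) is a classic data-parallel kernel: every output
# element is independent of the others, so a GPU can assign each
# one to a separate core and compute them all simultaneously.

def saxpy(a, xs, ys):
    """Elementwise a*x + y over two equal-length vectors."""
    return [a * x + y for x, y in zip(xs, ys)]

# Three independent element computations; a GPU would run
# thousands of these at once.
out = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```

Neural-network training and inference are dominated by exactly this kind of independent, repetitive arithmetic, which is why GPUs have become central to AI workloads.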

Not to be outdone, Intel, in conjunction with Micron, has developed an entirely new memory/storage technology called 3D XPoint that works like a combination of DRAM—the working memory in devices—and flash storage, such as SSDs. Intel’s commercialized version of the technology, which took over 10 years to develop, is called Optane and will appear first in storage devices for data centers. What’s unique about Optane is that it addresses a performance bottleneck between memory and storage found in nearly all computing devices, enabling performance advances for certain applications that go way beyond what a faster CPU could deliver.

Qualcomm has proven to be very adept at combining multiple elements, including CPUs, GPUs, DSPs, modems and more into sophisticated SoCs (systems on chip), such as the new Snapdragon 835 chip. While most of its work has been focused on smartphones to date, the capabilities of its multi-element designs make them well-suited for many other devices—including autonomous cars—as well as some of the most demanding new applications, such as AI.

The in-house efforts of Apple, Samsung, Huawei—and to some degree Microsoft and Google—are also focused on these SoC designs. Each hopes to translate the unique characteristics built into its chips into distinct features and functions that can be incorporated into future devices.

Finally, the company that’s enabling many of these capabilities is ARM, the UK-based chip design house whose chip architectures (sold in the form of intellectual property, or IP) are at the heart of many (though not all) of the previously listed companies’ offerings. In fact, ARM just announced that over 100 billion chips based on its designs have shipped since the company was founded in 1990, with half of those coming in the last 4 years. The company’s latest advance is a new architecture called DynamIQ that, for the first time, allows multiple different types and sizes of computing elements, or cores, to be combined inside one of its Cortex-A chip designs. The real-world results include up to a 50x boost in AI performance and a wide range of multifunction chip designs that can be tailored to specific applications—in other words, the right kind of chips for the right kind of devices.
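The scheduling idea behind mixing core types in one cluster can be sketched in a few lines. This is a hypothetical illustration of the general big/little concept, not ARM’s actual API; the function name and the 0.7 threshold are invented for the example:

```python
# Hypothetical sketch of heterogeneous ("big.LITTLE"-style) core
# clusters: demanding tasks go to large, fast cores; light
# background tasks go to small, power-efficient cores.
# Names and the 0.7 threshold are illustrative, not ARM's API.

def pick_core(task_load, big_threshold=0.7):
    """Route a task (load normalized to [0, 1]) to a core type."""
    return "big" if task_load >= big_threshold else "LITTLE"

# A heavy game thread, a background sync, and a video decode:
assignments = [pick_core(load) for load in (0.9, 0.2, 0.75)]
```

What DynamIQ adds over earlier designs is the ability to combine more varied mixes of core types and sizes within a single cluster, so the routing options become richer than a simple two-way split.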

The net result of all these developments is an extremely vibrant semiconductor market with a much brighter future than was commonly expected just a few years ago. Even better, this new range of chips portends an intriguing new array of devices and services that can take advantage of these key advancements in what will be exciting and unexpected ways. It’s bound to be magical.

Here's a link to the column: https://techpinions.com/chip-magic/49188

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
