Technalysis Research
 
USAToday Columns
TECHnalysis Research president Bob O'Donnell writes a regular column for the Tech section of USAToday.com approximately once every two weeks, and those columns are posted here. These columns are also often reposted on other sites, including MSN and other publishing partners of USAToday.


May 31, 2017
Apple is next up to strut its artificial intelligence ambitions

By Bob O'Donnell

FOSTER CITY, Calif. — We’re in the heart of the tech conference season, in which giant players including Microsoft, Google, Facebook and, next, Apple lay out their visions for where their companies—as well as the tech industry as a whole—are headed.

Looking at what’s been discussed to this point (and speculating on what Apple will announce at its Worldwide Developers Conference on Monday), it’s safe to say that all of these organizations are keenly focused on different types of artificial intelligence, or AI. What this means is that each wants to create unique experiences that leverage both new types of computing components and software algorithms to automatically generate useful information about the world around us. In other words, they want to use real-world data in clever ways to enable cool stuff.

You may hear scary-sounding terms like convolutional neural networks, machine learning, analytics, and deep learning associated with AI, but fundamentally, the concept behind all of them is to organize large amounts of data into various structures and patterns. From there, work is done to learn from the combined data, and then actions of various types—such as being able to better interpret the importance of new incoming data—can be applied.
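That basic loop—organize labeled data, learn a structure from it, then use that structure to interpret new incoming data—can be illustrated with a deliberately tiny sketch. Everything here (the data, the names, the nearest-centroid approach) is a hypothetical stand-in for illustration, not any particular company's algorithm:

```python
# Toy illustration of the machine-learning loop described above:
# organize labeled data, "learn" a simple structure (one centroid per
# class), then use it to interpret new incoming data points.

def centroid(points):
    """Average of a list of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(labeled_data):
    """Learn one centroid per class from (label, point) pairs."""
    by_label = {}
    for label, point in labeled_data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, point):
    """Assign a new point to the class with the nearest centroid."""
    def dist(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical sensor readings: two clusters of 2-D measurements.
data = [("quiet", (0.1, 0.2)), ("quiet", (0.2, 0.1)),
        ("loud",  (0.9, 0.8)), ("loud",  (0.8, 0.9))]
model = train(data)
print(classify(model, (0.15, 0.15)))  # → quiet
print(classify(model, (0.85, 0.85)))  # → loud
```

Real systems replace the centroid step with far richer structures—the neural networks mentioned above—but the train-then-interpret pattern is the same.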

While some of these computing principles have been around for a long time, what’s fundamentally new about the modern type of AI being pursued by these companies is its extensive use of real-world data generated by sensors—such as still and moving images, audio, location, motion, etc.—and the speed at which the calculations on the data are occurring.

When done properly, the net result of these computing efforts is a nearly magical experience where we can have a smarter, more informed view of the world around us. At Google’s recent I/O event, for example, the company debuted its new Lens capability for Google Assistant, which can provide information about the objects and places within your view. In practical terms, Lens allows you to point your smartphone camera at something and have information about the objects in view appear overlaid on the phone screen. Essentially, it’s a form of augmented reality that I expect we will see other major platform vendors provide soon (hint: Apple).

Behind the scenes, however, the effort to make something like Lens work involves an enormous amount of technology, including reading the live video input from the camera (a type of sensor, by the way), applying AI-enabled computer vision algorithms to both recognize the objects and their relative location, combining that with location details from the phone’s GPS and/or WiFi signals, looking up relevant information on the objects, and then combining all of that onto the phone’s display.
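The sequence of steps just described can be sketched as a simple pipeline. Every function name and data shape below is an illustrative stand-in, not the actual Google Lens implementation—the point is only how the stages (recognition, location, lookup, overlay) chain together:

```python
# Hypothetical sketch of a Lens-style pipeline. Each stage is a stub
# standing in for the real technology named in the text.

def recognize_objects(frame):
    """Stand-in for AI-enabled computer vision on a camera frame."""
    return frame["detected"]  # a real system would run a neural network here

def resolve_location(gps_fix):
    """Stand-in for positioning from GPS and/or WiFi signals."""
    return gps_fix["place"]

def lookup_info(obj, place):
    """Stand-in for looking up relevant information about an object."""
    return f"{obj} near {place}"

def annotate_frame(frame, gps_fix):
    """Combine recognition, location, and lookup into overlay labels."""
    objects = recognize_objects(frame)
    place = resolve_location(gps_fix)
    return [lookup_info(obj, place) for obj in objects]

# Simulated camera frame and location fix.
frame = {"detected": ["storefront", "street sign"]}
fix = {"place": "Foster City"}
print(annotate_frame(frame, fix))
# → ['storefront near Foster City', 'street sign near Foster City']
```

In a shipping product each stub becomes a substantial subsystem, but the data flow—sensor input in, annotated display out—is what defines the experience.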

Of course, there are thousands of other examples of potential AI-driven experiences, each of which requires different combinations of computing components and software. In fact, in this new computing era, we’re starting to see a remarkably diverse set of technologies being developed and used to enable these new activities.

Here are some of the companies involved.

On the chip side, for example, instead of just thinking about CPUs in the devices driving these experiences, we’ve seen companies like Nvidia successfully demonstrate how GPUs, traditionally used only for computer graphics, can be powerful new AI computing tools. Other GPU providers, such as AMD and chip intellectual property-licensing firm ARM, have also started to introduce new offerings focused on artificial intelligence-based applications.

Not all artificial intelligence-based applications are primarily suited for GPUs, however. Companies like Qualcomm are also starting to demonstrate how components like DSPs (digital signal processors) can be very power-efficient tools for doing AI. Not to be outdone, chip powerhouse Intel now offers an entire range of different chip architectures, including dedicated AI chips from recently purchased Nervana, to FPGAs (field programmable gate arrays) and, yes, CPUs for doing different types of AI work.

On top of that, Google has introduced the second generation of its own TPU (Tensor Processing Unit), a chip specifically designed to speed up AI work. And Apple is working on a Neural Engine chip specifically for doing AI on future-generation iPhones and other devices, according to Bloomberg.

Ironically, in the midst of all this new technology, one of the other intriguing aspects of AI-driven applications is that they’re pushing our traditional computing devices into the background. Sure, we’re still often using things like smartphones to enable some of these experiences, but the ultimate goal of these advanced AI computing architectures is to make our technology invisible. Voice-based computing and digital assistants are a step in this direction, but we’ll eventually see (hopefully!) small, discreet head-mounted displays and other new methods of interacting with a computing-enhanced and more contextually aware view of the real world around us.

Though some have voiced concerns about the rapid encroachment of computer-driven artificial intelligence, it’s actually leading us to a very different type of computing future that promises to be less intrusive and more useful than what we’re currently experiencing. And that’s something to be excited about.

Here’s a link to the original column: https://www.usatoday.com/story/tech/columnist/2017/05/31/artificial-intelligence-apple-google-microsoft-facebook/102328920/

USA TODAY columnist Bob O'Donnell is the president and chief analyst of TECHnalysis Research, a market research and consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. His clients are major technology firms including Microsoft, HP, Dell, and Qualcomm. You can follow him on Twitter @bobodtech.