TECHnalysis Research Blogs
TECHnalysis Research president Bob O'Donnell publishes commentary on current tech industry trends every Tuesday on LinkedIn.com in the TECHnalysis Research Insights Newsletter; those blog entries are reposted here as well. In addition, those columns are reprinted on Techspot and SeekingAlpha.

He also writes a regular column in the Tech section of USAToday.com; those columns are posted here, and some are republished on partner sites, such as MSN.

In addition, he writes a 5G-focused column for Forbes that can be found here and is archived here.

Finally, he occasionally contributes guest columns to various publications, including Fast Company and engadget. Those columns are reprinted here.

July 21, 2021
Amazon Drives Ambient Computing Forward with Alexa Enhancements

By Bob O'Donnell

The idea of having computing intelligence invisibly available all around us has been a part of science fiction for decades now. It’s also something that some people thought we could instantly bring to real life when the first smart speakers, notably Amazon’s Alexa-equipped Echo, debuted nearly seven years ago.

But it turns out it’s hard, really hard, to enable ambient computing capabilities that live up to some of those futuristic visions. Thankfully, efforts to expand beyond simple tasks, such as playing music, setting timers, or answering random factual questions, have continued apace. At this year’s third annual Alexa Live event, Amazon’s developer-focused ambient computing conference, the company debuted the largest range of new capabilities it has ever added and, in the process, highlighted how impressively far this burgeoning category has progressed.

Since shortly after its debut, Amazon has allowed developers to extend the capabilities of its devices and the Alexa digital assistant through what the company calls Skills, which are essentially audio applets that can be triggered by calling out certain keywords. The concept has clearly taken off, as the company now cites more than 130,000 Alexa skills offered by over 900,000 registered developers (some of whom are also involved with the hundreds of non-Amazon branded devices with Alexa built in).
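Conceptually, a skill is a handler registered against invocation keywords. The toy dispatcher below is a minimal sketch of that idea, not Amazon's actual Alexa Skills Kit API; the skill name and responses are purely illustrative.

```python
# Conceptual sketch of keyword-triggered skills -- NOT the real Alexa Skills Kit.
from typing import Callable, Dict


class SkillRegistry:
    """Maps invocation keywords to skill handlers, loosely mimicking how an
    utterance like 'Alexa, open daily horoscope' triggers a specific skill."""

    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[str], str]] = {}

    def register(self, invocation: str, handler: Callable[[str], str]) -> None:
        # Invocation keywords are matched case-insensitively.
        self._skills[invocation.lower()] = handler

    def handle(self, utterance: str) -> str:
        text = utterance.lower()
        for invocation, handler in self._skills.items():
            if invocation in text:
                return handler(text)
        return "Sorry, I don't know a skill for that."


registry = SkillRegistry()
registry.register("daily horoscope", lambda _: "Here is your horoscope...")
print(registry.handle("Alexa, open daily horoscope"))
```

The key limitation this model shares with the original Skills design is that the user must remember the exact invocation phrase, which is the problem NFI (discussed below in the article) was created to address.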

At this year’s event, the range of new capabilities that Amazon is bringing to its conversational AI platform highlights how much ambient computing has evolved. To start with, since the debut of the Echo Show, a number of Alexa devices now feature displays, allowing information to be presented visually as well as audibly. The introduction of APL (Alexa Presentation Language) widgets allows app developers to create content services that can display on these screens. In addition, Featured Skill cards offer a visual way for people to discover skills, functioning somewhat like an app store that participating developers (who must apply) can use to promote their skills.

One of the early (and ongoing) challenges of working with smart speakers and other ambient computing devices is remembering how to trigger the skills you want to use. While some are relatively straightforward, it is also easy to forget or accidentally use the wrong trigger words. In the early days of smart speakers, this could be particularly troublesome. Amazon started to resolve this problem through what it calls Name Free Interactions (NFI), which allow commonly used words to be recognized as triggers for various skills, essentially adding a degree of intelligence and flexibility to the process. In other words, NFI made Alexa smart enough to understand what you meant to say instead of only precisely what you said.

At this year’s event, Amazon announced that it is extending the capabilities of NFIs in three different ways, including Featured Skills, which can link common utterances like “Alexa, tell me the news” or “Alexa, let’s play a game” to specific skills from various developers. In addition, personalized skill suggestions can connect individuals who commonly use particular phrases or requests with other relevant skills that offer similar capabilities. In essence, this functions like a recommendation engine, because it will direct people to skills they don’t currently use or have installed. In a related way, Amazon has extended support for NFI to multi-skill experiences, where multiple experiences can be linked together and triggered by a single keyword or phrase. What’s interesting about all of these capabilities is that they appear to be subtle tweaks to the original model of “launching” skills through specific keywords. However, they actually reflect a more profound understanding of the way people think and talk, which is critically important in enabling a more seamless, more intelligent experience.
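The name-free routing described above can be sketched as a mapping from generic phrases to candidate skills, ranked by the user's own usage history. This is a hedged illustration of the concept only; the skill names and the ranking rule are hypothetical, not Amazon's actual mechanism.

```python
# Illustrative sketch of name-free interaction (NFI) routing.
# Skill names and ranking logic are hypothetical.
from collections import Counter

# Generic phrases mapped to skills that (hypothetically) registered for them.
NFI_CANDIDATES = {
    "tell me the news": ["NPR News Now", "Reuters Headlines"],
    "let's play a game": ["Song Quiz", "Trivia Battle"],
}


def route(utterance: str, usage_history: Counter) -> str:
    """Pick the candidate skill this user invokes most often;
    with no history, fall back to the first registered candidate."""
    phrase = utterance.lower().removeprefix("alexa, ")
    candidates = NFI_CANDIDATES.get(phrase, [])
    if not candidates:
        return "no matching skill"
    return max(candidates, key=lambda s: usage_history.get(s, 0))


history = Counter({"Reuters Headlines": 5})
print(route("Alexa, tell me the news", history))
```

The usage-history fallback is what makes this behave like a recommendation engine: the same phrase can resolve to different skills for different people.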

In a related way, the new event-based triggers and proactive suggestions take the concept of ambient computing even further, though they also carry potential privacy concerns. Both of these capabilities leverage data like your physical location, the time of day, whether or not you’re in a vehicle, and your history of interactions to make suggestions about potential information (via automatically triggered skills) that can be provided. Fundamentally, this takes the notion of intelligence to a new level, because it reflects a greater awareness of your activities, habits, and surroundings and makes AI-powered recommendations based on all that data. At the same time, it raises fundamental questions about privacy and trust, because it requires Alexa to know a great deal about your comings and goings to make reasonable suggestions. Without that data, it could be spouting out suggestions in the dark, likely leading to extreme frustration with the product. It also raises trust issues between customers and Amazon, as some may be uncomfortable with Amazon having access to all the data necessary to even make these suggestions.
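A proactive suggestion system of the kind described above can be thought of as rules evaluated over contextual signals. The sketch below is purely illustrative, assuming a few made-up signals and rules rather than anything Amazon has documented.

```python
# Hedged sketch of event-based proactive suggestions: simple rules over
# contextual signals. Signals and rules here are invented for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Context:
    hour: int          # local time, 0-23
    location: str      # e.g. "home", "work", "car"
    in_vehicle: bool


def suggest(ctx: Context) -> Optional[str]:
    """Return a proactive suggestion when a contextual rule fires,
    or None to stay silent rather than guess."""
    if ctx.in_vehicle and 7 <= ctx.hour <= 9:
        return "Traffic looks heavy -- want your commute briefing?"
    if ctx.location == "home" and ctx.hour >= 21:
        return "It's late -- should I dim the lights?"
    return None


print(suggest(Context(hour=8, location="car", in_vehicle=True)))
```

Note that every rule depends on personal data (location, schedule, habits), which is exactly where the privacy and trust tradeoff the column describes comes from: no data, no useful suggestions.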

Of course, these privacy and trust concerns hit at the very heart of any type of ambient computing model, all of which require some amount of personal data in order to make any experience compelling instead of frustrating. There is no easy answer here, and Amazon has been working hard to improve its trustworthiness with certain parts of the market. However, there are some consumers who are going to have a hard time ceding their trust to Amazon.

In terms of interoperability, Amazon also introduced a number of important new platform capabilities that make it easier to integrate the Alexa experience across a wide range of devices. Send to Phone, for example, will—as its name suggests—let you do things like send your requested results to an Alexa app-equipped mobile device. You can then continue working with this information or content on the mobile device or some other larger-screened device. At an even higher level, Amazon used the event to announce that all of its Echo devices would get an update that adds support for the new Matter smart home interoperability protocol. Matter is endorsed by a wide variety of big-name smart home device makers and tech companies (including Apple and Google) and is intended to serve as a means to make the process of discovering, connecting to, and controlling multiple smart home devices much easier.

Amazon also announced further enhancements to the intriguing Voice Interoperability Initiative (VII) that the company first debuted last year. Essentially a mechanism for integrating multiple voice assistants into a single solution, VII promises independence from a single voice provider, while simultaneously offering the promise of combining the best of different voice assistants into a single experience. Initially, this far-reaching concept will be productized via a new version of the Samsung Family Hub Refrigerator, which will integrate support for both Samsung’s Bixby and Alexa and will be able to switch between them on a dynamic basis.
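The dynamic switching that VII promises can be pictured as wake-word dispatch: each utterance is routed to whichever assistant's wake word it begins with. This is a conceptual sketch under that assumption, not how the Samsung/Amazon integration actually works.

```python
# Sketch of multi-assistant dispatch in the spirit of VII: route each
# utterance by its wake word. Purely illustrative.
ASSISTANTS = {
    "alexa": lambda text: f"[Alexa] handling: {text}",
    "hi bixby": lambda text: f"[Bixby] handling: {text}",
}


def dispatch(utterance: str) -> str:
    """Hand the request to the assistant whose wake word starts the utterance."""
    lowered = utterance.lower()
    for wake_word, assistant in ASSISTANTS.items():
        if lowered.startswith(wake_word):
            # Strip the wake word plus any trailing comma/space before handing off.
            return assistant(utterance[len(wake_word):].strip(", "))
    return "no wake word detected"


print(dispatch("Alexa, add milk to my shopping list"))
```

In a real VII implementation, each assistant would run its own wake-word detector on-device; the single dictionary lookup here just makes the routing idea concrete.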

Amazon introduced numerous other platform enhancements as well, including new mechanisms for paid skills and support for an Amazon Associates program that let skill developers monetize their efforts. The bigger story, however, is that through several years of what could be considered fairly modest updates to the Alexa platform, Amazon is finally starting to deliver experiences that much more closely match what many initially hoped for with the original Echo device.

Given the thinking behind these many announcements, it seems clear Amazon is serious about building intelligent everywhere computing initiatives that, hopefully, will make our working and personal lives significantly easier to navigate—and more rewarding as well. In the meantime, it will be interesting to see how a conglomeration of 50+ new features can make Alexa-powered devices more capable, more interesting, and easier to use.

Here’s a link to the original column: https://www.linkedin.com/pulse/amazon-drives-ambient-computing-forward-alexa-bob-o-donnell

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
