TECHnalysis Research Blog

August 14, 2024
Made by Google Event Highlights AI Software Advances

By Bob O'Donnell

Creating differentiation in a market like smartphones that's filled with countless me-too products is not an easy thing to do. To its credit, Google managed to pull off that challenging task at the Made by Google event by focusing on the AI-powered software capabilities found in its latest Pixel 9 line of phones.

The deep integration of its Gemini AI models, in particular, gave the company a way to demonstrate a variety of unique and compelling experiences that are now available on the full range of new Pixel 9 phones. From the surprisingly fluid conversational AI assistant of Gemini Live to the transcription and summarization of voice calls in Call Notes and the image creation capabilities of Pixel Studio, Google highlighted several practical examples of AI-powered features that regular people are actually going to want to use.

On the hardware side of things, Google managed to bring some differentiation as well, at least for its latest foldable. The 9, 9 Pro and 9 Pro XL all feature the traditional flat slab of glass smartphone shape (in 6.3” and 6.8” screen sizes) but sport a new camera design on the back that provides a modest bit of visual change. On the updated 9 Pro Fold, however, the shorter, wider and thinner shape of the phone offers a clearly unique design versus foldable competitors like Samsung's Galaxy Z Fold. (Interestingly, the Pixel 9 Pro Fold is actually thinner and taller than the original Pixel Fold.)

The aspect ratio that Google has chosen for the outside screen of the 9 Pro Fold makes it look and feel identical to a regular Pixel 9, but still gives you the advantage of a foldable screen that opens up to a massive 8”. For someone who has used foldable phones for several years, that similarity to a traditional phone size when the device is folded is a much more important change than it may first appear, and something I'm eager to try.

Inside the Pixel 9 line is Google's own Tensor G4 SoC, the latest version of its mobile processor line. Built in conjunction with Samsung's chip division, the Tensor G4 features standard Arm CPU and GPU cores but also incorporates Google's proprietary TPU AI accelerator. The whole line of phones incorporates upgraded camera modules, with the 9 Pro and Pro XL, in particular, sporting new 50MP main, 48MP ultrawide and 42MP selfie sensors. The phones also offer more standard memory, with 16 GB the new default on all but the $799 Pixel 9 (which features 12 GB standard). This will be critical because AI models like Gemini Nano require extra memory to run at their best.

In truth, though, it isn't in the hardware or the tech specs where Google made its most compelling arguments for switching to a new Pixel 9; it was clearly in the software. In fact, the Gemini part of the story is so important that Google's own site lists the official product names of the phones as Pixel 9 (or 9 Pro, etc.) with Gemini. As Google announced at its I/O event this spring, Gemini is the replacement for Google Assistant; it's available on the Pixel 9 line now and will come to other Android phones (including those from Samsung, Motorola and others) later this month. While Google is somewhat notorious for regularly swapping names and changing critical functionality on its devices and services, it seems to have made this shift from Google Assistant to Gemini in a more comprehensive and thoughtful manner.

First, Gemini shows up in several different but related ways. For the kind of “smart assistant” features that integrate knowledge of an individual's preferences, contacts, calendar, messages, etc., the “regular” version of Gemini provides GenAI-powered capabilities that leverage the Gemini Nano model on device. In addition, Gemini Nano powers the new Pixel Weather app and the new Pixel Screenshots app, which can be used to manually track and recall activities and information that appear on your phone's screen. (In a way, Screenshots is like a manual version of Microsoft's Recall feature for Windows, although it doesn't automatically capture screenshots the way Recall does.)
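For developers, Gemini Nano is reachable on device through Google's experimental AI Edge SDK for Android, which is still evolving. As a rough sketch of the prompt-in, text-out pattern these Pixel features rely on, here is a minimal example using the publicly documented google-generativeai Python SDK against the cloud-hosted models instead; the API key, model name and screenshot text below are illustrative placeholders, not what the Pixel apps actually use.

# A minimal sketch of the prompt-in, text-out pattern behind features like
# Pixel Screenshots, using Google's documented google-generativeai SDK.
# The Pixel apps run Gemini Nano locally via Android system APIs; this
# cloud call only approximates that flow. All values are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

# A Screenshots-style recall query: answer a question from saved screen text.
screenshot_text = "Flight AA123, Aug 20, SFO to JFK, departs 8:05am, gate 22"
response = model.generate_content(
    f"When does my flight leave?\nSaved screenshot text: {screenshot_text}"
)
print(response.text)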

Gemini Live provides a voice-based conversational assistant that you can have complete conversations with. In its first iteration, it functions completely in the cloud and requires a subscription to Gemini Advanced (the first year of which is included with all but the basic Pixel 9). In multiple demos of Gemini Live, I was very impressed by how quickly and intelligently the assistant (which can use one of 10 different voices) can respond; it's by far the closest thing to talking with an AI-powered digital persona I've ever seen (except in the movies, of course). Unfortunately, Gemini Live doesn't run on device just yet, meaning it can't access the kinds of personalized information that the “regular” Gemini models running on devices can, but combining these two kinds of assistant experiences is clearly where Google is headed. Even so, the kinds of things you can use Gemini Live for, such as brainstorming sessions, researching a topic, prepping for an interview and much more, look to be very useful on their own.
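Gemini Live itself is a product feature rather than a public API, but the multi-turn exchange underneath it can be approximated with the documented Gemini chat interface. A minimal sketch, assuming the same cloud-hosted models the feature currently uses; the prompts are invented examples:

# A rough approximation of a Gemini Live-style back-and-forth using the
# documented chat interface in the google-generativeai SDK. The real
# feature layers speech recognition, synthesized voices and interruption
# handling on top of a loop like this one.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat(history=[])

# Each send_message() call carries the conversation history, which is
# what lets the assistant stay coherent across turns.
reply = chat.send_message("Help me prep for a product-manager interview.")
print(reply.text)

# Streaming the next turn chunk-by-chunk keeps perceived latency low, much
# as a voice assistant starts speaking before the full answer is generated.
for chunk in chat.send_message("Give me one tough follow-up question.", stream=True):
    print(chunk.text, end="", flush=True)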

Yet another interesting implementation of Gemini Live came through Google's new Pixel Buds Pro 2, which were also introduced at the event. By simply tapping on one earbud and saying “Let's talk live,” you can initiate a conversation with the AI assistant, one that can go on for an hour or more if you so choose. What's intriguing about using earbuds instead of the phone is that it will likely trigger different kinds of conversations, because it feels more natural to engage in an audio-only conversation through earbuds than it does holding a phone and talking at its screen.

In addition to the Gemini-powered features, Google also debuted other AI-enabled software that's unique to the Pixel line, including a Magic Editor photo tool that extends Google's already impressive image editing capabilities to yet another reality-bending level. It really is getting harder to tell what's real and what's not when it comes to phone-based photography. For image generation, Google showed Pixel Studio, which leverages both an on-device diffusion model and a cloud-based version of the company's Imagen 3 GenAI foundation model. Finally, the clever new Add Me feature lets you get a complete group shot by merging two different photos (one of you photographing your subjects and one of them photographing you in the same spot), using augmented reality guidance to create a shot with everyone in it, without having to ask a stranger to take it for you!
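Pixel Studio presumably talks to Google-internal endpoints, but Imagen 3 is also exposed to outside developers through Vertex AI. A minimal sketch of what the cloud side of such a request can look like, assuming the Vertex AI preview vision-models SDK; the project ID, model ID and prompt are placeholders rather than anything Pixel Studio actually uses:

# A minimal sketch of cloud-side Imagen 3 generation via the Vertex AI
# preview SDK, illustrating the kind of model Pixel Studio's cloud path
# is built on. Project ID, model ID and prompt are all placeholders.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")
model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")

result = model.generate_images(
    prompt="A watercolor sticker of a corgi wearing sunglasses",
    number_of_images=1,
)
result.images[0].save(location="sticker.png")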

Ultimately, what was most impressive about Google's launch event is that it managed to provide practical, real-world examples of AI-powered experiences that are likely to appeal to a broad range of people. At a time when some have started to voice concerns about an AI hype cycle and a lack of capabilities that really matter, Google hammered home that the impact of GenAI is not only real, but compellingly so. Plus, it dropped hints of even more interesting capabilities yet to come. I have little doubt some speed bumps will accompany these launches, as they always seem to do with AI-related capabilities, but I'm now even more convinced that we're just at the start of what promises to be a very exciting new era.

Here's a link to the original column: https://seekingalpha.com/article/4714451?gt=9a9894d6c94c153e

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.