June 6, 2023
By Bob O'Donnell
I’ve seen the future of computing. Again.
I was one of the lucky few who not only got to attend Apple’s Worldwide Developer Conference (WWDC) keynote presentation in person, but also got to try the new Apple Vision Pro headset in a roughly 25-minute hands-on, er, heads-on demo.
The experience was very good—as it certainly should be for a product that’s going to cost a whopping $3,499—but it was also a bit more similar to other devices I’ve tried over the years than I initially expected it to be.
Apple Vision Pro is the company’s first truly new computing platform device, and its biggest change in physical form factor, in about 10 years. While it looks to most people like the augmented reality (AR) and virtual reality (VR) headsets that companies like Meta, Microsoft, Magic Leap, Samsung, Lenovo and others have introduced over the last few years, Apple refers to Vision Pro as a spatial computing device.
The idea is that Vision Pro’s extremely high-resolution display along with its natural user interface let you see and manipulate digital content of all types in the space in front of you. Practically speaking, that means it functions like a virtual monitor onto which you can place everything from traditional iOS apps to applications running on your Mac to immersive photos, videos and more.
At its best, Vision Pro can provide a perspective that’s hyper-realistic. The full-room screen size and spatial audio features, combined with immersive video content created specifically for the device, can make it feel like you’re experiencing something in person, whether that’s visiting a remote mountaintop, swimming underwater with sharks, watching live sports, or something else entirely.
Applications like Apple’s interactive dinosaurs demo let you view, and even perform simple interactions with, digital objects floating in front of you. In addition, 3D movie content like the second Avatar film, which Apple demoed, looks better than anything you’ve seen in a theater or theme park.
The problem is that it can all be a bit overwhelming. As with previous AR and VR headsets, the initial demo with Vision Pro is super impressive, and for those who haven’t tried earlier efforts from other vendors, the experience will be even more mind-blowing. After the roughly 25-minute demo was over, however, I was perfectly content with it being done.
Now, admittedly, this probably was due in part to the fact that Apple was trying to squeeze in a lot of different examples in a short time frame. But it honestly also reminded me of why I’ve never been a big long-term fan of computing devices that sit on my face and cover my eyes. They can get visually and mentally fatiguing fairly quickly.
To be clear, this problem isn’t unique to Apple. In fact, Apple’s implementation of the many different technologies needed to bring a product like Vision Pro to life is very impressive. I noticed none of the visual lag that has caused motion sickness in other VR headsets, for example. This is likely due to the new Apple-designed R1 chip, whose purpose is to integrate the camera and sensor data from all the various elements built into the Vision Pro’s design.
However, it’s not clear that Vision Pro can overcome the challenges of the category. For many people, the product is likely to be something they initially find amazing but quickly start to tire of. The two-hour battery limit from the wired, deck-of-cards-sized battery pack that powers the Vision Pro certainly won’t help either, and I’m very curious to see how long people want to use the device in a given session. (If you want a longer session, you can also plug the device into the wall, but that obviously limits your mobility.)
Of course, a lot of this will depend on the types of applications that get developed for Vision Pro. Many of them initially seem compelling, including 3D photos and videos you can capture directly with the Vision Pro’s built-in camera. However, several others felt like revisits of concepts that initially seemed very cool on products like the Magic Leap and Microsoft’s HoloLens but turned out not to be that useful in the real world.
One thing that surprised me was the relative lack of demos using 3D-modeled objects overlaid on your view of the real world. One of the key benefits of Vision Pro is its passthrough video view, which allows you to clearly see the real world (and the people) around you when you don’t have apps or other content on the virtual screen in front of you. Despite Apple’s six years of work on its ARKit technology for iPhone and iPad developers, however, there weren’t many examples of it in use in the demos Apple showed.
In fact, probably the biggest surprise for me about the Vision Pro was that it felt more like a fully immersive VR headset experience than I expected it to. Now, from a pure definition perspective, VR has typically meant complete screen immersion in which you can’t see any of the real world in front of you. AR, on the other hand, is typically thought of as a primarily real-world view with a few digital objects placed within that environment.
Many of the Vision Pro demos, and much of the brief time I spent with it, combined those two into what is often called mixed reality (MR). Ultimately, though, it was more of an immersive experience in which you can see some of the real world around you, but most of it felt VR-like. From the initial setup of the device onward, I couldn’t help but think of my previous experiences with VR headsets.
Ironically, thanks to the extremely high-quality screens inside the Vision Pro, those VR-like experiences are some of the most impressive it offers. The Environments app, which gives you a 360-degree view of various nature scenes complete with surround-sound-style spatial audio, for example, was amazing. However, Apple (and many others in the industry) has clearly been trying to move away from VR and focus more on AR because of how visually tiring, and in some cases disorienting, a VR experience can be.
All of which takes me back to some of my initial concerns. Apple has clearly put an immense amount of time and effort into Vision Pro, and in many impressive ways, it shows. Plus, there are a ton of features I don’t have the space to get into, from the EyeSight view, which lets people in the room know you’re looking at them through the lenses, to the somewhat robotic digital avatars used for FaceTime calls. In addition, the company didn’t show any demos with Siri-based voice control, a feature that could either be a great enhancement (especially if driven by an updated, generative AI-enhanced version of Siri) or a frustrating disappointment if it still relies on the current version of Siri.
At the end of the day, I’m not fully convinced that this isn’t a story I’ve heard before, one that ends with the same kind of modest impact that other, similar products have had. To its credit, the technology in Apple’s Vision Pro is significantly better than in any previous spatial computing device, but the long-term impact for the company and the category remains to be seen.
Here’s a link to the original article: https://www.techspot.com/news/98965-got-play-apple-vision-pro-saw-future-computing.html
Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.