The tech and telecom sectors certainly keep churning out new announcements at a rapid clip. The biggest of these stories was, yes, you guessed it, the launch of the Apple Vision Pro device. The sheer volume of coverage that Apple manages to generate around its product launches continues to impress. At this year’s edition of the WWDC event, Apple launched a number of products: there were new Macs and a bunch of software-related announcements, but these didn’t get much coverage, for good reason. In other words, yawn!
The highlight of the event was the much-anticipated launch of the Vision Pro “spatial computing” device. There are a number of well-written and insightful analyses already published on this launch, here, here, here and here. The device itself, not surprisingly, comes with Apple’s signature industrial design and loads of components and new technology under the hood. Twelve cameras, six microphones and five sensors are packed into the Vision Pro; while that may seem like a lot at first glance, it is indicative of the hardware needed to create a truly immersive experience. Apple has also innovated at the silicon level, pairing the M2 processor, which handles heavy workloads, with a new R1 chip dedicated to processing the numerous signals from the sensors, cameras and microphones embedded in the device. Offloading this work to the R1 as a standalone unit helps reduce latency (reportedly to below 12 milliseconds), a crucial metric that will differentiate the user experience.
For our part, we were struck by the emphasis on the term spatial computing. There was no mention of the metaverse, or should we call it the M-word? It is clear that Apple is not positioning this as a mere virtual reality head-mounted display (HMD), or even a combined AR/VR device, which in many ways it is. Rather, the focus on spatial computing is an attempt to define a new category altogether, both as a new medium for computing and in device/form-factor terms. Indeed, spatial computing suggests a clean break from the rigid, 2D-oriented form factors that we have become so accustomed to. Visions of truly immersive computing are hardly new and have appeared in pop culture for some time. I recall a cheesy Hollywood movie about a bunch of teenagers creating a computer with a holographic OS, though for the life of me, I can’t remember the title or the actors involved in what was clearly not a memorable movie. Immersive experiences are also captured in a number of sci-fi films, a more recent one being Blade Runner 2049.
The Vision Pro, and indeed the whole category, only works if users (whether consumer or enterprise) are convinced that they are better served by an expanded field of vision and the removal of distractions through the elimination of stray light and ambient sound. However, this comes with a hard tradeoff: in a post-Covid world, an immersive experience can also be viewed as an exercise in isolation. Moreover, wearing a heavy head-mounted display for several hours can become tiresome for many users.
Assuming users can get past the tradeoffs and take to the device and the quality of its immersive experience, new user behaviour will need to become commonplace through the new interface, almost like learning a new language. The Vision Pro keynote stressed that visionOS (an entirely new operating system created by Apple) tracks users’ eyes as they move across the field of vision, highlighting the app they rest their gaze on; a pinch of two fingers then opens the app or page. This is essentially learned behaviour, but with Apple’s track record, it should not be difficult for most people.
There are a number of open questions about this device. The biggest, and the one that will get the most attention, is the eyebrow-raising starting price of $3,499. It is important to note that this is the starting price, so there will be add-ons, presumably for the 2-hour battery pack and potentially other accessories. While most people assume that at this price the Vision Pro will be a dud, Jay Goldberg correctly notes in his recent piece that even a 0.01% attach rate for the Vision Pro relative to total iPhones sold would translate to a mere 285,000 devices, yet still make for a $1 billion business!
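The back-of-the-envelope math here is worth spelling out. A quick sketch (note: the ~2.85 billion figure for cumulative iPhones sold is our illustrative assumption to make the cited numbers reconcile, not a quoted Apple statistic):

```python
# Back-of-the-envelope attach-rate arithmetic for the Vision Pro.
# Assumption (illustrative only): ~2.85 billion iPhones sold to date.
iphones_sold = 2_850_000_000
attach_rate = 0.0001          # 0.01%, the rate Goldberg uses
starting_price = 3_499        # USD, Vision Pro starting price

units = iphones_sold * attach_rate
revenue = units * starting_price

print(f"{units:,.0f} units at ${starting_price:,} each")
print(f"revenue: ${revenue / 1e9:.2f} billion")
```

Even a rounding-error attach rate against the iPhone installed base yields roughly 285,000 units and close to $1 billion in revenue, which is the crux of Goldberg's point.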
In Asia, we believe there will be more than enough “early adopters” who either won’t care about the steep price or won’t be able to resist the lure of the Vision Pro. But beyond the price, will there be enough content, and will it be compelling? It is worth noting that the Vision Pro will not be available in the US for at least six months, which means it will likely take at least a year before it reaches Asia and other regions. Disney’s CEO Bob Iger was part of the keynote, teasing an early arrival of Disney content on the Vision Pro platform. However, it’s clear that it will take time for developers to start churning out content and apps.
But from our perspective sitting here in developing Asia, there are bigger questions to ponder about the future of computing. In a region where computing is synonymous with the smartphone, the emergence of the Vision Pro as a new category will create a few ripples but very few waves. The device’s steep price relative to Meta’s Quest and other headsets will certainly constrain adoption. However, if the device and its application ecosystem can ramp up suitably within the next five years, there will be an opportunity for users to consider investing in what will be an emerging medium for computing.
This blog also appeared in the Beyond the Next Billion newsletter dated 10th June 2023.