Last week, I posted about Virtual and Augmented Reality. That piece generated a fair amount of traffic. People really like the idea of Virtual Reality (VR) and Augmented Reality (AR); the concepts clearly have a science-fiction-made-real appeal for many. The piece also prompted an interesting exchange on Twitter, some of it public, about the technological limitations of AR and VR. One reader in particular pointed out the many challenges facing AR, and graciously took me on a tour deep into the weeds of what makes these systems work. All of this has led me to realize that we need clearer definitions of what we are discussing. So in this post, I want to define some terms and propose a new framework for judging the market.
In that last post, I said the technology for VR is essentially ready, but with a caveat:
Once we start to get more content for VR (more about content in a moment), it may turn out that headsets actually need more algorithms or something (e.g. to reduce vertigo), and that may require even more expensive processors.
It turns out that the tech for VR today is a bit limited, but workable. However, the tech for AR is much further away than I thought – but only for ‘True AR’, as one commenter put it. The ‘AR’ systems we have seen so far – Yelp’s Monocle, Google Glass, etc. – are not really AR. They are sometimes called Mixed Reality (MR) to distinguish them from true AR, which is still pretty far away.
Confused? I certainly think this is a bit muddled. So I want to attempt to clear things up a bit.
First, all this talk about the various Realities masks the deeper technical achievement. All of these systems are a new way of presenting information to humans. We live in reality, and these systems want to place another reality in between the world and our brains. This will not be a single new reality, but a multiverse of data layers that we will access in different ways.
The key distinction is not between Augmented and Virtual; there are going to be more than two ways of accessing this multiverse. Instead, we will have a spectrum of use cases and technologies.
So I propose the following framework.
The two dimensions that matter are:
- The balance of real world sensory input and digital content
- The extent to which the physical device cuts off other senses.
What we think of as “VR” – for instance, playing a game wearing an Oculus Rift headset – sits at the upper right corner of this chart. It is fully immersive: users are totally cut off from real-world physical inputs, and all they can see and hear is what the system provides. Moreover, the content in a game is entirely digitally created. By contrast, something like Yelp’s Monocle is viewed on a separate device (i.e. a smartphone), meaning users can look away from it, and most of the image is of the real world, with a thin layer of data on top.
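The framework is simple enough to sketch in code. In the snippet below, both axes run from 0 to 1, and the coordinates are my own rough, illustrative guesses at where each system sits – assumptions for the sake of the sketch, not measurements:

```python
# A minimal sketch of the two-axis framework described above.
# x: share of what the user perceives that is digitally created (0 = all
#    real world, 1 = all digital)
# y: degree to which the device cuts the user off from other senses
#    (0 = not at all, 1 = fully immersive)
# The coordinates are illustrative guesses, not measurements.
systems = {
    "Oculus Rift game": (1.0, 1.0),  # fully digital, fully immersive
    "Yelp Monocle": (0.2, 0.1),      # thin data layer on a phone screen
    "Google Glass": (0.3, 0.4),      # thin layer, but worn on the face
}

def quadrant(x, y):
    """Classify a system into one of the chart's four quadrants."""
    horiz = "right" if x >= 0.5 else "left"
    vert = "upper" if y >= 0.5 else "lower"
    return f"{vert} {horiz}"

for name, (x, y) in systems.items():
    print(f"{name}: {quadrant(x, y)}")
```

On these assumed coordinates, the Rift lands in the upper right and Monocle in the lower left – the two ends of the “neat line” discussed below.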
Loosely speaking, the solutions we have so far fit on a neat line, but this leaves open the opposite corners. No one knows exactly what Magic Leap is working on, but judging from the jumping whale demo on their home page, they seem to be moving into the upper left quadrant, somehow layering digital, graphical content onto the real world without the use of a fully-immersive headset.
The bottom right quadrant is also interesting. I think this is where people believe “True AR” is heading: wearing something like Google Glass that lets you see both the real world and some very rich graphical content layered on top of it. Since my last post, I have come to appreciate just how hard this is to achieve technically. I highlighted the difficulty of connecting real-world images to digital information, but that is only one problem. To make this work, the user experience has to be incredibly well-crafted. All of these technologies use various techniques to trick the brain into perceiving something as real. Without those tricks, the whole experience is at best unrealistic, and at worst deeply uncomfortable, causing headaches and nausea.
From what I can tell, the industry has gotten to the point that when we are fully immersed, and the system is providing all the images, it works well enough to convince the brain. Once we start to overlay digital data on real-world images, the science gets much harder. One simple example: the digital information has to move as the user moves, and today’s systems still have enough lag that it does not really work. And that is just one problem. There are many more.
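A back-of-the-envelope calculation shows why that lag matters so much. Any delay between a head movement and the display updating turns directly into angular error between the overlay and the real-world object it is pinned to. The head speed and latency figures below are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope for the lag problem: angular misregistration is
# roughly head speed multiplied by system latency. A quick head turn can
# reach on the order of 100 degrees per second (an assumption here, for
# illustration).

def overlay_error_deg(head_speed_deg_s, latency_ms):
    """Degrees by which the overlay trails the real-world object."""
    return head_speed_deg_s * (latency_ms / 1000.0)

# A 50 ms pipeline during a 100 deg/s head turn: 5 degrees of error,
# so the overlay visibly swims off its target.
print(overlay_error_deg(100, 50))

# Cutting latency to 10 ms shrinks the error to 1 degree.
print(overlay_error_deg(100, 10))
```

Note that in fully immersive VR this error matters less, because the system draws everything and can fudge uniformly; it is the side-by-side comparison with the real world that makes AR registration so unforgiving.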
Personally, I think that a few more cycles of Moore’s Law get us to the point where graphics processors will be sufficiently small and fast to make these systems work. But that is a guess, and there are plenty of people who think it will take a lot more effort and time.
In all of this, I think it is important to go back to first principles. Whatever the reality – Augmented, Virtual, Mixed, etc. – the goal is to change the way that humans interact with data. This is not a small change. And while it seems that many of the initial uses for VR are going to center on entertainment, the capabilities that these systems seek to provide go much deeper than that. This is potentially the next advance in how we Analog Beings interact with the Digital.
I’m not getting your examples. Wouldn’t Magic Leap be bottom right, since it’s content layered over the real world, using a device that you’re (presumably) wearing? And you have Google Glass in the bottom left, which makes sense, but then you suggest three paragraphs later that it belongs in the bottom right?
I don’t think we know exactly where Magic Leap is going. I am basing this positioning on the demo video on their home page, which shows a whale jumping out of a gym floor and a room full of teenagers screaming in delight. None of them are wearing goggles, so I guess Magic Leap is trying for something very non-immersive but wholly crafted from digital content, putting them in the upper left. But again, I do not really know what they are planning.
As for Google Glass, this is a bit of a contentious issue in the community. The reality of Google Glass today is pretty ho-hum. It is a very thin digital layer, largely unconnected to the real world, but you are still wearing the glasses. So that’s someplace in the middle. But people in the industry point out that this is not “True AR”. In true AR, the digital content you see in the glasses is much richer and much more closely tied to what you can see of the rest of the real world. That richer content pushes it to the right side of the chart, and since you can still see the real world, that pushes it towards the bottom.