So many devices, so little time! Here’s how you can navigate the expansive VR/AR hardware landscape.

Different Types of AR & VR Devices — Navigating the Spatial Computing Landscape

Bilawal Sidhu
13 min read · Mar 26, 2017


The terms Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) are getting thrown around a lot these days. You’ve got a plethora of devices… Oculus Rift, HTC Vive, Samsung Gear VR, Google Glass, Microsoft HoloLens, ODG R-8, Epson Moverio… the list goes on.

How are these devices different? What overlaps do they have? How on earth do you develop for them? Take a deep breath, and let’s take a look at the spectrum of VR/AR hardware out there and see how it all fits together.

We’ve all seen it — the spectrum of spatial/immersive computing

AUGMENTED REALITY

Placing The Virtual Inside Your World

“I need your clothes, your boots and your motorcycle”

At first glance (pun intended), Augmented Reality sounds a lot like terminator vision. AR is a technology the military and defense sector have long utilized with great success. In fact, companies like APX Labs (now Upskill) actually built “terminator vision” in real life for the US military.

Terminator Vision allowed soldiers wearing AR glasses to identify potentially hostile parties in crowds by incorporating facial recognition software and background check services — Brian Ballard, APX Labs CEO

If you look at Meta, it’s a similar story, with their founder coming out of the Israeli Defense Forces’ technology unit (so badass).

You now find a lot of these key players moving beyond defense, doubling down on the consumer and enterprise AR revolution. Given the proliferation of cheap computing power and sensors thanks to the mobile supply chain, we’ve got several types of AR out on the market, including “Mixed Reality” as HoloLens and Magic Leap call it. Here’s how to make sense of it all:

Mobile AR with Feature Tracking

This is the most accessible type of augmented reality today; you can already get going on most modern smartphones and tablets using IMU data and that beautiful RGB camera. There are two kinds of mobile AR approaches: marker-based tracking and marker-less tracking.

Pass-through AR can let a children’s storybook “come to life.”

Marker tracking usually involves placing a QR code or some other predictable pattern the software knows to look for, tracking it on-the-fly and overlaying CG objects. Picture a magazine cover, playing card or children’s storybook “coming to life” when viewed through the “window” of a phone.
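To build some intuition for what a marker-tracking SDK is doing under the hood, here’s a toy sketch in pure Python. This is nothing like a production tracker (real SDKs like Vuforia match scale- and rotation-invariant features, not raw pixels), but the core idea is the same: scan each camera frame for a known pattern, and once you’ve found it, you know where to anchor your CG overlay.

```python
# Toy marker detection: find a known binary pattern in a "camera frame"
# by exhaustive matching. Real SDKs are vastly more robust, but the
# essence is the same: locate a predictable pattern, then overlay CG.

MARKER = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 1],
]

def find_marker(frame, marker):
    """Return (row, col) of the marker's top-left corner, or None."""
    mh, mw = len(marker), len(marker[0])
    for r in range(len(frame) - mh + 1):
        for c in range(len(frame[0]) - mw + 1):
            if all(frame[r + i][c + j] == marker[i][j]
                   for i in range(mh) for j in range(mw)):
                return (r, c)
    return None

# A 6x6 "frame" with the marker embedded at row 2, column 3.
frame = [[0] * 6 for _ in range(6)]
for i in range(3):
    for j in range(3):
        frame[2 + i][3 + j] = MARKER[i][j]

print(find_marker(frame, MARKER))  # (2, 3)
```

This is also why marker-less tracking is harder: without a known pattern to search for, the software has to discover and track arbitrary features in the scene on its own.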

Arguably the most ubiquitous example of pass-through AR on a mobile device — Pokemon Go!

For mobile AR, you’re usually dealing with Blippar or Qualcomm’s Vuforia; Kudan and Wikitude are also key players in the game. These AR SDKs work in concert with a game engine like Unity3D. If you keep your application simple enough, you can come up with some novel uses for mobile AR, but you need some identifiable pattern or a “feature-rich” environment for tracking to work well — meaning it’s kind of gimmicky, IMHO of course. This is partially why you find such reluctance in the MR/AR community to acknowledge Pokemon Go as “real” AR despite it being a ubiquitous example of just that.

“Look ma no QR codes!” — Markerless Tracking in action

Mobile AR with Depth Sensors — Structured Light, Stereo Cam & ToF

This is much like the concept of a “window” looking into the real world, “augmenting” the view with CG objects. Except now that you’ve got a depth sensor involved (think Xbox Kinect or ZED cam), you can do all your tracking without printed QR codes, patterns or a “feature-rich” environment.

SLAM in action

It’s a far more robust approach, because the software has a much deeper understanding (pun intended) of the space around you and your relative position, using very slick SLAM algorithms. Without getting too deep into it, SLAM builds a model of the 3D space around you as you’re exploring it. Hence the name “Simultaneous Localization And Mapping,” i.e. locating where you are in 3D space whilst (simultaneously) mapping the environment to build a progressively better model of it.
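To make that “simultaneous” part concrete, here’s a heavily simplified 1-D sketch, an illustrative toy and nothing like a production SLAM system: a robot dead-reckons along a line with noisy odometry, repeatedly measures the range to a fixed (initially unknown) landmark, and uses each observation to refine both its own pose estimate and its map of where the landmark is.

```python
import random

random.seed(0)

# Toy 1-D SLAM: noisy odometry drifts, so the robot anchors a landmark
# on first sighting, then lets pose and map correct each other.
TRUE_LANDMARK = 10.0
true_pos = 0.0
est_pos = 0.0          # Localization estimate
landmark_est = None    # Mapping estimate
for step in range(50):
    # Move: commanded step of 0.2; odometry reads it with noise.
    true_pos += 0.2
    est_pos += 0.2 + random.gauss(0, 0.05)   # drifty dead reckoning

    # Observe: (nearly) exact range to the landmark.
    measured_range = (TRUE_LANDMARK - true_pos) + random.gauss(0, 0.01)

    if landmark_est is None:
        # First sighting: anchor the landmark relative to our pose.
        landmark_est = est_pos + measured_range
    else:
        # Localization: the mapped landmark corrects our pose...
        est_pos = 0.9 * est_pos + 0.1 * (landmark_est - measured_range)
        # ...and Mapping: our pose refines the landmark. Simultaneous!
        landmark_est = 0.99 * landmark_est + 0.01 * (est_pos + measured_range)

print(round(abs(est_pos - true_pos), 2))  # pose error stays small
```

Real systems do this in 3D with thousands of visual features and proper probabilistic filters (EKF, particle filters, bundle adjustment), but the feedback loop between localization and mapping is exactly this.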

Google Tango showcasing 6-DOF inside-out (untethered) positional tracking

Currently available examples of this are the Structure Sensor on iOS devices, the ZED Camera for Windows/Linux and Google Tango tablets/phones for Android. However, at VRX 2017, Qualcomm and PMD Technologies (the folks who build the ToF sensors for Google Tango) said their hardware is finding its way into other products later this year, so we should see a lot more phones with this type of depth-sensing tech right out-of-the-box. This is undoubtedly going to unlock more than just bokeh-laden photographs. Hello untethered AR and VR!

Also consider that in late 2013, Apple acquired PrimeSense (the folks behind many of the world’s 3D sensors, including the one in the original Xbox Kinect and the Structure Sensor), and later Metaio, an AR platform that’s been doing this style of “window-into-the-world” AR for a long time. Needless to say, this tech should be making its way into the next iPhone or iPad. We can argue about exactly when… I know Scobleizer’s got his fingers crossed on 2017, but regardless of the exact timing, the key takeaway is that the transition to spatial is inevitable. It’s not a question of if, but when.

Monocular Head-mounted AR (or Monocular Smartglasses)

Vuzix M3000- Enterprise AR workhorse

In this category, think Google Glass or Vuzix M3000, where you’ve got a transparent display over just one of your eyes… hence the term monocular. Basically, you’ve got a see-through smartphone taped to your face. It’s out of your line of sight, letting you focus on the task at hand, while keeping the display available for glanceable information (for example, a surgeon keeping track of a patient’s vitals). It’s cool, but only for mission-critical enterprise applications. No wonder Google Glass didn’t fly with consumers — not everyone wants to walk around looking like a cyborg.

Jean Claude Van Damme rocking binocular AR glasses in Universal Soldier (1992).
Epson Moverio BT-300 Smartglasses

Binocular Head-mounted AR (or Binocular Smartglasses)

ODG’s R-7 Smartglasses

Think Epson Moverio, where you’ve now got 1+1 = two transparent displays — giving you stereoscopic vision. You’ve got the ability to overlay stuff right in your field of view. Awesome for overlaying basic 2D and 3D graphics, telestrating and more. The downside, however, is tracking. Most smartglasses in this category lack 3D depth sensors, so they’re not terribly aware of your surroundings, leaving you to use a normal camera combined with onboard IMU sensors to make sense of position and orientation. A good chunk of smartglasses in this category also need to be tethered to a smartphone to work. The most high-end of this lot would be something like the ODG R-7, which NASA is sending up to the space station.

Field-service expert getting remote assistance during repairs. Catch my mini cameo and some of my work from Deloitte.

Again, only cool for mission-critical enterprise applications. Picture a field service expert trying to fix something on an oil-rig, getting remote assistance from someone sitting in Germany, highlighting exactly which valve to fix.

Mixed Reality HMD — Binocular HMDs with a crap load more sensors

Look at all that hardware squeezed into one headset

Here’s where things get fun. Think Meta Space Glasses, Microsoft HoloLens and the latest from ODG. We’re now talking an all-in-one, fully-inclusive standalone system. You’re getting a much better display and the ability to render 3D on-board, but also depth sensors that make sense of your environment to correctly overlay these objects, so they seem “fixed” in the real world. HoloLens in particular is a great example of this, arguably best-in-class.

However, the main shortcoming right now is field of view. The HoloLens developer kit currently on the market has a paltry 30 x 17 degree field of view. That’s really weak, but it’s only a matter of time until it improves. Microsoft has been pretty tight-lipped about the FOV on the next HoloLens dev kit. In my opinion, even doubling that to 60 degrees would be a huge step up for more immersive applications.

Meta offers a greater FOV than the HoloLens, but has subpar positional and hand tracking. The upcoming ODG R-9 promises a 50 degree field of view; however, ODG candidly admits their glasses “weren’t built for the same level of tracking quality as HoloLens.”

It’s no wonder, then, that Microsoft acquired a slew of patents from ODG back in 2014; that gave Microsoft the missing pieces. Supplemented by their head start in computer vision (thanks to Kinect), Microsoft is at present the real AR juggernaut… Better step up soon, Apple!

HoloLens is undoubtedly the most complete augmented reality err… mixed reality headset on the market. If you want to get started in mixed reality development, it’s the best platform you can choose. Many would agree that the HoloLens Dev Kit is to Augmented Reality what the Oculus DK2 was to Virtual Reality — sure, there was stuff before it, but its arrival really set the bar for what the medium could be — an entirely new computing platform.

Mixed Reality Photon Projection Action

This section is basically reserved for Magic Leap and the research done by Tom Furness back in the 90s (which actually forms the basis of some of Magic Leap’s light field display technology today). The key point to note is that all the types of VR and AR covered in this article, and out there in the wild, involve a flat screen at a fixed distance from your eyes.

The Magic Leap approach is drastically different — promising to project photons directly into your eye, giving you one huge benefit: a massive field of view, bigger than an Oculus CV1. They also advertise the ability to shift focus naturally, as you do in real life. To accomplish the same effect on current devices, you need to do some fancy eye tracking, which people are no doubt working on, but which won’t make its way into consumer hardware until 2018. Having said that, who knows when the Magic Leap will be out, as it too relies on eye tracking as part of its stack. They’ve been saying they’re prepping to “ship millions of things” for a while now, and all we have to go on are the indirect comments of people who have seen it under a strict NDA, not to mention some supposed leaks.

Tim Sweeney said visiting Magic Leap down in Florida was like visiting Xerox PARC back in the day and seeing the first Graphical User Interface (GUI). Sweeney says he “saw things (he) didn’t think were possible.” Taking this analogy further, lootsauce on Reddit put it best: “if Magic Leap is Apple, Tom Furness’ HITLab is Xerox PARC.”

“Apple innovated (AKA productized) with what Xerox PARC had invented. Likewise Magic Leap is productizing what Furness and his HITLab invented. There is a demo they did in 2001 called Magic Book, the tv announcer says “you can virtually leap into the scenes.” — Lootsauce

It’s no surprise that Brian Schowengerdt, co-founder of Magic Leap, was a research scientist at Furness’ HITLab. Regardless of the origin story, here’s hoping this magical stuff comes out sooner rather than later. I know I’d buy it.

VIRTUAL REALITY

Placing You Inside The Virtual Realm

2016 was touted by many as the year of virtual reality. I’m not gonna lie, that’s probably true, given that some of the biggest names in tech released the first generation of consumer VR products, and they’ve now fully mobilized their massive PR machines to make sure every single one of you has heard of VR by the end of 2017. So what kinds of VR devices are there, exactly?

Smartphone-Powered Mobile VR — Swivel Chair VR FTW!

Swivel away my friends

In essence, Mobile VR involves taking your smartphone and turning it into a head-mounted display (HMD). This can be as crude as a $20 Google Cardboard or as polished as a $79 Gear VR. Think of it as a trailer for high-end VR and an excellent way to passively consume content — whether that’s 360 videos and photos or chillin’ in front of the most epic fireplace watching Netflix with your buds.

The pros of Mobile VR are that the cost of entry is super low, since a lot of folks already have relatively fast smartphones. Phones with 4K displays can consume passive content at a similar quality to their more expensive brethren. And most importantly — no wires involved! Untethered VR is fun as heck and you can take it with you on that long flight to New York or New Delhi. Most experts agree that Mobile VR is going to be the mass-market VR experience for 2017. Consider that not everyone is going to upgrade their computer to support the specs of the Oculus or Vive, nor is everyone going to go buy a PSVR.

The con of Mobile VR, where it stands today, is that you’re limited to Three Degrees of Freedom (3-DOF for short). Meaning, sure, you can look (or swivel) around in different directions, but you can’t lean or move your head around as you would in real life (no parallax). This is going to change by the fall of this year (more on this later), but for now swivel-chair VR is the way to go on mobile. Mobile also means we’ve got to optimize experiences to degrade gracefully on slower mobile hardware. The Snapdragon Profiler will quickly become your friend as you identify bottlenecks. At the end of the day, you’re building an Android or iOS app, so mobile development skills are a must, and that isn’t everyone’s cup of tea.
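The 3-DOF vs. 6-DOF distinction is easy to see in code. Here’s a minimal sketch with hypothetical types (not any headset SDK’s actual API): a 3-DOF pose carries orientation only, so when the user leans, the translation is simply lost, while a 6-DOF pose tracks position too.

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    yaw: float = 0.0    # swivel left/right
    pitch: float = 0.0  # look up/down
    roll: float = 0.0   # tilt head

@dataclass
class Pose6DOF(Pose3DOF):
    x: float = 0.0      # lean left/right
    y: float = 0.0      # crouch/stand
    z: float = 0.0      # lean forward/back

def apply_lean(pose, dz):
    """Head translation only registers if the pose tracks position."""
    if isinstance(pose, Pose6DOF):
        pose.z += dz
        return True
    return False  # 3-DOF: the lean is silently dropped (no parallax)

mobile, desktop = Pose3DOF(), Pose6DOF()
print(apply_lean(mobile, 0.1), apply_lean(desktop, 0.1))  # False True
```

That dropped translation is exactly why leaning toward an object in swivel-chair VR does nothing: the renderer never hears about it.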

Google Daydream includes a tracked remote controller

Lastly, you’ve got limited input methods on mobile VR. With Cardboard, you have to use the user’s gaze (what they’re looking at) and *gasp* one button. The Samsung Gear VR gives you a more robust d-pad, with the ability to swipe in two directions and tap. Google Daydream goes a step further and includes a tracked remote that gives you even more flexibility to interact with the virtual environment (think laser pointer to select stuff).
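Gaze-plus-one-button input usually boils down to a raycast from the head pose: whatever object the forward ray hits is “hovered,” and the button click selects it. Here’s a minimal ray-vs-sphere sketch, pure math rather than any headset SDK:

```python
import math

def gaze_hit(eye, forward, center, radius):
    """Return True if a ray from `eye` along `forward` hits a sphere."""
    # Normalize the gaze direction.
    norm = math.sqrt(sum(f * f for f in forward))
    d = [f / norm for f in forward]
    # Project the eye-to-center vector onto the gaze ray.
    oc = [c - e for c, e in zip(center, eye)]
    t = sum(o * di for o, di in zip(oc, d))
    if t < 0:
        return False  # target is behind the viewer
    # Compare the ray's closest approach against the sphere's radius.
    closest = [e + t * di for e, di in zip(eye, d)]
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius * radius

# Looking straight ahead (+z); a menu button floats 5 m out.
print(gaze_hit((0, 0, 0), (0, 0, 1), (0, 0.2, 5), 0.5))  # True
print(gaze_hit((0, 0, 0), (0, 0, 1), (2, 0, 5), 0.5))    # False
```

In a real engine like Unity3D you’d use the built-in physics raycast against colliders, but the geometry is the same; a Daydream-style laser pointer just casts from the controller’s pose instead of the head’s.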

Tethered Desktop/Console VR — The Dream of VR Realized

Here we’re talking about a $350–400 head-mounted display (HMD) like the Oculus Rift or HTC Vive connected to a powerful desktop computer, or a Sony PSVR connected to a PlayStation 4. Not only are your head’s position and orientation being tracked, but you can get your hands into VR too: the HTC Vive and PlayStation VR ship with 6-DOF input controllers. A slew of OEM-powered “Windows Holographic” headsets are also slated to hit the market this year.

London Heist on the Playstation VR

The pros: you’ve got a discrete graphics card and a fast processor to make even more immersive experiences, and a total of 18-DOF to let the user interact with virtual worlds in a manner never before possible. Consider that the mouse gives you 2-DOF, a point often made by Philip Rosedale of High Fidelity. You can lean up close to that CG object on the table. Load a clip into an Uzi and unload it on gangsters on the streets of London. It’s quite beautiful, really.
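Where does 18 come from? It’s just the sum over fully tracked devices: 6 degrees of freedom (3 rotational + 3 positional) each for the headset and the two hand controllers.

```python
# Each fully tracked device contributes 3 rotational + 3 positional DOF.
DOF_PER_TRACKED_DEVICE = 3 + 3

devices = ["headset", "left controller", "right controller"]
total_dof = DOF_PER_TRACKED_DEVICE * len(devices)
print(total_dof)  # 18
```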

The cons: you’ve got a thick set of wires connected to quite an expensive computer. Wireless add-ons are here, but they’re pricey and far from pervasive. You’re also dealing with a limited user base. Analysts can argue about how many HMDs are going to sell in 2017, but mobile VR is going to far outnumber anything on the desktop and console side. Facts. Just as mobile VR platforms make the baseline VR experience accessible at home, brick & mortar VR-centric establishments (think The Void, Universal Studios or even a cyber cafe) will make high-end VR accessible outside of the home.

Untethered Standalone VR — Best of Both Worlds

What if you took the best mobile hardware components and used them to create a standalone, all-in-one VR headset, something that sits firmly between smartphone-powered VR experiences like Google Daydream and their more powerful desktop VR brethren?

Well, for starters, overheating/thermals won’t be a huge issue *cough* Gear VR *cough*. You wouldn’t needlessly be stacking dies on top of each other, as is commonplace when building a smartphone. You can utilize room in the entire headset to squeeze in a larger discrete GPU and a slew of sensors for positional tracking.

Oculus announced at OC3 this past year that they’re working on precisely such a device, codenamed “Santa Cruz.” AMD partner Sulon has already showcased this capability with their standalone VR headset, the Sulon Q. GameFace Labs has also demoed an Android-based contender that is compatible with Valve’s Lighthouse tracking. Clearly, all-in-one VR headsets would be far more attractive to the broader audience seeking to jump into VR; however, this is still an emerging category with no commercially available headset… not yet, anyway.

Conclusion — AR and VR drink from the same well, know your context and choose the best device for it

In summary, understand that Augmented and Virtual Reality drink from the same well and have their share of overlaps. The methods used to develop for them are largely the same. The lowest common denominator seems to be Unity3D, which has been used to build every type of experience out there. Higher-end engines like CryEngine and Unreal are currently reserved for desktop/console VR, but it’s only a matter of time until they scale to other devices, or the compute power in devices scales to support them.

As it stands there isn’t one catch-all option that’s inherently better than the rest. Eventually we’ll get to that all-in-one Oakley form factor, but we’re clearly not there yet.

Until then, choose the best hardware platform and medium to convey the experience you want and go out and build it. Consider your use case, context and budget. See which VR/AR device is best positioned to accomplish your vision.

Sleep easy knowing that whatever you choose to build — AR, VR or MR — it’s all spatial computing, and the lessons you learn in one subset will undoubtedly transfer to the others.

Click the heart ❤ below to recommend this article if you liked it

Wanna get in touch? Hit me up on Instagram or Twitter. Send over your VR/AR questions! Stop by our YouTube channel & Facebook page for slick VR content.
