HoloLens: Mixed reality, not mixed feelings
Simply put, it is pretty f*cking awesome!
Two HoloLens devices arrived in our Mirum New York office last week.
Sadly, soon after arriving, the devices were shipped to other Mirum offices that ordered them first.
As a self-confessed nerd*, the minute I got my hands on one I spent a solid hour playing with the device.
My reaction? “Wow!”
I remember trying the first iteration of the Oculus Rift many years ago and, at the time, I felt the definition was poor and the performance underwhelming. I also remember thinking that the Google Glass experience was a bit complex for such a small “payoff.” The Oculus Rift and Google Glass are quite different from HoloLens. Oculus (or Cardboard) is Virtual Reality, meaning it “cuts you away from the world around you.” Google Glass is Augmented Reality, which adds digital information to the actual world you are in.
HoloLens is Mixed Reality, so you can see the real world around you, but the virtual elements are “anchored” and their position is relative to your environment. (You can read more about these distinctions at Re:Code.)
Regardless of the type of reality you’re experiencing, the commonality with these devices — and the HoloLens is no different — is that you look silly wearing them. (After “Glassholes,” the “AssHolo”?)
Based on the hour I spent experimenting (admittedly a short amount of time), here are some of my personal observations of the HoloLens. Please note, this is not an in-depth review. I’m not covering all of the functionality, only discussing a few points. If you’re looking for a proper demo, there are lots of great ones on YouTube, such as this one at E3.
1. The interaction model
The overall interaction model is very intuitive, and after a quick tutorial you’re ready to go: use your head to point via a white dot in the middle of your field of vision, raise your finger for action, pinch to select, keep your fingers pinched to drag, open your hand to release — super simple.
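That gesture loop can be sketched as a tiny state machine. This is purely illustrative Python: the class, the gesture names and the event tuples are all invented for this post, not part of any HoloLens SDK.

```python
from enum import Enum, auto

class Gesture(Enum):
    PRESSED = auto()   # finger and thumb pinched together ("air tap")
    RELEASED = auto()  # hand open

class GazeAndTap:
    """Toy model of the gaze-and-air-tap loop: the white dot points,
    a pinch selects, holding the pinch drags, opening the hand releases."""

    def __init__(self):
        self.state = Gesture.RELEASED
        self.selected = None
        self.events = []

    def update(self, gesture, gaze_target):
        if gesture == Gesture.PRESSED and self.state == Gesture.RELEASED:
            # A fresh pinch selects whatever the gaze dot is on.
            self.selected = gaze_target
            self.events.append(("select", gaze_target))
        elif gesture == Gesture.PRESSED and self.selected is not None:
            # Keeping the fingers together drags the selection with the gaze.
            self.events.append(("drag", self.selected, gaze_target))
        elif gesture == Gesture.RELEASED and self.state == Gesture.PRESSED:
            # Opening the hand drops the object where it is.
            self.events.append(("release", self.selected))
            self.selected = None
        self.state = gesture
```

Feeding it pinch, hold, release produces a select / drag / release sequence — and that sequence is essentially the entire vocabulary, which is why it takes an hour, not a week, to feel fluent.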
You can place objects into your environment and move them around, and since the device constantly “scans” what’s around you and recognizes walls, tables and surfaces, you can put a virtual object on a table or on the floor, walk around it, make it bigger or smaller, and move it around. (Some demo videos demonstrate how you can create and revise objects virtually or collaboratively with other people.) After about an hour of playing, you feel like you’ve done it for years.
A new experience that mixes reality, virtual objects and the ability to move things around requires a new type of interface. When selecting objects and moving them around, a new paradigm comes into play: the controls (hidden by default, appearing only when an object is selected) are mapped onto a “cube” that surrounds every object, and they “follow” you as you move around it. This interface behavior feels completely natural and is very intuitive.
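To illustrate that “following” behavior, here is a toy function (invented for illustration, not HoloLens code) that picks which face of an object’s axis-aligned bounding cube points most directly at the viewer, i.e., the face where such controls would sensibly be drawn:

```python
def facing_face(viewer_pos, object_center):
    """Return the face of an axis-aligned bounding cube that points most
    directly at the viewer, named "+x", "-y", etc. Controls drawn on this
    face "follow" you as you walk around the object.
    (Hypothetical helper for illustration only.)"""
    dx, dy, dz = (v - c for v, c in zip(viewer_pos, object_center))
    axis, value = max(("x", dx), ("y", dy), ("z", dz), key=lambda a: abs(a[1]))
    return ("+" if value >= 0 else "-") + axis
```

As the viewer circles the object, the returned face changes, so the controls always end up on the side you are actually looking at — exactly the behavior the cube UI exhibits.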
2. A legacy of windows

The only real problem with the UI is that certain controls and elements of the interface are simply carried over from the legacy windowing interface you see on every OS X or Windows machine.
Windows are pretty straightforward “containers” for items, but the implementation of their controls in a 3D environment is not that effective: not only are the controls tucked into the corner of the window, where they feel small, but, most importantly, unlike the object behavior they don’t “follow” you, and in various instances you end up lost, needing to walk around the window to find them.
Not quite intuitive, and potentially problematic when interacting with multiple windows.
Another annoying interaction is keyboard input. It’s cumbersome and slow, so it’s good that voice commands are available. I haven’t tried them much, but they worked when I did.
3. Augmenting the real world
I went a little crazy putting virtual objects everywhere in the space. (I kept thinking how a person using the device after me might be overwhelmed by the clutter around my desk!) We’ve seen the great Minecraft or product-design demos, but even more amazing would be if these virtual objects became available beyond a small number of devices. What if you could place something or create a virtual environment and make it available to whomever you want? Think of it like Facebook privacy settings: Only Me, Friends, Everyone, etc., and choose how people can interact based on various privileges. Walking into an office or store, you’d find “objects” or content left by others.
Through your own device, you could enter worlds created by others, stepping into a specific physical space; worlds you could modify and play with, where some content would be visible only to the people you want to make it accessible to.
In addition to content-privilege settings, my guess is it would also require massive global hosting of every virtual object created, mapped to the location where it was created (e.g., building, floor, etc.). Yet it would also open up an amazing realm of possibilities for participation and serendipity for brands, institutions, or simply, and most importantly, between people — regardless of their location or relation to each other.
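As a concrete sketch of the privilege idea, assuming a coarse location anchor and the three Facebook-style visibility levels mentioned above (every name and field here is invented for illustration, not part of any existing service):

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    location: str    # coarse anchor, e.g. a building + floor id
    owner: str
    visibility: str  # "only_me" | "friends" | "everyone"

def visible_objects(objects, viewer, friends_of, location):
    """Objects the viewer's device would render at a physical location,
    filtered by each owner's chosen privacy level."""
    visible = []
    for obj in objects:
        if obj.location != location:
            continue  # anchored somewhere else entirely
        if (obj.owner == viewer
                or obj.visibility == "everyone"
                or (obj.visibility == "friends"
                    and viewer in friends_of.get(obj.owner, set()))):
            visible.append(obj)
    return visible
```

Walking into the lobby, a friend of the owner would see the “friends” and “everyone” objects but not the private drafts; the global service would then mostly be a (very large) index from location anchors to records like these.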
At a time when we have concerns about technology disconnecting us from the world and the people around us, ignoring the moment as we’re glued to our phones, or the dystopian vision of the Facebook conference with everyone (but the leader) wearing VR headsets, it’s great to see a technology that will unlock amazing new ideas, games and stories, a technology that brings magic and delight to connected experiences without disconnecting you from reality, face-to-face collaboration, or social engagement.
Oh, yes, and holographic Star Wars-style conversations too.
*Nerd Note: In 1982, I received my first computer, the Sinclair ZX81 (1K of RAM), and as you would expect, I wore a trench coat. In 1992, my art school thesis was actually a dystopian film about virtual reality. It was terrible!