
A UX Designer’s Foray into Virtual Reality

As a designer for digital products, I’m often asked, “So what are you working on these days?” I’ll reply with an overview of whatever web or mobile app I’m designing at that moment — easy enough to describe because, hey, we all use the web and devices loaded with apps. Since I started designing for virtual reality, though, my response has become provocative and, in my opinion, more interesting. It’s greeted by everything from blank stares, to over-the-top enthusiasm about the possibilities, to genuine concern about the societal implications this technology is likely to bring about. Regardless of my own ambivalence about virtual, augmented and mixed reality, and their unfolding impacts on society, technology and the way we consume media, the allure of their potential is hard to deny. I’m eager to learn what new experiences they’ll introduce, and how those will inevitably transform the digital design landscape.

First, a bit more about my background. I’m a product designer and self-described user experience generalist, having worn the hats of information architect and experience, interaction, and user interface designer. Currently, I’m working at a major motion picture production company as a Senior User Experience Designer, a role that has me leading everything from discovery and definition to design and testing. In my time here I’ve worked on a variety of both enterprise and consumer-facing web and mobile applications.

Last summer I transitioned into the role of lead experience designer for a loosely defined entertainment-focused VR application. I didn’t feel that I was particularly qualified, considering my lack of experience, knowledge, or even enthusiasm for this new medium. A while back my interest was piqued, though, by the epic failure that was Google Glass, and the uproar it caused when a beta-tester made the mistake of sporting a pair in the seediest of bars in San Francisco’s Lower Haight, my neighborhood at the time, inciting a heated exchange that went viral.

Remember Google Glass?

The prospect of overlaying a digital display onto the real world was fascinating to me. I recalled this latent intrigue and figured this was a great opportunity. Besides, I rarely regret stepping out of my comfort zone to confront new challenges, and in this case I might actually learn some new tricks.

This is about my foray into designing for a new dimension, virtual reality. I imagine I’m not the only designer out there dipping a toe in VR. I hope that by sharing some of my own insights from my ongoing experience, I’m able to help guide, support — or at the very least, amuse — fellow designers who are thinking about taking the plunge.

The UI Paradigm Shift

I’ve long been accustomed to navigating, as both designer and consumer, the flat rectangular windows that are our beloved screens.

It’s amazing how efficient we’ve become at managing so many processes, tasks, and applications across multiple windows and tabs within such finite real estate. We’re able to do this because our interfaces use a familiar layering metaphor that simulates depth, expanding the canvas and allowing users to establish a visual spatial system to help them understand how things are arranged. This method isn’t foolproof, however, so other cues like size, contrast, color, and rank are used as aids to denote hierarchy. As savvy as we’ve become, screens remain limiting in terms of spatial organization. This became clear when I started to think hard about the differences between the interfaces of today and those of our near future.

A New Dimension

VR adds the z-axis and introduces a third dimension, tremendously altering the UI as we know it. Screen edges that once restricted us no longer exist; instead, we’re afforded limitless space and depth in every direction. In VR, we don’t need to borrow familiar interface metaphors because we’re able to tap into the authenticity of physical space, which provides a seemingly infinite surface upon which to arrange objects. By utilizing the space and depth provided, users should instinctively grasp the organization of things in space because they’re able to map their visual spatial systems to their real ones. Taking this a step further, we can more accurately anticipate how people will engage with this new UI because humans are highly attuned to interacting with their surroundings after thousands of years of training. We’ve developed a series of triggered responses that inform the way we connect with our world, and as a result, people share a reliable intuition for natural interaction. This means that when dreaming up ways to present a UI to users in VR, we can lean on the fundamental aspects of human perception and cognition to guide our design thinking. The key is to design interfaces and interaction patterns so intuitive that users’ first guesses are correct.
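To make this concrete, here is a minimal sketch of a depth-based layout using three.js (my choice of tool for illustration, not the article’s; the eye height, radius, and angular spacing are illustrative assumptions): a row of UI cards arranged along an arc at a fixed, comfortable distance from the user, each rotated to face the viewer.

```typescript
import * as THREE from 'three';

// Arrange a row of UI cards along an arc around a user seated at the
// origin. Depth replaces the layering metaphors we rely on for flat
// screens: every card sits at the same comfortable distance and faces
// the viewer directly.
const EYE_HEIGHT = 1.6; // metres; assumed average eye height
const RADIUS = 2.5;     // metres from the user; illustrative
const SPACING = THREE.MathUtils.degToRad(15); // angular gap between cards

function layoutCards(scene: THREE.Scene, count: number): void {
  const start = -((count - 1) / 2) * SPACING; // centre the arc on "forward"
  for (let i = 0; i < count; i++) {
    const angle = start + i * SPACING;
    const card = new THREE.Mesh(
      new THREE.PlaneGeometry(0.8, 0.5),
      new THREE.MeshBasicMaterial({ color: 0x2244aa })
    );
    // Place the card on the arc, then rotate it to face the user's head.
    card.position.set(
      RADIUS * Math.sin(angle),
      EYE_HEIGHT,
      -RADIUS * Math.cos(angle)
    );
    card.lookAt(0, EYE_HEIGHT, 0);
    scene.add(card);
  }
}
```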

The Evolution of User Context

Another major shift that revealed itself in my early days is user context. I think of the way we use our devices now as selective involvement. We’re able to toggle between applications, windows or tabs with ease. We can look up and away from our screens anytime. We’re easily pulled away by even the most minor interruption, like a text, email or nearby conversation, no matter how focused we are on the task at hand, and regardless of the device we’re using. We have complete control over our experiences, choosing how and when to use our tech. VR is a whole new ballgame. The user is fully immersed in the virtual world: their stream of consciousness now lives inside the headset and the physical world fades away, no longer providing that safety net of escape. The audience is by definition captive, lacking the flexibility to step away even for a second. Roles are reversed, and the experience is in command now. Understanding this change in context is key when thinking about how our users are experiencing our content.

Figuring It Out as We Go

My understanding of these fundamental aspects of VR helped, but I still felt ill-equipped as a designer in this space. Not surprisingly, I faced challenges and learned a great deal in the process. I spent ample time seeking guiding principles for interface and interaction design in VR, only to discover that, aside from the occasional brief write-up from a pioneering content creator, headset manufacturer, or technology company entering the space (Google, Oculus, Magic Leap), there wasn’t much out there. I realized that up to this point I’d had it made: I’d been able to lean on established, tried-and-true design paradigms to help guide my approach and address any problems I encountered. These don’t exist for VR. While the technology has advanced rapidly in recent years, there’s clearly a lot of work to be done to design and deliver great experiences in this medium.


In these early days of VR, interaction methods are varied and often unpredictable. For screens, we’re able to confidently predict the input modalities in use (i.e., keyboard, mouse, finger) depending on device type or screen size. VR introduces the potential for input ambiguity. There are headset viewers designed for mobile phones (like Google Cardboard), but these come in various shapes, sizes, and feature sets, such as input buttons and head straps. On the higher end you’ve got tethered headsets like the Oculus Rift and HTC Vive. The Rift ships with an Xbox One game controller, the Oculus Remote and, for an additional charge, Touch controllers that use hand-tracking. The Vive comes with hand-tracked input wands — desirable because they enable gestural interaction, but we can’t be sure the user will always have them. The breadth of the input spectrum makes it difficult to conceptualize your app’s interaction scheme, especially if you’re not targeting a single platform, since different inputs will mean different design decisions.
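One way to cope with that ambiguity is to detect capabilities per session rather than per platform. Below is a minimal sketch using the web’s WebXR API (a stand-in for whatever native SDK you’re actually targeting, and assuming WebXR type definitions such as @types/webxr are available): the app picks an interaction scheme from the input sources the session reports, and re-evaluates when they change.

```typescript
// Pick an interaction scheme from the session's reported inputs:
// tracked controllers where available, gaze as the fallback.
type InteractionScheme = 'pointer' | 'gaze-only';

function pickScheme(session: XRSession): InteractionScheme {
  // Each connected input (controller, hand, or the headset itself)
  // reports how it targets objects in the scene.
  for (const source of session.inputSources) {
    if (source.targetRayMode === 'tracked-pointer') {
      return 'pointer'; // wands or Touch-style tracked controllers
    }
  }
  return 'gaze-only'; // no pointer hardware: fall back to head gaze
}

// Inputs can appear or vanish mid-session (controllers sleep, batteries
// die), so re-evaluate whenever the set of sources changes.
function watchInputs(
  session: XRSession,
  onChange: (scheme: InteractionScheme) => void
): void {
  session.addEventListener('inputsourceschange', () =>
    onChange(pickScheme(session))
  );
}
```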

Gaze-based interaction

You can choose to design for the lowest common denominator and plan for a strictly gaze-based interaction scheme, or you can restrict functionality to users with more advanced input devices. But what if you want your app to be inclusive across platforms while taking advantage of more advanced input modalities when available? Also, wearing a headset creates problems of its own: users accustomed to glancing down at their input device can no longer do so. Designing with this in mind will require planning and will inevitably increase scope.

With goggles on, users are blind to their physical world
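For reference, the lowest-common-denominator scheme mentioned above is typically implemented as a gaze cursor with a dwell (or “fuse”) timer: stare at a target long enough and it activates. Here’s a minimal sketch in three.js; the 1.5-second dwell time is an assumption to be tuned in testing, not a standard.

```typescript
import * as THREE from 'three';

// Gaze-based selection: cast a ray from the centre of the user's view
// each frame and activate whatever they have stared at long enough.
const DWELL_MS = 1500; // assumed dwell time; tune with user testing

const raycaster = new THREE.Raycaster();
let gazeTarget: THREE.Object3D | null = null;
let gazeStart = 0;

function updateGaze(
  camera: THREE.Camera,
  targets: THREE.Object3D[],
  now: number, // e.g. performance.now(), passed in from the render loop
  onSelect: (target: THREE.Object3D) => void
): void {
  // (0, 0) in normalized device coordinates is the centre of the view.
  raycaster.setFromCamera(new THREE.Vector2(0, 0), camera);
  const hit = raycaster.intersectObjects(targets, false)[0]?.object ?? null;

  if (hit !== gazeTarget) {
    gazeTarget = hit; // gaze moved to a new target (or to nothing):
    gazeStart = now;  // restart the dwell timer
  } else if (hit && now - gazeStart >= DWELL_MS) {
    onSelect(hit);    // fire once, then require a fresh dwell
    gazeTarget = null;
  }
}
```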

User Comfort

Of course, we always want to ensure text is legible, contrast is adequate, and so forth, but VR introduces new challenges with regard to physiological and environmental considerations. Users will invariably expect their virtual worlds to behave much the same way as their actual worlds. A mismatch between what a user anticipates and what actually happens may result in motion or simulator sickness. For example: if you’re on a virtual roller coaster and you accelerate quickly, your brain is preparing your body to feel that increased speed, but your body never feels it. Users can look forward to dizziness and nausea, so make sure your trash can’s accessible in your user testing sessions.
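The article doesn’t prescribe a fix, but one widely used mitigation in shipping VR titles is to avoid sustained artificial motion altogether: for example, replacing smooth rotation with discrete “snap” turns, so the eyes never report continuous movement the inner ear can’t confirm. A minimal sketch in three.js, with an assumed 30° step:

```typescript
import * as THREE from 'three';

// Comfort-mode turning: rotate the whole camera rig in instant, discrete
// steps. With no in-between frames there is no sustained visual motion
// to conflict with the user's (stationary) vestibular sense.
const SNAP_ANGLE = THREE.MathUtils.degToRad(30); // assumed step size

function snapTurn(rig: THREE.Object3D, direction: 1 | -1): void {
  rig.rotation.y += direction * SNAP_ANGLE;
}
```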

Anxieties, Phobias and Perceived Threats

People may experience distress from their surroundings, such as small or large open spaces, heights, brightness or darkness; perceived threats like scary creatures or sharp objects may also be unsettling. Relative scale is another factor that can impact a user’s comfort: the larger one is, the more powerful one may feel, and vice versa. For the first time ever, we designers have complete control over our users’ surroundings, so it’s paramount that we think carefully about how we’re presenting them to a varied user base.


As we direct users to physically move their bodies, ergonomics becomes another factor to consider. We’re already familiar with designing accessible, reasonably comfortable gestures; VR takes this a step further. To take advantage of the limitless space available, users will be looking all around themselves and, depending on the type of hardware, even exploring on foot. We can’t predict a user’s surroundings, or whether they’ll be seated or standing. If seated, their chair may allow them to swivel or it could be fixed. Here we’re faced with the potential for neck strain or fatigue from standing, gesturing, or holding a gaze in an unusual direction. We need to reduce discomfort by predicting what might cause it within our experiences.

Graphics I prepared from my preliminary research on the standard field of view for VR
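That field-of-view research can also be turned into a simple design-time guard. The sketch below (three.js again; the 30° half-angle is an assumed comfort threshold drawn from my own reading, not a published standard) flags interactive elements that sit outside a comfortable viewing cone, so frequently used controls aren’t placed where they force neck strain.

```typescript
import * as THREE from 'three';

// Returns true if the target lies within an assumed "comfortable" cone
// around the direction the user's head is currently pointing.
const COMFORT_HALF_ANGLE = THREE.MathUtils.degToRad(30);

function isComfortablyPlaced(
  camera: THREE.Camera,
  target: THREE.Object3D
): boolean {
  const forward = new THREE.Vector3();
  camera.getWorldDirection(forward); // where the head points right now

  const toTarget = target
    .getWorldPosition(new THREE.Vector3())
    .sub(camera.getWorldPosition(new THREE.Vector3()))
    .normalize();

  // Compare the gaze direction with the direction to the target.
  return forward.angleTo(toTarget) <= COMFORT_HALF_ANGLE;
}
```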

Adapting the Design Process

Perhaps one of the greatest challenges I’ve faced thus far has been actually doing the work and then presenting it in a meaningful way to my team of artists, stakeholders and engineers. My usual workflow consists of whiteboarding or sketching with pen and paper, then translating concepts into Sketch or a prototyping tool like Axure, InVision or Principle. Sketching is, and will always be, a crucial part of any design process because it allows for rapid iteration and fuels creativity, particularly in a collaborative setting. Moving into Sketch or a similar design application is the logical next step to flesh out design ideas and apply styling. Prototyping brings them to life. None of these steps go away, but VR requires additional effort.

Designing in — and for — Context

The problem is that when designing in 2D for VR, there’s a contextual misalignment between the design setting and how the work will actually be consumed. Also, now that there’s depth, we’re tasked with determining properties for new variables like distance and sizing relative to the user’s field of view. Portions of the UI may appear distorted or blurry at different distances, posing rendering issues for important components like text. It’s imperative that we find a way to get our designs into a headset so that we’re able to see them in the appropriate environment, but currently there’s no good way to do that. I’ve been lucky to work with some talented engineers who hacked together workarounds, none of which I would have been able to figure out on my own. The only method with any real appeal or effectiveness involves placing our 2D-rendered visual assets into a PSB template (basically a large PSD) mimicking a 360° view, then dropping this into an internally developed Oculus app that lets you see your comps in a mock VR environment. This has worked fairly well, especially in determining sizing, but still feels like little more than an inventive hack. Without the help of others I’d have been up a tree.

A Photoshop PSB template we used as a workaround
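If you don’t have engineers on hand to build an internal viewer, a comparable quick-and-dirty preview can be hacked together on the web. This sketch follows the standard three.js panorama technique: export the comp from the PSB as an equirectangular image, then map it onto the inside of a sphere surrounding the camera (the file URL and 10-metre radius are placeholders).

```typescript
import * as THREE from 'three';

// Wrap a 360° comp around the viewer so 2D design work can be judged
// in context: sizing, distance, and distortion all become visible.
function makeCompPreview(url: string): THREE.Mesh {
  const geometry = new THREE.SphereGeometry(10, 64, 32);
  // Invert the sphere on x so its faces point inward and any text in
  // the comp reads the right way around.
  geometry.scale(-1, 1, 1);

  const texture = new THREE.TextureLoader().load(url);
  return new THREE.Mesh(
    geometry,
    new THREE.MeshBasicMaterial({ map: texture })
  );
}

// Usage: add the preview to the scene with the camera at its centre.
// scene.add(makeCompPreview('comps/home-screen-360.png'));
```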

The way we present familiar interface components has also changed. I’ve found that a UI comprising a series of flat panels works due to its familiarity, but doesn’t feel quite right in a virtual reality setting. 3D objects will most likely populate the surrounding space, so paper-thin panels feel out of place. Even when a panel is curved so that every part of it is equidistant from the user’s point of view, it still creates a level of contrast between the panel and the rest of the world that feels unnatural.

Flat panels in space may not be optimal in 3D
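For what it’s worth, the concave panel described above is cheap to prototype: in three.js it’s just an open-ended cylinder segment rendered from the inside. The radius and arc width here are illustrative.

```typescript
import * as THREE from 'three';

// A curved panel: a slice of a cylinder whose axis passes through the
// viewer, so every part of the surface is equidistant from the eye.
function makeCurvedPanel(
  radius = 2.5,      // metres from the viewer; illustrative
  arc = Math.PI / 3, // 60° of arc; illustrative
  height = 1         // panel height in metres
): THREE.Mesh {
  const geometry = new THREE.CylinderGeometry(
    radius, radius, height,
    32,           // enough radial segments for a smooth curve
    1, true,      // open-ended tube, no end caps
    -arc / 2, arc // just the slice of the cylinder we need
  );
  // BackSide: a viewer at the cylinder's axis sees the inner surface.
  // Rotate or reposition the mesh so the slice faces the camera.
  const material = new THREE.MeshBasicMaterial({ side: THREE.BackSide });
  return new THREE.Mesh(geometry, material);
}
```

Even so, as the next paragraph argues, a curved billboard only narrows the gap; interfaces that carry real geometry and depth fit the medium better.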

Since interface elements in VR ought to feel natural and as though they’re truly a part of the environment, we should strive to add geometry and depth to our interfaces. But how to do this without any experience with 3D modeling? If you’re like me, you, too, lack the tools to efficiently and effectively design the ideal interface and associated interactions.

New (to Me) Software

I first looked into 3D modeling and animation software hoping to find a simple solution that I could pick up quickly. I found that while there are several robust applications available, the learning curve is pretty steep for someone without a background in 3D. I spent some time learning the relatively lightweight tool SketchUp and, despite some success, my workflow ultimately couldn’t keep up with my deliverables schedule. Other popular applications, like Maya and Cinema 4D, are more comprehensive but too complex if you’re new to the game. Unity and Unreal Engine are where all the magic happens. These allow you to design directly in your headset, but now you’re really getting into the weeds. Realistically, you’ll need to dedicate a substantial amount of time outside your workday to get comfortable in any of these tools. Since I haven’t had that luxury lately, I’ve stuck to a familiar workflow augmented by the aforementioned workarounds.

However you structure your workflow, I recommend trying and testing as early and as often as possible. Expose yourself to as many immersive experiences as you can in order to get the appropriate context. We have a saying on our team: “In order to get your head around VR, you need to get VR around your head.” This applies especially when evaluating designs.

Looking Ahead

In order to adapt to this changing environment and successfully create 3D products, design teams must expand to involve other specializations. Without question, we’ll need to include people from varied backgrounds in art, gaming and cinema, to name a few. Concept artists will be called upon to set the stage. 3D graphics designers and animators will be tasked with creating and bringing to life the necessary assets. A new breed of engineers will be asked to step up and build these experiences. Establishing these new partnerships will be a vital component of rolling out digital products in this medium.

That we’ll need to expand our skill sets and learn new tricks in order to thrive shouldn’t be a foreign concept to us designers, given how technology’s rapid evolution and the emergence of newer, faster and more varied devices, usage contexts and design trends have already challenged us. However, this next shift is of a greater magnitude and into a space that may be less comfortable. It’s a willing leap into the unknown that may be frightening, but if this isn’t exciting, I don’t know what is. And, on the bright side, we’re all in this together.

The Future is Bright

I recommend approaching it with an open mind. Virtual reality is still a young and immature medium for which design paradigms aren’t yet established. Designers are still figuring out what does and doesn’t work. There are no clear standards, so I encourage getting creative and owning this freedom to try new things without penalty. Be a pioneer. Besides, anything is possible in this domain.

There’s a colossal amount of learning that will need to take place. This isn’t to say that all the work we’ve done up to this point is wasted. In fact, I think designers are very well positioned to stride forward with confidence. Over the last 40-odd years of advancing the digital landscape, we’ve learned a whole lot about human-computer interaction, the intricacies of the graphical user interface, and the abstract ways humans interact with digital devices. We’ll take those learnings with us into this brave new world and improve upon them. Our users have hardwired expectations of how interface elements work based on their experience in 2D, so it would be foolish not to leverage familiar design paradigms and interaction patterns, at least for now. Our next step is figuring out how to cultivate the sense of presence and immersion needed to usher in new and exciting interaction and interface methods. In the end, the mission is the same for all of us — deliver delightful and meaningful experiences through quality design. I look forward to what’s in store for all of us.
