Low Poly Valley by rmartone

Realism, Expectations, and the Fidelity Contract in VR Design

Adrienne Hunter
Virtual Reality Pop
8 min read · May 25, 2017

--

Virtual reality is unlike any other medium that designers work with. For decades, we’ve been creating software experiences for external devices with finite screen sizes and strict input modalities. Keyboards, touchscreens, and mice translate our physical actions into interactions on a static display. Move a mouse pointer across the screen and click to press a simulated button.

Presenting a sense of realism in our software has largely been the domain of modern-day interface design patterns like skeuomorphism and Material Design. But in VR, we have a screen with no edges, a wide variety of input devices, and a limitless virtual interface. What does it mean for VR to have a sense of realism? How is it that people can be convinced that VR is real in the first place, and what does it mean for the user experience?

Real reality, simulated reality

Much like in real life, everything you can see in VR is assumed to be real by your users, and therefore interactable, able to be experienced somehow… but why is that? This strong tendency to assume realism is a neurological impulse: our brains have spent a lifetime being trained to accept reality as a 3D environment we can interact with (which is essentially what a VR experience is) and to learn the rules that govern our present reality. Simulated realism could arguably be the overall goal of VR; it’s even named appropriately: “virtual reality”.

There are all kinds of assumptions that our brains have about reality that we bring into VR with us: shadows show us which directions are up and down, hollow objects are capable of holding liquids, bigger usually means heavier, and so on. Our brains make assumptions about whether we can interact with things based on how they look and what we know about objects that look like them. If something is small, I can probably pick it up. If I see a lamp, I already know that I wouldn’t need to hit it very hard to knock it over, based on everything I know about lamps (and the experiences I’ve already had accidentally knocking them over). All of these expectations our brains have about physical spaces and interactions are based on a lifetime of experiences working within the laws of real reality.

What happens when we put someone in VR? They invoke expectations based on what they see, and they start making assumptions about how the new world around them works. The ground is flat and immobile beneath us, a switch on the floor lamp can be pressed and likely turns the light on and off, the fuse I see on this bottle rocket can be lit to launch it into the sky. In VR, the environment is the interface, and the interface is the environment. The line between the two is a matter of design.


Breaking expectations and the fidelity contract

Let’s say I am in VR, standing at the shoreline of a very realistic-looking lake, and I spot a small rock on the ground that looks perfect for skipping. When I reach down to pick up that rock, I find out that it’s actually just a lifeless part of the ground’s 3D geometry; I can’t pick it up, and in fact I can’t interact with it at all. I begin to wonder, can I pick anything up off the ground at all? I try to pick up a few more rocks, and even though they might visually look like I can pick them up — everything I already know about rocks that look like this tells me I should be able to pick them up — I can’t interact with them. As I try again and fail again, I’m slowly learning that rocks can’t be picked up. Eventually, after repeated failed attempts, I’ll decide that the same rule applies to every rock I see, and I will stop trying to pick rocks up completely.

This is an example of what happens when a user’s mental model (their expectations and assumptions about the world) is applied to a virtual environment that breaks and then reshapes their expectations. Preexisting beliefs about rocks are challenged by this new world where rocks can’t be picked up, and so we adjust our understanding of what’s possible in that virtual space as we learn how this particular world works. If the world acts consistently, we will quickly learn what’s possible and not possible through experimentation.

Consistency in world building is one of the biggest ways we can keep virtual spaces feeling plausibly real. What happens if we turn just one of the many rocks on the ground into an object that I can pick up and throw? Doing so breaks the rule I learned earlier about how rocks work in this VR experience. It’s an example of “breaking the fidelity contract,” a concept that Dr. Kimberly Voll has introduced and written about very thoughtfully in the context of VR games.

I consider the fidelity contract to be one of the design heuristics of VR. Per Dr. Voll, the fidelity contract states that the design of a VR experience should maintain “perceived consistency in how that virtual world works.” In other words, all rocks should behave the same: either you can pick them up, or you can’t. The fidelity contract can apply to any and every aspect of the virtual environment, and it directly impacts cognitive load.
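As a loose illustration, here is a minimal sketch of one way to honor that contract in code (TypeScript; the names SceneObject, Interaction, and interactionRules are hypothetical, not taken from any real VR SDK): interactability is decided once per kind of object, never per individual instance, so every rock in the scene obeys the same rule.

```typescript
// Hypothetical types for illustration only -- not from any real VR SDK.
type Interaction = "pickUp" | "none";

interface SceneObject {
  kind: string; // e.g. "rock", "lamp", "bottleRocket"
  position: [number, number, number];
}

// The fidelity contract expressed as data: interactability is defined
// once per object kind, never per individual instance.
const interactionRules: Record<string, Interaction> = {
  rock: "pickUp", // every rock can be picked up...
  lamp: "none",   // ...and no lamp can.
};

function interactionFor(obj: SceneObject): Interaction {
  // Unknown kinds fall back to "none", so even unanticipated objects
  // behave consistently instead of surprising the user.
  return interactionRules[obj.kind] ?? "none";
}

const shorelineRock: SceneObject = { kind: "rock", position: [1, 0, 2] };
console.log(interactionFor(shorelineRock)); // "pickUp"
```

Because the rule lives in one place, it becomes much harder to ship a scene where one special rock quietly behaves differently from all the others.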

An inconsistent rule like the one in our rock example puts cognitive load on users who are trying to mentally model the virtual world around them. From my perspective, since I came into the lake experience with preexisting expectations about reality — and you’re breaking them! — the least you can do is break them consistently, so I can build a reliable new mental model that helps me start making educated guesses about how your VR experience works.

If you come from web or mobile design, you’re already familiar with this phenomenon: if all of the links on your entire site are blue with no underline, and then you make the H1 of one page blue, some users will try to click on it thinking it’s a link to somewhere important. Being able to build mental models and then rely on them is the backbone of product usability. Don’t make me think hard about whether I can pick something up; that unnecessary cognitive load leads to irritation at best, or to quickly moving on to the next VR experience at worst. (I’ve written about specific tactics for managing cognitive load in VR previously.)

Presence, place illusion, and plausibility illusion

In the field of VR design, there’s a lot of chatter about VR giving the user a sense of presence. If you ask five VR designers what presence is, you might get six answers back, but for the sake of clarity: presence is the feeling of being in a place depicted by a virtual environment, even though we know we’re not actually there. Feeling presence is what happens when users buy into what Mel Slater calls the place illusion in a 2009 paper exploring why users treat VR environments as if they were real. It’s accepting the illusion of “being there,” in as many ways as our senses can convey a feeling of being in a virtual place.

Because the place illusion relies on sensory stimulation in order to be accepted, it’s constrained by our VR hardware and the sensorimotor input/output modalities it provides: visual, audio, haptic, and tracking. The very design of your VR environment impacts presence — the way it looks and sounds, how parts of it feel in your user’s hands, and the sensation of physically moving through 3D space that tracked VR hardware provides.

Determining whether users in your VR experience are feeling presence is tricky. Slater notes that the feeling of presence in VR is an example of qualia — subjective sensory experiences that can’t be directly measured. Still, we can assess presence indirectly through research, using questionnaires or observing test users’ “physiological and behavioral responses.”

Along with place illusion, we have the plausibility illusion, which is “determined by the extent to which the system can produce events that directly relate to the participant, the overall credibility of the scenario being depicted in comparison with expectations.” In other words, Slater is saying that the plausibility of the events happening in the virtual world around us is directly tied to their credibility as judged against our expectations. If we’re using a Samsung Gear VR and load up a VR experience that forces us to use gaze-and-tap as an abstraction of picking objects up, that’s a far cry from a plausibly realistic interaction. (I might argue whether that experience is even VR at all, but that’s a debate best saved for two beers into a VR conference happy hour.)

Slater argues that users who accept these two illusions will have a “realistic” experience, even when the environment is cartoony and we’re allowed to teleport from one room to another by walking through portals. Though we may not follow the rules of reality perfectly — gravity is turned down, every surface ripples like water, objects bounce off each other at ridiculous speed — as long as we’re consistently following our own rules, plausibility can be attained, and users will respond realistically (and maybe even predictably) to the world we invented. This makes a strong argument for adhering to the fidelity contract, because of the effects it can have on the way users experience your virtual environments.
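To make “consistently following our own rules” a little more concrete, here is a rough sketch in the same spirit as before (TypeScript again; WorldRules, Body, and the specific numbers are invented for this example, not taken from any engine). The invented physics live in one shared config that every object reads, so the world can be strange, but strange in exactly the same way everywhere.

```typescript
// Hypothetical world-rules config, invented for this illustration.
interface WorldRules {
  gravity: number;     // m/s^2 -- deliberately "turned down" from Earth's 9.81
  restitution: number; // bounciness of collisions, 0 (dead stop) to 1 (perfect bounce)
}

// Declared once and shared: the world may be unrealistic, but it is
// unrealistic in the same way for every object the user encounters.
const rules: WorldRules = { gravity: 2.0, restitution: 0.95 };

interface Body {
  positionY: number;
  velocityY: number;
}

// Every body is stepped with the same shared rules; there are no
// per-object overrides that would quietly break the fidelity contract.
function step(body: Body, dt: number): void {
  body.velocityY -= rules.gravity * dt;
  body.positionY += body.velocityY * dt;
  if (body.positionY <= 0) {
    // Hit the floor: bounce back up using the world's single restitution value.
    body.positionY = 0;
    body.velocityY = -body.velocityY * rules.restitution;
  }
}
```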

Building better VR user experiences by consistently meeting expectations

The fidelity contract not only helps create plausibility, but also improves the usability of our VR products by making them feel more realistic. We reduce cognitive load by applying rules consistently, which means our users…

  1. Have more mental capacity to think about the decisions they have to make while inside our experiences, and
  2. Are less likely to get frustrated and quit, or to come away upset from a negative experience.

Adhering to the fidelity contract means taking our users’ incoming expectations into consideration in the design process, so that we give our VR experience the best possible chance of plausibility — whether that means adhering to reality’s rules by default, or consistently following the new rules we teach our users as they interact with our virtual world.

In the course of talking about plausibility and virtual environments feeling realistic, we’ve also unintentionally begun making a case for incorporating more physical and behavioral realism into our VR experiences in general, on the premise that we’re meeting user expectations about how virtual environments work. Since those expectations are being fed by past experiences and mental models formed in real reality, it makes sense to pursue a certain level of realism if it means increased plausibility, realistic-feeling interactions, and better usability.

But how exactly do we create VR experiences that incorporate realism? For one, we’re not holding controllers in real life, we’ve got hands with tons of fingers that do all kinds of things like grip and point and scratch. Should we give our users virtual hands holding virtual controllers? Or can we just show them hands and assume they’ll be able to figure it out?

It can seem impossibly complicated to design VR interactions given odd constraints like input devices, real-life environment limitations such as walls or chairs, and the limitless possibilities of a virtual place that can be anything at all. We might end up sticking to conservative design decisions and copying interactions from other VR experiences. There may or may not be a laser pointer and a lot of floating 2D panels on a grid. Maybe you end some nights staring at the ceiling and wondering if you should have stuck to designing websites.

It all rolls up to one big, scary question: how do we design interfaces that are native to VR and also meet users’ expectations about how VR works?

Next time, we’ll tackle this question, and hopefully take a few steps toward answering it.
