Designing for the Human Body in XR

Lauren Bedal
Virtual Reality Pop
9 min read · Nov 16, 2017

Those of us working in design are seeing fundamental changes in our relationship with the digital world. One of these changes is the shift from mobile computing to spatial computing.

In the era of mobile computing, our devices are extremely portable, yet we are confined to a limited amount of real estate on our screens. Because of this limited real estate, our interactions have become primarily touch-based and therefore extremely limited compared to the input potential of our whole body. Our gesture-based interactions are essentially reduced to the ways a fingertip taps and slides across a flat piece of glass.

Mobile to Spatial Computing

Spatial computing moves beyond the notion of a contained interface. This era of computing expands our potential for input and allows us to use the space around us as a ‘playground’ for interacting with technology. Advancements in computing (AI, machine learning, rendering, and displays) coupled with more recent interaction methods (e.g. gesture and voice) are making this possible. With technology blended into and embedded in our physical environment, our whole world becomes our interface.

A Move Towards Body-centered Design

Spatial computing signifies a return to a more harmonious relationship with technology. From subconscious inputs (EEG, heart rate, galvanic skin response, blink rate, posture, body language) to fully aware and engaged inputs (hand gestures, full-body gestures), we as designers are presented with a new spectrum of interaction modalities using space as our medium. With the market for gesture interaction alone projected to reach $18.98 billion by 2022, interaction with the body is becoming more ubiquitous, cultivating a new way of interacting with and relating to digital information.

So how does this impact our work as designers?

Spatial computing is, first and foremost, an embodied experience. It places the body, as an input mechanism, front and center, letting us experience the world in an active and practical way. This new medium requires an understanding of more than pixels on a screen. For interaction designers, it demands an understanding of the following:

  • The human body as input. (Body)
  • The human body in a three-dimensional world. (Space)
  • The human body in a three-dimensional world, over time. (Time)

These new demands are particularly interesting to me considering my background. In addition to my practice as an interaction designer, I also have a background in dance and choreography. These dimensions (Body, Space, and Time) are foundational to understanding and creating art with movement. If we look at other existing fields of study, designing for XR shares strong similarities with dance and choreography across these three dimensions.

From my background as a dancer and choreographer, I believe there is a wealth of knowledge in the field of dance that can inform the interaction design practice as we transition into designing for XR. Let’s look at some main concepts from dance and how they relate to spatial interaction design: Body, Space, Shape, and Effort.

Most terminology below is derived from a framework called Laban Movement Analysis, developed in the early 1900s by Rudolf Laban. He created a system to analyze, describe, and document human movement. This rigorous framework was not only utilized within the field of dance, but also within businesses and factories.

Body & Space

Dancers have a keen awareness of their physical bodies in space. At a basic level, a dancer understands their own personal space, otherwise known as their kinesphere. A kinesphere refers to the space within reach without changing one’s place. The kinesphere has also been referred to as the gestural space, ‘body space’, or ‘work space’ in the field of human factors and ergonomics.

Google’s Tilt Brush offers a great visual representation of a kinesphere. A user is free to explore a full range of movement possibilities in a three-dimensional environment while receiving rich visual feedback through different colors, textures, and shapes.

Google Tilt Brush

The kinesphere represents our ‘personal space’ but can be broken down more granularly into the near-reach kinesphere (actions close to the body), mid-reach kinesphere (actions about an elbow’s distance away from the body), and far-reach kinesphere (reaching to the limits of the kinesphere), depending on the type of movement performed. It can also be broken down by low-level, mid-level, and high-level movement. Interactions within a user’s kinesphere include things such as full-body poses, using virtual wearables, using tools, gesture prompts, gesture activities, and direct manipulation of objects.

Zones of the kinesphere
Spectrum of interactions within the kinesphere
Sitting & Standing Kinesphere Zones

For designers, the kinesphere represents a new canvas for interaction. Checking a virtual watch for notifications or using your whole body to paint in Tilt Brush hints at a new range of involvement of the body. Different contexts imply a certain degree of interactivity of the body; interaction designers can begin to understand how, when, and to what degree gestural activity can be utilized and how to communicate these new types of interactions to users.
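To make this concrete, here is a minimal sketch, in TypeScript, of how a tracked hand position might be classified into kinesphere zones and levels. The thresholds and helper names are illustrative assumptions, not ergonomic standards:

```typescript
type Vec3 = { x: number; y: number; z: number };
type ReachZone = "near-reach" | "mid-reach" | "far-reach";
type Level = "low" | "mid" | "high";

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Classify a tracked hand position relative to the user's chest and arm length.
function classifyKinesphere(
  hand: Vec3,
  chest: Vec3,
  armLength: number
): { zone: ReachZone; level: Level } {
  // 0 = at the body, 1 = full arm extension (cutoffs below are illustrative).
  const reach = distance(hand, chest) / armLength;
  const zone: ReachZone =
    reach < 0.35 ? "near-reach" : reach < 0.7 ? "mid-reach" : "far-reach";

  // Level from hand height relative to the chest (y-up coordinates assumed).
  const dy = hand.y - chest.y;
  const level: Level = dy < -0.25 ? "low" : dy > 0.25 ? "high" : "mid";

  return { zone, level };
}

// Example: checking a virtual watch lands in the near-reach, mid-level zone.
console.log(classifyKinesphere({ x: 0.1, y: 1.2, z: 0.15 }, { x: 0, y: 1.3, z: 0 }, 0.7));
```

A classifier like this could let an experience adapt its prompts, for example reserving far-reach gestures for moments when the user is standing and fully engaged.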

Movement Patterning
If the kinesphere becomes our new canvas, understanding how we can manipulate that canvas through movement becomes pertinent. In addition to a firm understanding of kinesiology and body mechanics, dancers learn about the body’s foundational movement patterns, known as Bartenieff Fundamentals or developmental movement patterning (DMP). DMP addresses how movement is organized in the body by outlining the neuromuscular building blocks of movement organization. These patterns are learned by everyone, from birth through development. Examples include homolateral movement (using the same side of the body) and cross-lateral movement (walking while swinging the arms, where the opposite foot and hand move together).

Developmental Movement Patterning

Developmental movement patterns allow effective, efficient movement stemming from a place of mobility and stability. Although fully active gestures may not be used in every day-to-day virtual or augmented interaction, developmental movement patterning provides an interesting perspective on implementing intuitive, gesture-based interactions. We can not only work with how the body has evolved to organize movement, but also work to improve movement functioning, and therefore well-being, for all our users.

Documenting Body & Space
How can designers begin to talk about the body in a granular way? How do designers document body-based interactions? Frameworks for documenting movement in space have existed in dance and choreography for decades. One such framework is Labanotation, a system that is effectively the ‘sheet music of dance’. Laban gave each body part a specific symbol, differentiating the right and left sides of the body, and created 11 different symbols to identify directionality, as well as three different levels. These symbols are combined and, much like sheet music, a sequence of movement is read from bottom to top.

Symbols of Labanotation
Labanotation score

The beauty of a system such as Labanotation is that it can be as simple or as complex as needed; documenting a progression of steps or full-body movement in space is all possible. As we begin to see more nuanced interactions with the body, creating a system for describing and documenting human movement will become necessary for designers to communicate with each other and with development teams.
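As a thought experiment, a designer might capture a body-based interaction in a Labanotation-inspired record like the sketch below. The TypeScript field names and value sets are illustrative assumptions, not a faithful encoding of Laban’s symbol set:

```typescript
type BodyPart = "head" | "torso" | "leftArm" | "rightArm" | "leftLeg" | "rightLeg";
type Direction =
  | "place" | "forward" | "backward" | "left" | "right"
  | "leftForward" | "rightForward" | "leftBackward" | "rightBackward";
type Level = "low" | "mid" | "high";

// One "cell" of the score: which body part moves, in which direction,
// at which level, and for how long.
interface MovementSymbol {
  bodyPart: BodyPart;
  direction: Direction;
  level: Level;
  beats: number; // duration, like the vertical length of a Labanotation symbol
}

// As on a Labanotation staff, a sequence is read in order (bottom to top on paper).
type MovementScore = MovementSymbol[];

// Example: "raise the right arm forward to shoulder height, then lower it."
const raiseAndLower: MovementScore = [
  { bodyPart: "rightArm", direction: "forward", level: "mid", beats: 2 },
  { bodyPart: "rightArm", direction: "place", level: "low", beats: 2 },
];
```

Even a rough schema like this gives design and development teams a shared, reviewable artifact for a gesture, much as a wireframe does for a screen.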

Shape and Effort

Shape and Effort are also two foundations within dance and choreography and main pillars within Laban’s research. Shape qualities describe how the body forms in relation to specific spatial dimensions of movement (vertical, horizontal, and sagittal). These include Rising/Sinking (vertical dimension), Advancing/Retreating (sagittal dimension), and Spreading/Enclosing (horizontal dimension).

Movements can be mapped along three planes

With the examples below, we can begin to describe and differentiate advancing interactions (such as reaching out to shake someone’s hand) from sinking actions (collapsing downward along the vertical dimension). Shape qualities can help designers understand and describe movement as it relates to a spatial trajectory.

Shape Qualities: Left- Leap Motion, Sinking. Right- Rec Room, Advancing
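As an illustration, a movement’s dominant shape quality could be approximated from its net displacement. This sketch assumes a y-up, z-forward coordinate frame with x = 0 at the body’s midline, and simply picks the dimension with the largest change:

```typescript
type Vec3 = { x: number; y: number; z: number };

type ShapeQuality =
  | "rising" | "sinking"        // vertical dimension
  | "advancing" | "retreating"  // sagittal dimension
  | "spreading" | "enclosing";  // horizontal dimension

function shapeQuality(start: Vec3, end: Vec3): ShapeQuality {
  const dx = end.x - start.x; // horizontal change
  const dy = end.y - start.y; // vertical change
  const dz = end.z - start.z; // sagittal (forward/backward) change

  const [ax, ay, az] = [Math.abs(dx), Math.abs(dy), Math.abs(dz)];
  if (ay >= ax && ay >= az) return dy > 0 ? "rising" : "sinking";
  if (az >= ax) return dz > 0 ? "advancing" : "retreating";
  // Horizontal: moving away from the body's midline reads as spreading.
  return Math.abs(end.x) > Math.abs(start.x) ? "spreading" : "enclosing";
}

// Example: a hand dropping from chest height toward the hip reads as sinking.
console.log(shapeQuality({ x: 0.2, y: 1.3, z: 0.3 }, { x: 0.25, y: 0.9, z: 0.25 }));
```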

Effort qualities differ from shape qualities by identifying the type of energy used in movement. Giving someone a high five and reaching for a cup on a shelf both rely on the same body mechanics (extension of the arm), but the way in which we perform the two actions differs. Laban created a framework around these subtleties of movement with his ‘Theory of Effort’. There are four categories, which together contain eight polarized effort elements:

  • Space (Indirect/Direct)
  • Weight (Light/Strong)
  • Time (Sudden/Sustained)
  • Flow (Free/Bound)
Laban’s Effort Graph

When effort elements are combined into sequences, they give an Effort Action. Effort Actions account for three of these variables: weight, space, and time. The eight Effort Actions are punch, slash, dab, flick, press, wring, glide, and float.

Laban’s Effort Actions
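Because each Effort Action is a particular combination of Space, Weight, and Time polarities, the mapping can be written out directly. The sketch below encodes Laban’s standard table (Flow is set aside here):

```typescript
type Space = "direct" | "indirect";
type Weight = "strong" | "light";
type Time = "sudden" | "sustained";

type EffortAction =
  | "punch" | "slash" | "dab" | "flick"
  | "press" | "wring" | "glide" | "float";

// Laban's eight Effort Actions, indexed by Space, then Weight, then Time.
const effortActions: Record<Space, Record<Weight, Record<Time, EffortAction>>> = {
  direct: {
    strong: { sudden: "punch", sustained: "press" },
    light:  { sudden: "dab",   sustained: "glide" },
  },
  indirect: {
    strong: { sudden: "slash", sustained: "wring" },
    light:  { sudden: "flick", sustained: "float" },
  },
};

// Example: a quick, forceful, targeted gesture reads as a punch.
console.log(effortActions.direct.strong.sudden); // "punch"
```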

Laban’s Theory of Effort provides a framework for the inner intention of movement: it covers not what the movement is, but how the movement is performed. The theory has also been leveraged within academic research.

There are tremendous opportunities for designers in understanding the how, or the emotional intent behind movement, to deliver highly personalized experiences. The interplay between the affective and cognitive states of a user has been notoriously hard to understand and measure, but as our computers become more emotionally intelligent, we see technologies such as face and body tracking becoming more and more ubiquitous. Recent reports predict the market for affective computing will grow to $53.98 billion by 2021.

Laban’s Theory of Effort can also be used by designers to expand gesture vocabulary. Current body-based 2D interactions primarily include touch activities such as tap, press, long press, flick, drag, and pinch. We can start to expand this list of possibilities to include not only movement actions with a definitive start and end, but also movement intentions.

Gesture lexicon can include activities and intentions
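One way to picture such an expanded lexicon is to pair a familiar touch-style activity with an effort-like intention describing how the gesture is performed. This is a hypothetical sketch, not an established API:

```typescript
type GestureActivity = "tap" | "press" | "longPress" | "flick" | "drag" | "pinch";
type GestureIntention =
  | "punch" | "slash" | "dab" | "flick"
  | "press" | "wring" | "glide" | "float";

// A gesture event can describe what was done, how it was done, or both.
interface GestureEvent {
  activity?: GestureActivity;   // a discrete action with a definite start and end
  intention?: GestureIntention; // the quality of movement, borrowed from Laban's Effort Actions
}

// Example: the same drag performed gently (glide) or with tension (wring).
const gentleDrag: GestureEvent = { activity: "drag", intention: "glide" };
const tenseDrag: GestureEvent = { activity: "drag", intention: "wring" };
```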

From Body and Space to Shape and Effort, the concepts discussed here from Laban’s work just scratch the surface.

Main Takeaways

  • Spatial computing is first and foremost a multi-sensory experience with embodied input modalities.
  • Understanding of the body can help designers break free from 2D methods and understand new possibilities within a 3D space, creating a richer relationship between humans and computers.
  • Designing for embodied interactions is a new challenge for designers; new input methods require new principles.
  • A new body-centric lexicon will help set designers up for success by providing a new method to communicate interactions.

We as designers have an incredible responsibility in this transition to spatial computing. By looking at main concepts and terminology within the field of dance, I hope we can begin to think outside the traditions of the interaction design practice.

Thank you for reading! If you’ve enjoyed this article, please give a clap or two below :)
