The principles we use to create effective VR and AR interactions have existed for as long as we have walked this earth.
When the Internet was new, UX designers were still a thing of the future.
Those were the days when the Internet was just becoming a thing, and we had no idea how its new interactions and challenges would shape our careers. Today, we can check our thinking against a dozen design systems that have already done the heavy lifting for us; we know what works and what doesn't. Just as we were basking in the certainty that we had conquered web usability, VR came along, and those of us who had gotten good at thinking in terms of screens saw a perfect summer storm forming. How could I train myself in something so different from what I had been doing for the last dozen years?
Luckily, these fears were unfounded. VR does present some exciting questions, but when it comes to principles of interaction, we already have most of the answers. Some of them come from print, but most are actually based on the way human beings perceive and interact with the world.
Introducing: Virtual Reality Patterns
What you will find here is a compilation of some of the basic perception principles that guide both our physical world and VR + AR experience design. I’ll be mostly focusing on the ones that are specific to VR as opposed to visual design in general — so you won’t see much detail going into Gestalt laws of proximity, similarity, continuation, common fate, etc.
If you look at a flat surface (these words on the very screen you are facing, for example,) the elements placed right in front of your eyes will be more accessible than the ones on the edges or sides.
A large, flat surface with elements distributed across it does not offer them all equally. A screen is a small area, but imagine a wall with a person standing 1 meter away from it. How visible will the paintings 2 meters to each side be?
Designing for a flat surface is very different from designing for a spherical world that happens to have the user at its centre. You can circumvent the problem of elements sitting out of comfortable reach by using a curved surface instead of a flat one.
When the information is ‘wrapped’ around the user, they can easily navigate it without even having to take a single step.
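To make the idea of 'wrapping' concrete, here is a minimal sketch (my own illustration, not from any particular engine) of how you might compute positions for UI elements on a horizontal arc centred on the user. The function name and the coordinate convention (x right, z forward, metres) are assumptions for the example.

```python
import math

def place_on_arc(n_elements, radius=2.0, arc_degrees=90.0, height=0.0):
    """Spread n_elements evenly across a horizontal arc around the user
    (who stands at the origin), so every element sits at the same
    comfortable distance instead of drifting away at the edges."""
    if n_elements == 1:
        angles = [0.0]
    else:
        step = arc_degrees / (n_elements - 1)
        angles = [-arc_degrees / 2 + i * step for i in range(n_elements)]
    positions = []
    for a in angles:
        rad = math.radians(a)
        # z points "forward" from the user, x to their right
        positions.append((radius * math.sin(rad), height, radius * math.cos(rad)))
    return positions
```

With a flat panel, the outermost elements end up farther from the eye than the centre ones; on the arc, every element is exactly `radius` metres away.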
The art of clutter and skeuomorphism in the Thor synthesizer
Skeuomorphism is the idea of designing virtual things that mimic their real-world counterparts, enabling the user to leverage their experience of the real world. My favorite example is the save icon, which was skeuomorphic until we abandoned diskettes (the trashcan icon, on the other hand, prevails to this day.)
Skeuomorphism has fallen out of fashion in web and app design due to growing flat design trends, but its usefulness should not be discarded: It helped an entire generation learn how to navigate the digital era.
There is no need for a VR app to do things hyper-realistically. Skeuomorphism can be very useful, but it can also be taken too far. Instead, it’s more useful to think in terms of cues, rather than photo-realism:
A given design does not need to include every possible cue, but the cues that are used must be concordant with one another — they need to work in harmony. Beau Cronin, Speculations on neuro-motivated design for VR
In short: Use skeuomorphism when appropriate, but keep your options open.
This time, try slowly bringing your index finger towards your nose. Keep your focus on it so it stays sharp. Now move it closer and closer. As soon as you get to about 10 cm (this varies from person to person, but mine is around there), you will no longer be able to focus on it.
The same basic principle applies to VR. If you place objects too close to or too far from the user, they won't be sharp enough to interact with.
Depth perception is a combination of multiple factors, but it helps when we have things to compare against each other. The key is to find a balance between close enough to be seen and far enough to focus on, with enough cues to give a proper sense of location.
The most comfortable range of depths for a user to look at in the Rift is between 0.75 and 3.5 meters (1 unit in Unity = 1 meter). Oculus Developers
Motion flow sounds stricter than it actually is. It simply means not creating movement that wouldn't happen in real life. If you do, you might be throwing the user into a motion sickness and nausea party (and possibly ruining their future enjoyment of VR, as many people never give it a second try after one sickening experience.)
The camera in VR is always mapped to the player’s head. It should always stay there.
There are many ways of grabbing attention that do not require camera changes, and they are all supported by our evolutionary history. One especially: We are extremely good at noticing movement (I dare you to stare at grass and not freak out at all the movement you will notice if you truly focus on it.) This magnificent perception ability was born as a survival mechanism, and is one of our strongest allies when it comes to designing interaction. Ask micro-animations.
So, instead of ripping the camera away from a person's point of view and scarring them for life, guide them using other cues, like movement and light. As a species we are also quite fascinated by the latter; combine the two and you will be able to guide the interaction extra-effectively.
Our eyes are marvelous organs that have evolved to actually focus only on a tiny area at a time. If you are willing, do the following: Extend your arm and look at your thumbnail. That is approximately the area that is focused or sharp in your field of vision (no kidding.)
These roughly two degrees your eye can 'truly see' are, luckily, complemented by the ability to shift your attention to other areas of the field of vision, even though those areas remain blurred.
Because of this, any meaningful controls in your VR interface should be placed not in the periphery but in the area right in front of the user. A 1:1 aspect ratio is advisable: we are used to landscape formats on the web, but they are not natural in VR and would actually force a person to tilt their head.
Other considerations for designing VR interfaces include:
Appropriate scaling of elements: Scale your interactive elements so they match a real-world experience. For example, make buttons large enough for the user to interact with them without risking touching other interface elements.
Limit unintended interactions: Space out UI elements so the user doesn't trigger multiple elements accidentally.
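A practical way to reason about "large enough" is visual angle: how big an element must physically be to subtend a given angle at the user's eye. The formula is standard optics; the function name and the 5-degree target in the example are my own illustrative assumptions, not a published guideline.

```python
import math

def size_for_visual_angle(distance_m, angle_degrees):
    """Physical size (metres) an element needs in order to subtend a
    given visual angle at a given distance: s = 2 * d * tan(theta / 2)."""
    return 2 * distance_m * math.tan(math.radians(angle_degrees) / 2)

# A button meant to subtend ~5 degrees at 2 m needs to be ~17.5 cm wide.
button_width = size_for_visual_angle(2.0, 5.0)
```

The same formula tells you how much padding to leave between elements: keep the angular gap, not just the metric gap, above your accidental-touch threshold.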
Spaceship cockpit in EVE Valkyrie
A visual anchor point is an essential part of any successful (and nausea-free) VR experience. You need something that remains stationary when everything else moves; imagine, by contrast, a flight simulator without a cockpit, or a shooting game without an actual weapon.
Never, ever, leave an anchor point out. Studies have shown that even a virtual nose can actually help reduce sickness. We are constantly evaluating our experiences by making comparisons. In size, in color. In space, in time.
Oculus Rift VR Headset + Headphones. Credit: Alan, CC
Sound is as important as visual input for a truly realistic experience. Without the right audio cues, the brain doesn’t fully buy into the illusion.
“There’s a little map in your brain even when you’re not seeing the objects (…) If the sound is consistent with geometry, you’ll know automatically where things are even if they’re not in your view field.” (Ramani Duraiswami, professor of computer science at the University of Maryland and co-founder of Visisonics)
There are creators experimenting with ambisonics, or spherical microphones that capture a sound field in all directions, but other techniques are also highly effective. For example, rendering audio attached to objects as they, or the users, move through a setting (something almost everyone is familiar with thanks to Dolby Laboratories' immersive cinema audio.)
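The simplest building block of object-attached audio is distance attenuation. Here is a rough sketch of the classic inverse-distance gain model that most 3D audio engines offer (my own illustration; the function name and parameters are assumptions, not any engine's actual API):

```python
import math

def inverse_distance_gain(source_pos, listener_pos, ref_distance=1.0, rolloff=1.0):
    """Gain for a sound source, attenuated with distance using the
    common inverse-distance model:
    gain = ref / (ref + rolloff * (d - ref))."""
    d = math.dist(source_pos, listener_pos)
    d = max(d, ref_distance)  # no amplification inside the reference distance
    return ref_distance / (ref_distance + rolloff * (d - ref_distance))
```

Recompute the gain (and the left/right balance) every frame as the object or the listener moves, and the brain's "little map" Duraiswami describes starts to line up with the geometry.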
Adding sound to an experience will significantly refine and enhance its interaction. If you want to learn more about VR and audio, I wholeheartedly recommend this excellent Engadget article by Mona Lalwani.