So what is motion and how should we characterize it? For the purposes of this blog, we refer to motion in the context of applications on our personal entertainment devices: the applications you use on your TV, game console, smartphone, and PC. Of course, motion also extends into health, fitness, and industrial products, and we may spend a little time on these too.
Before we define motion, let’s dispel one myth: that motion means gestures. Today, gestures are all the rage; you can’t read an article about the future of UIs without seeing the word gesture. And many consumer electronics marketers now use the word gesture to broadly describe all types of motion. As if humans were just one big gesture-making machine.
Nonsense! Sure, waving hello with your hand is a gesture. But swinging a baseball bat is not a gesture nor is making a menu selection to rent a movie. (By the way, we place sole blame on Tom Cruise for this problem.)
So how should we categorize motion? From our perspective there are really four categories of motion: (1) natural motion, (2) pointing, (3) gestures, and (4) virtual controls.
The broadest of these categories is what we call natural motion. When we walk into a room or jump in the air we are invoking natural motions; these motions are inherent in the human structure and have been learned in conjunction with everyday activities. Swinging a baseball bat or golf club is also natural motion. (Although don’t look at my golf swing; it’s hardly natural. But that’s my problem.)
The other characteristic that differentiates natural motion from the others is that the application and motion-tracking system must be sensitive enough to distinguish different performance levels. So if we are designing a golf game, there should be some performance difference in how Tiger Woods or Rory McIlroy swings a golf club compared to how I swing one, so that the game is meaningful. With natural motion, we need to detect and highlight the differences, while with the other motions, we need to tolerate and smooth the differences.
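To make this concrete, here is a minimal sketch of how a game might distinguish performance levels from raw sensor data. It assumes a hypothetical list of gyroscope samples from a wrist-worn or handheld controller; the function name and the sample values are illustrative, not from any real motion API.

```python
import math

def swing_peak_speed(gyro_samples):
    """Return the peak angular speed across a list of (x, y, z)
    gyroscope samples from a swing.

    A natural-motion game can map this number to an outcome
    (say, driving distance), so a genuinely faster swing
    genuinely scores better instead of being smoothed away.
    """
    return max(math.sqrt(x * x + y * y + z * z) for (x, y, z) in gyro_samples)

# Two hypothetical swings: a casual one and a hard one.
casual = [(0.5, 1.0, 0.2), (0.8, 2.0, 0.3), (0.6, 1.5, 0.2)]
hard   = [(1.0, 4.0, 0.5), (2.0, 9.0, 1.0), (1.5, 6.0, 0.8)]
```

The point of the sketch is the design choice, not the formula: for natural motion the pipeline preserves the difference between the two swings rather than normalizing it out.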
A second type of motion is pointing. Now, pointing is also a natural motion, but it has very specific characteristics, which is why we treat it separately. People learn to point before they learn to speak, and infants and toddlers use pointing as a fundamental means of communicating very early in life. If you don’t believe us, ask a one-year-old child “Where’s Mommy?” or “Where’s Daddy?” and see what they do. (Just make sure you know the child, or Mommy and Daddy might get upset.)
It’s also not a coincidence that we call the cursor on your PC a “pointer”. Most computer interfaces today are pointing-based, whether via a mouse or through touch control. One of the most important characteristics of pointing and cursor control is that it requires a high level of performance or it leads to frustration. Pointing must have very low latency and a high degree of accuracy to be effective. Imagine how frustrated you would feel if typing text on your smartphone’s touchscreen weren’t accurate. Ok, that was a bad example. No one likes typing text on their smartphone for exactly this reason!
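The latency requirement shows up directly in how cursor data is filtered. A sketch of the tradeoff, using a simple one-pole low-pass filter (a common smoothing technique, not a claim about any particular product’s implementation):

```python
def smooth_cursor(raw_positions, alpha):
    """One-pole low-pass filter over raw 1-D pointer samples.

    alpha near 1.0 -> responsive but passes jitter through;
    alpha near 0.0 -> smooth but the cursor visibly lags the hand.
    Pointing UIs have to sit near the responsive end of this
    tradeoff or they feel broken.
    """
    smoothed = [raw_positions[0]]
    for p in raw_positions[1:]:
        prev = smoothed[-1]
        smoothed.append(prev + alpha * (p - prev))
    return smoothed

# The hand jumps from 0 to 10 and stays there.
raw = [0.0, 10.0, 10.0, 10.0]
responsive = smooth_cursor(raw, 0.9)  # closes in on the target almost immediately
laggy      = smooth_cursor(raw, 0.2)  # trails far behind -> frustration
```

Every step of extra smoothing buys jitter rejection at the cost of latency, which is why pointing is the least forgiving of the four categories.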
The third category of motion is gestures. A gesture is a shortcut; in essence, it is a verb used to drive a specific action in a user interface. A good example is the “pinch” to zoom operation on your phone or tablet. Gestures are like sign language, and research on short-term memory suggests that the number of gestures or shortcuts should be limited to something close to what our short-term memory can retain. The iPhone, for example, really only requires 7 gestures: four directional swipe operations to scroll content, plus pinch in, pinch out, and double-tap to control the zoom level. That’s it. Enough said!
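A small, fixed gesture vocabulary is also easy to implement. Here is a sketch of a directional-swipe classifier over touch coordinates; the thresholds and names are illustrative assumptions, not any platform’s actual API.

```python
def classify_swipe(start, end, min_dist=30):
    """Classify a touch stroke as one of four directional swipes,
    or None if the finger barely moved.

    start, end: (x, y) points in screen coordinates (y grows downward).
    A tiny fixed vocabulary like this is exactly the point: few
    enough shortcuts that users can keep them all in memory.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    if max(abs(dx), abs(dy)) < min_dist:
        return None  # too short to count as a gesture
    if abs(dx) >= abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"
```

Each recognized gesture maps to exactly one action, which is what makes gestures shortcuts rather than continuous control.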
The final category of motion is virtual controls. Virtual controls are a hybrid type of motion, in one sense natural and in another gesture-like, but again with unique characteristics. Traditionally, hardware electronics products have been controlled with knobs, dials, and sliders. We can enable more functional skeuomorphic interfaces by tracking motions corresponding to virtual controls. The easiest way to describe this is to imagine how a scroll wheel can be used to adjust a volume slider up and down. Now, instead of using a scroll wheel, use an air mouse and move your wrist up and down to move the slider. For this to be effective, the control needs to smoothly track motion and enable one-to-one tracking between your hand and the UI element on the screen.
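The one-to-one requirement can be sketched as a direct mapping from wrist angle to slider position. The pitch range here is a made-up assumption for illustration:

```python
def wrist_to_slider(pitch_deg, lo=-30.0, hi=30.0):
    """Map wrist pitch (degrees) one-to-one onto a 0..1 slider value.

    Tilting the wrist from lo to hi sweeps the on-screen slider
    from bottom to top, clamped at the ends. The hand and the UI
    element move in lockstep, which is what makes a virtual
    control feel like a physical knob or slider.
    """
    t = (pitch_deg - lo) / (hi - lo)
    return min(1.0, max(0.0, t))
```

Unlike a gesture, there is no discrete "recognized / not recognized" moment; the control tracks continuously, just as a physical slider would.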
So to summarize, we distinguish four types of motions:
- Natural motion: Must detect individual performance levels to be effective
- Pointing: Requires very low latency and high accuracy or it leads to frustration
- Gestures: Simple small number of control mechanisms or shortcuts
- Virtual controls: Enables skeuomorphic interfaces and must track smoothly
And the next time you hear someone confuse the terms motion and gesture, think to yourself, “Not so fast, my friend.” Motion is more than just gestures. And motion-sensing software needs to be sophisticated enough to distinguish the differences. Or I’ll be joining the PGA Tour.