Integrating Gestures Into Mobile Applications
In the oft-referenced Minority Report, Tom Cruise flips through crime files composed of complex data sets and videos in a fully gesture-driven, immersive computing experience. Marvel's Iron Man movies echo this, with Tony Stark flipping through 3-D models using mere gestures and voice control.
A more real-world example of the power of gesture-based interfaces can be found first in the explosive adoption of the Wii, and today in the burgeoning popularity of the Kinect interface for the Xbox. So why the focus now on gestures as a way of interacting with everything from smartphones and tablets to video games and all manner of industrial computing?
As computing environments become more complex, interactions are becoming harder to translate into keystrokes or button-push combinations; gestures can bridge this gap. Just think of the countless Internet videos of babies interacting with iPads and iPhones before they can even talk, showing how simple and universally understood gestures are as an interaction medium. However, while gestures are very easy to consume as part of an application interface, they are extremely difficult to integrate into applications. As device screens and the size of the interaction space evolve, gestures will need to transform with them. Apple demonstrated this successfully by adding three-finger and four-finger swipe gestures to its operating system.
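To make the integration challenge concrete, here is a minimal sketch of the logic underneath even the simplest gesture, a one-finger swipe: turning raw start and end touch coordinates into a direction. The function name, parameters, and threshold are illustrative assumptions, not any platform's actual API.

```javascript
// Sketch: classify a single-finger swipe from raw touch coordinates.
// classifySwipe, minDistance, and the point shape {x, y} are all
// illustrative choices, not a real framework API.
function classifySwipe(start, end, minDistance = 30) {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  // Ignore movements too small to be an intentional swipe.
  if (Math.max(Math.abs(dx), Math.abs(dy)) < minDistance) return null;
  // The dominant axis decides the direction.
  if (Math.abs(dx) >= Math.abs(dy)) {
    return dx > 0 ? 'right' : 'left';
  }
  return dy > 0 ? 'down' : 'up';
}
```

Even this toy version needs judgment calls (how far is a swipe? which axis wins on a diagonal?), and those answers differ by screen size and platform, which is exactly why gestures are hard to integrate well.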
This all points to gesture requirements that differ by:
- Screen size
- Operating system
For example, a 7-inch tablet is primarily held in one hand. The non-dominant hand holds the tablet and is almost never involved in gestures, while the dominant hand often sweeps across the body; a right-handed user will rarely swipe right to left. With a larger tablet (such as 10 or 12 inches), however, both hands are often involved in the interaction as the device rests on a table, tray, or lap. Finally, operating systems have their very own gestures that signify particular actions, such as pinch-to-zoom, double-tap, or expand. Overall, we expect gestures to be used increasingly in all manner of mobile applications and computing interfaces, and trying to enforce a one-size-fits-all gesture interface will only frustrate users.
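The arithmetic behind a system gesture like pinch-to-zoom is a useful illustration: the zoom factor is just the ratio of the current distance between two fingers to their starting distance. The sketch below assumes hypothetical names and a simple `{x, y}` point shape; it is not a platform API.

```javascript
// Sketch of pinch-to-zoom arithmetic. distance and pinchScale are
// illustrative names, not a real gesture-recognizer API.
function distance(a, b) {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

// Scale factor = (current finger spread) / (starting finger spread).
function pinchScale(startTouches, currentTouches) {
  const d0 = distance(startTouches[0], startTouches[1]);
  const d1 = distance(currentTouches[0], currentTouches[1]);
  // Guard against degenerate input where both fingers start at one point.
  return d0 === 0 ? 1 : d1 / d0;
}
```

Spreading two fingers from 100 points apart to 200 yields a scale of 2; closing them halfway yields 0.5. The math is trivial, but each operating system wraps it in its own conventions and thresholds, which is why relying on the platform's built-in gestures usually beats reimplementing them.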
If you are in the early days of mobile application exploration, or just beginning to build mobile apps for your customers and employees, look toward gestures to provide more immersive interfaces. Start slowly and understand how the differences manifest across devices, device classes, and operating systems. Find applications that use gestures well and dig deep into those interfaces. Finally, as gestures move beyond touchscreens, carry what you learn from touchscreens into those additional computing paradigms.
Michael King directs Appcelerator’s product strategy, in addition to providing strategic client guidance, sales support, partner enablement, as well as market research/analysis and product evangelism. An IT industry veteran, Mike spent the past 11 years at Gartner, most recently as research director, where he managed all U.S.-based wireless data research, specializing in mobile enterprise strategy. Notably, he authored Gartner's Magic Quadrants for MCAP (Mobile Consumer Application Platforms) and MEAP (Mobile Enterprise Application Platforms) for the past five years. Prior to Gartner, Mike held marketing and research positions at Neopoint and META Group. He also served as a sales engineer at A-Com.