Apple wants to teach us a Multi-Touch gesture language
For all the greatness of the iPhone's Multi-Touch user interface, it's actually pretty limited.
After all, how many gestures can you use on your iPhone or Multi-Touch enabled MacBook trackpad? Tap, resize/zoom, scroll/browse, drag & drop, pinch, a few more?
Well, in a patent application titled "Multi-touch gesture dictionary," Apple has already indicated that many more gestures could be made available in the future.
The new patent application from Cupertino, called “Gesture learning” gives a whole new meaning to gesture expansion. It describes how Apple may go about teaching you a whole new multi-touch gesture language, consisting of hundreds of words. Something like American Sign Language for touchscreens.
To describe the possibilities of multi-touch gesture language, Apple splits a single gesture into two phases.
The first phase is the specific combination of fingers (or other hand parts) that you place on the touch surface. It's called a chord.
The second phase is the movement of those fingers – rotation, translation, scaling, and so on. Taken together, the two phases make a full multi-touch gesture.
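This chord-plus-motion structure can be made concrete with a small sketch in Python. To be clear, the type names and motion list below are my own illustration, not anything from Apple's patent:

```python
from enum import Enum
from typing import FrozenSet, NamedTuple

class Finger(Enum):
    THUMB = 0
    INDEX = 1
    MIDDLE = 2
    RING = 3
    PINKY = 4

class Motion(Enum):
    ROTATE_CW = "rotate clockwise"
    ROTATE_CCW = "rotate counter-clockwise"
    TRANSLATE_UP = "translate up"
    PINCH = "contract"
    SPREAD = "expand"
    TAP = "tap"

class Gesture(NamedTuple):
    chord: FrozenSet[Finger]   # phase 1: which fingers touch the surface
    motion: Motion             # phase 2: how they move

# "Two-finger clockwise rotation": index+middle chord, rotated clockwise
gesture = Gesture(frozenset({Finger.INDEX, Finger.MIDDLE}), Motion.ROTATE_CW)
```

The same chord paired with a different motion would be a completely different gesture, which is where the large vocabulary comes from.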
Now, according to Apple:
Each of a user’s hands acting alone can execute twenty-five or more chords. For example, five fingers that can be independently raised or lowered give rise to thirty-one combinations. Additional chords may be distinguished by whether only the fingertips are in contact with the surface or whether the length of the finger is flattened against the surface. Further chords may be distinguished based on whether the fingertips are placed on the surface close together or spread apart. As noted above, modifier keys (e.g., the Ctrl, Alt, Shift, and Cmd keys of a keyboard) may be used to distinguish different chords. Modifier keys may also include buttons, touch-sensitive or force-sensitive areas, or other toggles located on the device.
Many chords can have at least thirteen different motions associated with them. For example, a two-finger chord (for example, the index and middle fingers) could have specific meaning or action assigned to the lateral motions that include rotation, translation, and scaling. Rotation (clockwise and counter-clockwise) of the two-finger chord gives rise to two unique meanings or actions. Translation (left, right, up, down, and four diagonals) gives rise to at least eight unique meanings or actions. Scaling (contraction or expansion) also gives rise to two meanings or actions. The vertical motion of a chord may comprise lifting the fingers of the chord off the multi-touch surface almost immediately after they had touched down, (e.g., tapping the multi-touch surface with the chord) or multiple taps, etc.
Well, you can do the math yourself. Twenty-five or more chords with thirteen or more possible movements each gives at least 325 gesture combinations for a single hand. That may already be enough to create a new multi-touch gesture language. Using both hands and more complex gesture combinations, the vocabulary of such a language could grow into thousands of words.
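The arithmetic is easy to check with a few lines of Python. The figures come from the patent text quoted above; the exact breakdown of the thirteen motions is my assumption:

```python
# Back-of-the-envelope count of single-hand gestures.

FINGERS = 5
chords = 2**FINGERS - 1   # 31 non-empty finger combinations per hand

rotations = 2             # clockwise, counter-clockwise
translations = 8          # left, right, up, down, four diagonals
scalings = 2              # contraction, expansion
taps = 1                  # single tap (multiple taps would add more)
motions = rotations + translations + scalings + taps  # 13

print(chords * motions)   # 31 * 13 = 403 gestures per hand
print(25 * motions)       # 325, using the patent's conservative 25 chords
```

And that is before counting finger spacing, flattened fingers, modifier keys, or two-handed chords, each of which multiplies the total further.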
The problem is – how do we go about learning all this?
Apple proposes a separate interactive multi-touch gesture learning application.
Nothing too complex: a screen area where the user can experiment with different gestures, a small animated window demonstrating how to perform each gesture, and an interactive feedback mechanism showing how well the user is performing it.
And to make it more fun, Apple can make a game out of the whole learning process.
The game could be something like the Space Invaders or Missile Command video games, where various gesture representations descend the screen and you have to destroy them by correctly executing the gestures. Or it could be Tetris, where particular block shapes correspond to particular gestures.
Or it could even be a more complex game like Final Fantasy or Civilization, where:
… each character, vehicle or group of characters is assigned a particular chord, and various gesture motions performed with that particular chord direct the movements, spells, and/or attacks of a character, vehicle, or group of characters. Failure to perform the correct chord results in punishment in the form of unwanted movements, spells or actions by unintended characters, vehicles, or groups of characters. Since the instantly performed chord selects the character, vehicle, or group of characters, with practice the player will be able to switch between characters, vehicles, or groups of characters much more quickly than the traditional method of moving the mouse cursor over or directly touching the desired character, vehicle, or group of characters.
Well, I know that the whole thing sounds extremely complex. And it is.
But if the way we learned IM/SMS texting shorthand, and the things teens can do on their T9 phone keypads, are any indication, there might be something here.
It would certainly be interesting to see how much of this multi-touch gesture language eventually gets implemented in various gadgets, and how users adopt it in the coming years.