The next generation of computing is coming in many new forms: from easier-to-use interfaces on our classic point-and-click desktops and laptops, to all sorts of new multitouch experiences – HP’s upcoming multitouch screen, Apple’s pending multitouch interfaces, Microsoft’s Surface platform, Perceptive Pixel’s multitouch screen, and others. This next generation of technology is right around the corner, with the adoption of the iPhone marking the first widespread use of a multitouch device.
This post is an introduction to my study of multitouch. What I’m most concerned with here is what sort of cool and productive experiences can be created. Can we completely scrap the old ways of point and click? Can we use multitouch interfaces effectively, in new and innovative ways, to increase productivity?
To break things down a little bit, I’m going to focus my study on three different areas that I believe are crucial to understanding the development of a new experience. First, there’s the actual interaction – the movements we make to invoke change in the system. Second, there’s data organization – since computers are tightly integrated with the storage of data, what sort of experiences can we create around storing and accessing it? Third, tying everything together, is the application of this new functionality to different hardware and software platforms – I believe this third piece reins in crazy interactions by asking the questions, “is it possible?”, “is it practical?” and “is it portable?”.
For the most part, the multitouch experiences that people see today try to mimic the physical world, continuing the analogy that items on the computer are similar to items in the physical world. For example, there are flick gestures on lists that make a list scroll with velocity, inertia, and a drag coefficient. There’s also image resizing as if we were stretching a physical photo (grabbing two points of an image and dragging them apart). And then there is the typical tap, corresponding to the pressing of a button or the selection of an item. These are all intuitive ideas carried over from the physical world to the computer.
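To make the physics analogy concrete, here’s a minimal sketch of how a flick scroll and a two-finger stretch might be modeled. This is my own illustration, not any platform’s actual API: the function names, the fixed time step, and the exponential drag model are all assumptions.

```python
import math

def flick_scroll(initial_velocity, drag=0.95, dt=1.0, min_speed=0.5):
    """Return the scroll offsets produced by one flick.

    Hypothetical model: each frame the velocity decays by a drag
    coefficient until the list has effectively stopped moving.
    """
    offsets = []
    position = 0.0
    velocity = initial_velocity
    while abs(velocity) > min_speed:
        position += velocity * dt
        velocity *= drag          # drag bleeds off speed each frame
        offsets.append(position)
    return offsets

def pinch_scale(p1, p2, q1, q2):
    """Scale factor for a two-finger stretch: the ratio of the current
    finger distance (q1-q2) to the starting distance (p1-p2)."""
    return math.dist(q1, q2) / math.dist(p1, p2)

# A hard flick coasts much farther than a gentle one,
# and moving two fingers twice as far apart doubles the image.
far = flick_scroll(40.0)
near = flick_scroll(10.0)
zoom = pinch_scale((0, 0), (100, 0), (0, 0), (200, 0))
```

The design choice that makes this feel “physical” is that nothing stops instantly: the list keeps moving after the finger lifts, and the image scales continuously with the fingers.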
Lots of companies and people focus on how we can make computer interactions more “realistic” in mimicking the physical world. This is a fine topic to focus on, but for the sake of research and development, I would like to pose another question: should we be trying to mimic the physical world at all?
I’m not sure that mimicking the physical world is the fastest way to accomplish a task. If the physical-world equivalent were always the fastest way, we wouldn’t have computers at all. While there’s a lot of gray area in what’s faster to do on the computer versus in the physical world, that point raises a valid argument. I would like to focus on exploring new experiences and interactions – ones that don’t mimic the real world but introduce a whole new way to interact with non-physical objects. Inventing such a way to interact would open an infinitely large box of possibilities.
The way our data is organized can make or break our productivity. On a multitouch system, the range of possible experiences provides multiple environments for different types of data organization. Most of today’s innovative data organization platforms center on making data organization more organic. The gist of the typical organic approach is that you can place your data anywhere in a 3D space and navigate through it with zoom-in and zoom-out style multitouch interactions.
With organic types of data organization, I can see data placement being very personalized – analogous to one person having a messy room (data all over the place, but, magically, you remember where you put it) and another person having a very clean room (stacks and rows of data – maybe even placed in some type of virtual storage container, comparable to a shelf).
The application of these two fundamentals spans two main areas of interest: hardware and software.
Hardware could range from small tablets and handheld devices like the iPhone to large-scale devices built around a projector-style display. Developing an interaction that suits all types of multitouch devices is crucial: just as the mouse and keyboard are fundamental input devices, a new type of multitouch input should be equally fundamental.
Software would range from the algorithms needed to control and track multitouch interactions to the functionality that a specific interaction provides.
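As one example of that tracking side, here’s a sketch of the kind of algorithm such software needs: matching touch points between frames so each finger keeps a stable ID. This is a simple greedy nearest-neighbor matcher of my own devising; the names and the `max_jump` threshold are hypothetical, and real trackers are more sophisticated.

```python
import math

def match_touches(previous, current, max_jump=50.0):
    """Assign each current touch point the ID of the nearest point
    from the previous frame, or a fresh ID for a new finger.

    previous: dict mapping touch ID -> (x, y) from the last frame
    current:  list of (x, y) points detected in this frame
    Returns a dict mapping touch ID -> (x, y) for this frame.
    """
    assignments = {}
    unused = dict(previous)               # previous points not yet claimed
    next_id = max(previous, default=-1) + 1
    for point in current:
        if unused:
            # Closest surviving point from the last frame.
            best = min(unused, key=lambda i: math.dist(unused[i], point))
            if math.dist(unused[best], point) <= max_jump:
                assignments[best] = point
                del unused[best]
                continue
        # Too far from every known finger: treat as a new touch-down.
        assignments[next_id] = point
        next_id += 1
    return assignments
```

For example, if frame one has fingers 0 at (0, 0) and 1 at (100, 100), and frame two reports (98, 103) and (2, 1), the matcher keeps ID 1 on the first point and ID 0 on the second, while a point far from both would get a new ID.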
I think that the online community is just starting to scratch the surface of multitouch interaction. There are many research firms studying multitouch behind closed doors, but rarely do I find people talking about it openly. I’d like to get that ball rolling in the perfect environment for collaboration: the internet. I’ll post new ideas as I think of them, and I’ll try to explain them within the three areas I introduced in this post.