KDE Plasma Does Gestures Globally

This is going to come as a surprise to a number of people out there, but not only does the KDE Plasma desktop environment have gestures built in, it has had them since roughly the 3.2 release. Gestures in KDE Plasma aren't just tied to the browser (I covered Firefox mouse gestures here), but extend to pretty much anything in the desktop environment. With a few flicks of the mouse, you can make magic happen across your entire desktop experience. It all sounds new and exciting, but the functionality has been there for years and few people seem to know about this excellent feature. Let me tell you how it works.

Note: In this article, I am running KDE Plasma 4.9.1 on Kubuntu Precise (12.04).

To see the existing mouse gestures, or to create your own, fire up the KDE System Settings program. To do so, click the Application Launcher (the big K in the lower left) and select it from there; it's usually in the Favorites menu, or you can find it under the Computer section (or you can just type "system settings" in the search field of the launcher). When the System Settings window appears, click "Shortcuts and Gestures", which you'll find under the "Common Appearance and Behavior" section (see Figure 1).

Figure 1: Mouse gesture configuration is found in KDE's System Settings.

First, make sure gestures are enabled by checking the box in the top-right section, then click Apply (see close-up in Figure 2).

Figure 2: Make sure you have enabled gestures as well. On a two-button wheel mouse, button 2 is the clickable wheel.

The Shortcuts and Gestures window has a sidebar on the left that offers three sets of shortcuts: custom shortcuts, standard application-specific keyboard shortcuts, and global keyboard shortcuts. It doesn't specifically say "gestures" here because keyboard shortcuts are one type of shortcut while mouse gestures are another. The selection defaults to custom, which is where we want to be, so look at the middle section where you'll see "Input Actions settings" for a handful of applications, each label representing a group of applications with one or more shortcuts (or gestures) defined below. To see the various predefined gestures, click the small arrow to the left of the label (see Figure 3).

Figure 3: Every pre-defined gesture can be viewed, or changed.

Click on any action (e.g., Home under Konqueror Gestures) and a three-tabbed pane will appear on the right side of the window. The tabs are labeled Comment, Trigger, and Action. The comment is exactly what it sounds like: a description of the shortcut with as little or as much information as you want. The trigger, in this case, is a mouse gesture. Using Home as our example, the gesture trigger is a stylized "h" that starts at the light green end of the line and ends at the dark blue. Click the Action tab and you'll see that it actually translates into the Konqueror "Ctrl+Home" keyboard shortcut, which loads the home page.
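Under the hood, that Trigger/Action pairing is essentially a lookup table: a recognized stroke maps to a keystroke that gets sent to the application. Here's a minimal Python sketch of the idea (the function names and the "left-flick" entry are hypothetical illustrations; only the stylized-"h" → Ctrl+Home pair comes from the example above, and KDE's actual implementation is nothing like this):

```python
# Illustrative model only: each entry pairs a gesture trigger (a recognized
# stroke pattern) with the keyboard shortcut it should emit.
gesture_actions = {
    "stylized-h": "Ctrl+Home",  # Konqueror: load the home page
    "left-flick": "Alt+Left",   # hypothetical: go back in history
}

def action_for(trigger):
    """Look up the shortcut a recognized gesture should send, or None."""
    return gesture_actions.get(trigger)

print(action_for("stylized-h"))  # -> Ctrl+Home
```

The point is simply that gestures and keyboard shortcuts are interchangeable triggers for the same actions, which is why KDE files them both under "Shortcuts and Gestures".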


Firefox Gestures

Firefox is the default Web browser in many Linux distributions and one of the most popular browsers around. Firefox is an excellent browser on many counts, but one of its coolest features is its ability to add features and capabilities through a system of extensions. A good thing too, at least from the perspective of this discussion. You see, gestures aren't built into Firefox, so we need to get them elsewhere, and we do that by installing an extension. Extensions are program enhancements that can dramatically change how you work with your browser. This framework of extensions makes Firefox not just a great browser, but a superior browser; extension support is another idea that has legs. But I digress . . .

To experience Firefox gestures, we're going to find a suitable add-on (or extension) for gestures -- yes, there's more than just one. Click Tools on the menu bar and select Add-ons. A new tab will appear with a list of categories running down the left side from which you can manage your extensions, change the browser's appearance, and more. To see a list of the extensions already on your system, click Extensions. On a fresh install, there are usually only a handful of things here. To find an extension that does gestures, click on "Get Add-ons", then type the word "gestures" in the search bar. You'll see FireGestures listed there; to find out more about the extension, click the More link to the right of the Install button. You'll get a detailed description of the extension you're looking to install (see Figure 1).

Figure 1: The Firefox Add-ons window also lets you search for extensions, like FireGestures.

Click "Install now" to finish the installation. And no, I won't repeat the joke about the user who is asked whether he read the End User License Agreement (his answer: "Sure, it said click OK to continue"). That's it. You must now restart Firefox to activate FireGestures; there's a link telling you to restart, so go to it.

Using this extension, you can use your mouse to move forward or back in your history, open or close a tab, and do pretty much anything you would with keystrokes or by manually navigating the menus. There are many gestures pre-configured, allowing you to work with FireGestures as is. Gestures are typically entered by clicking and holding the right mouse button and tracing out a path, which is highlighted in green. The mouse trail color is, of course, configurable via the extension's preferences dialog, which you can access with its own gesture (see Figure 2). Click and hold the right mouse button and trace left, then down, then right, then up, then left again (LDRUL), or click Tools, Add-ons, then select Preferences from there.

Figure 2: Open the FireGestures preferences with a gesture: left, down, right, up, and left.

Once the preferences dialog appears, it provides you with three main sections via a top icon bar. These icons are labeled "Main", "Mapping", and "Advanced" (see Figure 3).

Figure 3: FireGestures has a gesture-activated preferences dialog.

Under the Main section, you can change the color of the mouse trail, change its size, or turn it off altogether. This is also where you can select the mouse button to use should you decide that the right mouse button doesn't work for you. One setting I find particularly useful is the gesture timeout. The idea here is that if you start a gesture and don't complete it within some defined period (a few seconds), the gesture is ignored.

To view, change, or add to the list of gestures, click the Mapping button at the top. You'll see a huge list of common browser functions and the mouse movements needed to trigger them (see Figure 4). Each gesture is mapped using single letters to indicate direction. It's pretty easy to remember: up is "U", down is "D", left is "L", and right is "R". Move forward with a right motion and back with a left flick of your mouse. Zoom into the text by moving left, right, then up. Left, right, then down zooms out. It doesn't take long to get the hang of this, and after a while, it all seems perfectly natural.

Figure 4: Each gesture is mapped using single letters to indicate direction.
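If you're curious how a gesture recognizer turns raw mouse motion into those letter strings, here's a rough Python sketch. FireGestures itself is JavaScript inside Firefox, so treat this as a conceptual model, not its actual code; the encoding function is my own, but the example mappings (right = Forward, left = Back, LRU = zoom in, LRD = zoom out) are the ones described above.

```python
def encode_gesture(deltas):
    """Turn a list of (dx, dy) mouse movements into a string like 'LRU'.

    Screen coordinates grow downward, so dy > 0 means 'D' (down).
    Consecutive repeats collapse into a single letter.
    """
    letters = []
    for dx, dy in deltas:
        if abs(dx) >= abs(dy):                 # mostly horizontal motion
            letter = "R" if dx > 0 else "L"
        else:                                  # mostly vertical motion
            letter = "D" if dy > 0 else "U"
        if not letters or letters[-1] != letter:
            letters.append(letter)
    return "".join(letters)

# Mappings taken from the article's examples above
mapping = {"R": "Forward", "L": "Back", "LRU": "Zoom In", "LRD": "Zoom Out"}

strokes = [(-30, 2), (28, -3), (1, -25)]       # left, right, then up
print(mapping.get(encode_gesture(strokes)))    # -> Zoom In
```

Once the motion is reduced to a letter string, matching it against the mapping table is a plain dictionary lookup, which is why adding your own gestures is so cheap.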

Down at the bottom of the mapping options, and on every tab actually, there's a blue link that says "Get Scripts". On the FireGestures site, there are a number of gesture sequences (or scripts) that do things not included in the extension's main package. Adding these scripts is pretty easy: make sure the mapping options are open, visit the site, then drag the script that interests you into the mapping window. It's that easy. If you'd rather create your own gesture, click the "Add Script" button.

I'll let you explore the Advanced tab, which gets into things like mapping extra buttons and using features like mouse wheel and left/right rocker gestures, and so on. But for now, that's it. That is how you enable gestures in Firefox.


Giving Your Computer The Finger

John Underkoffler explains the human-computer interface he first designed as part of his advisory work for the film Minority Report. The system, called "g-speak", is now real and working. Note the gloves Underkoffler is wearing. (Image from Wikipedia)

For as long as we've had personal computers, we've been trying to decide how we interact with these things. The mouse and graphical display were a major improvement over text-only input, though I know some of you out there will argue for the keyboard to this very day (you know who you are). Nevertheless, every time we come up with something we like in the way people and machines interact (what we call HMI, or Human Machine Interface), we decide that while it's okay, it still isn't quite right. That's okay. As my wife likes to point out, if people like something, they'll want to change it; it's when they don't use it that you know you've lost them.

So it is with the humble mouse and the graphical user interface (which came out of Xerox PARC back in the '70s). It seemed like an awfully good idea . . . and it still is. When Apple released its own version of the graphical desktop inspired by Xerox, personal computing changed forever. Point here, click there, and magical things happen. Right-click and menus pop up, into which we dig ever deeper to make other things happen. The advent of clicking and dragging brought a real-life cause and effect onto the desktop's two-dimensional space. Hold on to this virtual object and drag it to a new location, or deposit this virtual object into this virtual container, whether it be a trash can or a file folder. Introducing motion into an otherwise static environment enhanced human-machine interaction.

In the movie adaptation of Philip K. Dick's "Minority Report", Tom Cruise stands in front of a virtual screen, using hand gestures to manipulate visions of future crimes, pulling this image here, setting that one aside, zooming in, pushing that one back, and looking for information on the individuals concerned. For techie geeks like me, that user interface was the real star of the show and, years later, what I remember most clearly about the film. However useful such an interface might or might not be, it was seriously cool.

That gesture-based interface was designed by John Underkoffler, and it has since become an actual product called the "g-speak Spatial Operating Environment", developed by his company, Oblong Industries. Underkoffler also worked on other visualization and interface techniques, including holography and animation, while at MIT. For a really cool demonstration, and a fascinating talk by Underkoffler, visit the site and pop his name into the search field.

The idea of gesture-based systems is obviously an attractive one, because we keep exploring it. If you've seen "Iron Man 2", you know Tony Stark interacts with his own supercomputer via gestures, without special gloves. In this natural environment, the idea behind the tech becomes downright sexy. But Stark doesn't just use gestures; he also talks to the system in an almost conversational way, issuing commands as thoughts pop into his head. The system reacts to his speech and actions in an almost organic way, as though it were just an extension of himself, much like his Iron Man suit. Too fanciful for you? A group of German scientists at Fraunhofer FIT has developed what might be called the next generation of human gesture-based systems. Unlike Oblong's system, this three-dimensional interface doesn't require any special gloves, just like Tony Stark's system.

Ever since computers started coming into the hands of everyday users, we have been trying to reinvent the way people interact with these things. From inputting code via jumpers and switches, to keyboards, to the graphical UI that made Apple a household word (the company, not the fruit), it seems we can't ever find an interface we like. At least not for long. All of us work happily (more or less) with a keyboard and mouse, but they are limiting, hence all these fascinating developments in human-machine interface (HMI) design. We want to touch, wave to, pinch, tap on, and talk to our machines. This is, I believe, part of the attraction of the latest computing marketplace, increasingly dominated by ever-smarter smartphones, iPads, Android tablets, and the BlackBerry PlayBook. What could be more direct than touching in order to make things happen? It's natural. Reach out and touch.

From the humble mouse to touch screens to science fiction ideas like artificial intelligences that respond naturally to our speech, to direct neural interfaces as seen in the nightmarish Matrix, we keep looking for other ways to interact with computers.

How about you, dear reader? Are you ready to just plug in? What's your favorite interface between human and machine?
