Why Windows 7 will never be a touch-friendly OS

I’ve been looking at a number of touch-enabled Windows 7 computers recently, both desktop machines and tablets, and I’ll have to confess that I’m finding the whole touch experience in Windows 7 worryingly underwhelming. The almost inescapable conclusion is that Windows 7, despite its strengths and capabilities, falls a long way short of being a credible touch platform.


There are a number of reasons for this. Let’s start with the multitouch issue. You might not realise this, but most Windows 7 touch-enabled devices actually only work with one or two simultaneous touch inputs: they can’t handle three or four fingers at the same time.


To check this, open the Windows System Information page, which will tell you whether Windows 7 is touch-enabled on your machine and how many simultaneous finger touches it can handle. It seems that two is enough for most of Microsoft’s current requirements, as evidenced by this Microsoft posting.
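For the curious, the same question can be asked programmatically. The following is only a rough sketch, not anything Microsoft publishes as sample code, using two Win32 system metrics that Windows 7 exposes for exactly this purpose:

#define _WIN32_WINNT 0x0601   // target Windows 7 so the touch metrics are defined
#include <windows.h>
#include <cstdio>

int main()
{
    int digitizer = GetSystemMetrics(SM_DIGITIZER);      // digitizer capability flags
    int maxTouch  = GetSystemMetrics(SM_MAXIMUMTOUCHES); // simultaneous contacts reported

    if (digitizer & NID_READY)
        std::printf("Touch digitizer ready: %d simultaneous touches supported\n", maxTouch);
    else
        std::printf("No touch digitizer detected\n");
    return 0;
}

On the one- and two-touch hardware described above, you’d expect that second value to come back as two.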

That posting shows that nine touch gestures are defined – tap and double-tap, panning with inertia, selection/drag, press and tap with a second finger, zoom, rotate, two-finger tap, press-and-hold, and flicks. These gestures certainly cover a useful core of capabilities, but every one of them requires two fingers at most.
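For what it’s worth, here is roughly how those gestures reach an application: Windows 7 wraps each one in a WM_GESTURE message carrying a gesture ID. The window procedure below is a hypothetical sketch, not code from the Microsoft posting:

#define _WIN32_WINNT 0x0601   // Windows 7 gesture API
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_GESTURE) {
        GESTUREINFO gi = { sizeof(GESTUREINFO) };
        if (GetGestureInfo((HGESTUREINFO)lParam, &gi)) {
            switch (gi.dwID) {
            case GID_ZOOM:         break; // pinch; the span sits in gi.ullArguments
            case GID_PAN:          break; // panning with inertia
            case GID_ROTATE:       break; // two-finger rotate
            case GID_TWOFINGERTAP: break; // two-finger tap
            case GID_PRESSANDTAP:  break; // press and tap with a second finger
            }
            CloseGestureInfoHandle((HGESTUREINFO)lParam);
            return 0;
        }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

Notice that every gesture ID in that list describes something one or two fingers can do; there’s no ID for a three- or four-finger action.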

It’s hard to say why Microsoft has done it this way. It would be easy just to trot out the hackneyed opinion that this is a typical half-baked Microsoft solution, but I suspect that isn’t the case this time. I think it genuinely believes that two fingers are enough and that more than two is too confusing (just like Apple once thought about one versus two mouse buttons).

It’s interesting, then, to discover that some Windows 7 devices ship with support for gestures of more than two fingers. Although I don’t have one to hand, a good friend tells me that his Dell Latitude XT2 supports four-finger gestures. Some digging showed that this machine ships with multitouch drivers from a company called N-trig, which specialises in multi-finger gesturing for Windows. N-trig’s website lists a small number of computers for which the firm has written true multitouch drivers, for both the 32-bit and 64-bit editions of Windows 7, as well as Vista and XP. Apparently, the machine shipped with these N-trig drivers from new, and there have been several updates to the driver set since, so this is a company worth looking at if your hardware is supported.

However, wouldn’t it be nice if Microsoft were to lead in gesture interfaces, rather than trail behind with an adequate but disappointing solution? I’m told that the underlying OS supports up to ten fingers, but if the majority of the current peripheral hardware doesn’t, then who’s going to write truly multi-finger applications any time soon?
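As far as I can tell, that ten-finger figure refers to the raw touch path in the OS, which makes no two-finger assumption of its own: a window that registers for WM_TOUCH is handed however many simultaneous contacts the digitizer can physically report. The fragment below is a hypothetical sketch of that plumbing, nothing more; it assumes RegisterTouchWindow(hwnd, 0) has already been called on the window:

#define _WIN32_WINNT 0x0601
#include <windows.h>
#include <vector>

LRESULT CALLBACK TouchWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_TOUCH) {
        UINT count = LOWORD(wParam);              // simultaneous contacts in this message
        std::vector<TOUCHINPUT> contacts(count);
        if (count > 0 && GetTouchInputInfo((HTOUCHINPUT)lParam, count,
                                           contacts.data(), sizeof(TOUCHINPUT))) {
            for (const TOUCHINPUT& ti : contacts) {
                // ti.dwID identifies each finger across messages;
                // ti.x and ti.y are screen coordinates in hundredths of a pixel
                (void)ti; // a real handler would hit-test and track each contact here
            }
            CloseTouchInputHandle((HTOUCHINPUT)lParam);
            return 0;
        }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

The API is perfectly happy with ten contacts; it’s the two-touch panels that never deliver them.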

Precision required

Let’s move on to a more important limitation of Windows 7 with regard to touch, especially for tablets but also for desktop machines. The truth is that Windows has always been designed to work with a precision input device – namely, the mouse. This is, in effect (to use a simplification that will make seasoned Windows programmers cringe), a pointer to one specific pixel on the screen, so that when you click on an object, the OS knows exactly where you clicked.

There’s no room for error, no blurring or uncertainty – you either clicked on that object or you missed it. Windows has been tuned to this design schema, and the hardware has become better and better over the years – just compare the performance of a modern laser tracking mouse to a ten-year-old clunker with a rolling ball.

If we now place a finger over that same screen button instead of a mouse pointer, the fingertip generates a huge splodge of positional data, and the system software has to work out the most likely centre of this splodge and call that the click point. This isn’t a particularly difficult problem in itself – the maths is straightforward and well within the capabilities of a modern PC to calculate in real time. The problem is the size of the button itself: if it’s too small, the fingertip will completely cover it, so the user has no confidence that the button has been correctly pressed. At the very least, the user will be unable to see the button change colour to indicate its change of state.
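To give a feel for how simple the centre-finding part is, here’s a toy illustration. The Sample structure and the numbers are entirely invented, and in reality this work happens in the digitizer firmware and driver rather than in application code:

#include <cstdio>
#include <vector>

struct Sample { double x, y, pressure; };   // hypothetical digitizer readings for one fingertip

// Weighted centroid of the contact "splodge" - the most likely click point
Sample centroid(const std::vector<Sample>& blob)
{
    double sx = 0, sy = 0, sw = 0;
    for (const Sample& s : blob) {
        sx += s.x * s.pressure;
        sy += s.y * s.pressure;
        sw += s.pressure;
    }
    return { sw > 0 ? sx / sw : 0, sw > 0 ? sy / sw : 0, sw };
}

int main()
{
    std::vector<Sample> blob = { {100, 200, 0.4}, {102, 201, 0.9}, {104, 203, 0.5} };
    Sample c = centroid(blob);
    std::printf("Most likely click point: (%.1f, %.1f)\n", c.x, c.y);
    return 0;
}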

The solution – fairly obviously – is to make the buttons bigger. In fact, you need to make all of the screen furniture larger and more “finger-friendly”. But at this point we crash headlong into an issue that has plagued Windows for 20 years: its assumed screen resolution has been stuck at 96 pixels per logical inch (let’s call it 96ppli) since the earliest days of the VGA (for more on what logical inches actually are, visit emdpi.com).

If you want, you can tell Windows to increase the number of pixels per logical inch: a typical value is 120, which was used back in the late 1980s for the IBM 8514/A display, which is why it’s sometimes known as the 8514/A setting. However, your screen has a fixed number of physical pixels, because that’s how flat-panel screens are made, so increasing the number of pixels per logical inch means that the logical “system” inch gets bigger on screen – in other words, you’ve zoomed everything.
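The arithmetic involved is trivial. As a rough sketch (the 24-pixel button below is just an invented example), an application can read the system DPI setting and work out the zoom factor, which is 1.25 when moving from 96ppli to 120ppli:

#define _WIN32_WINNT 0x0600   // SetProcessDPIAware() is available from Vista onwards
#include <windows.h>
#include <cstdio>

int main()
{
    // Declare DPI awareness so Windows reports the real setting
    // rather than virtualising the process to 96ppli
    SetProcessDPIAware();

    HDC screen = GetDC(NULL);
    int dpi = GetDeviceCaps(screen, LOGPIXELSX);   // 96 by default, 120 at the "8514/A" setting
    ReleaseDC(NULL, screen);

    double scale = dpi / 96.0;                     // 1.0 at 96ppli, 1.25 at 120ppli
    int buttonPx = (int)(24 * scale + 0.5);        // a nominal 24-pixel control, rescaled

    std::printf("System DPI %d, scale %.2f, 24px control becomes %dpx\n",
                dpi, scale, buttonPx);
    return 0;
}

That SetProcessDPIAware call also hints at the compatibility problem described next: an application that never declares itself DPI-aware, and was never tested at anything other than 96ppli, is typically the one that ends up clipped or blurry.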

Zooming everything like this is fine up to a point: your buttons are bigger and more usable when touched with a fingertip, and text gets bigger too, which can be useful. The problem is that you still have a dumb button that’s simply bigger. Worse still, many applications are never tested at 120ppli, so all sorts of nasty text clipping and cropping can happen when the application tries to lay everything out at 96ppli while the OS is trying to force 120ppli.
