juise-
Posts: 186 | Thanked: 192 times | Joined on Jan 2010 @ Finland
#48
Originally Posted by ysss:
but less accuracy..?
I'm going way off topic, but we're there already, so here goes.

I think the popular implementations of both technologies reach sufficient and comparable accuracies from the hardware.

The question then becomes how to make use of the information provided by the input device. This, in my opinion, is what can make or break the illusion of accurate input, and the "smartness" of the device.

One thing I dislike about the N900's touch screen input processing is that the input from the touch sensor is taken literally by the OS, as it would be if I were using a mouse.

E.g. there's one small link in the middle of a web page I'm viewing, and it's the only clickable item on the screen. I try to click it with my thumb, and in the process, cover it from my view. What happens is that I miss by 2 pixels, because the thing is covered by my thumb and I cannot see it. Now, the OS knows I clicked, and makes the click sound. It just isn't smart enough to know I tried to click the link 2 pixels away, making both it and me look stupid.

In a way, this developer laziness may be caused by the resistive touchscreen technology, as it gives an "exact" point that was touched. Capacitive, OTOH, senses the whole touched area (and a bit more), and it's up to the OS driver developer to find the center of that area (or to notice there was actually more than one area, meaning multitouch). While doing that, it's a natural step to check whether there was anything clickable inside the *area* that was touched and activate that, making it act closer to the "do what I think" ideal.
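To illustrate the capacitive case, here's a minimal sketch of turning raw sensor data into touch points. It assumes the sensor reports a binary grid of touched cells (all names here are hypothetical, not any real driver API): contiguous touched cells are grouped into areas, each area yields a centroid, and more than one area means multitouch.

```python
# Sketch: group touched sensor cells into areas and compute their centroids.
# grid is a list of rows; truthy cells registered a touch (hypothetical input).

def touch_areas(grid):
    """Return one (x, y) centroid per contiguous touched area."""
    seen = set()
    centroids = []
    for y, row in enumerate(grid):
        for x, cell in enumerate(row):
            if cell and (x, y) not in seen:
                # Flood-fill one contiguous touched area.
                stack, area = [(x, y)], []
                while stack:
                    cx, cy = stack.pop()
                    if (cx, cy) in seen:
                        continue
                    if not (0 <= cy < len(grid) and 0 <= cx < len(grid[cy])):
                        continue
                    if not grid[cy][cx]:
                        continue
                    seen.add((cx, cy))
                    area.append((cx, cy))
                    stack += [(cx + 1, cy), (cx - 1, cy),
                              (cx, cy + 1), (cx, cy - 1)]
                centroids.append((sum(p[0] for p in area) / len(area),
                                  sum(p[1] for p in area) / len(area)))
    # Two or more centroids would indicate multitouch.
    return centroids
```

A real driver works on analog pressure/capacitance values and does filtering, but the grouping-then-centroid step is the core idea.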

Given that UI elements are far enough from each other, there's no reason why a "larger touch area" could not be implemented and work properly on a resistive screen, too. Someone's just been slacking off.
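The "larger touch area" idea above can be sketched even for a single-point (resistive) input: instead of requiring the tap to land exactly inside a target, snap it to the nearest clickable target within a small slop radius. This is a hedged illustration, not the N900's actual code; the target format and names are made up.

```python
# Sketch: fat-finger hit-testing for a single tap point.
# targets is a list of (name, x, y, w, h) clickable rectangles (hypothetical).

def pick_target(tap_x, tap_y, targets, slop=15):
    """Return the target under the tap, or the nearest one within `slop` px."""
    best, best_d = None, slop + 1
    for name, x, y, w, h in targets:
        # Distance from the tap to the rectangle (0 if the tap is inside it).
        dx = max(x - tap_x, 0, tap_x - (x + w))
        dy = max(y - tap_y, 0, tap_y - (y + h))
        d = (dx * dx + dy * dy) ** 0.5
        if d < best_d:
            best, best_d = name, d
    return best  # None if nothing clickable is close enough
```

With this, the "missed by 2 pixels" scenario from earlier still activates the link, because 2 px is well inside the slop radius, while a tap far from anything clickable still does nothing.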
The Following 3 Users Say Thank You to juise- For This Useful Post: