Originally Posted by Estel:
I second the idea of making it system-wide, with activation on proximity sensor cover. Disabling click on hildon-home is a no-go; it would be a PITA when using an external mouse (or any mouse-like thing) via USB hostmode or Bluetooth.
The proximity sensor uses battery power every time it's polled. Since it's a rudimentary is-it-bright-enough-to-assume-I'm-open piece of hardware, if you poll it at all often you do get a slightly noticeable increase in battery drain.

So, as nice as proximity sensor activation is, make it configurable. And if the gesture detection is done RIGHT, it wouldn't ever be necessary. All the background gesture detection daemon SHOULD do is process screen events and report to the rest of the system (over DBus, or something else if there are more efficient channels, idk) that gesture [gesture number/name/whatever] was detected.
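For concreteness, here's a minimal sketch of what the announcing side could look like with plain libdbus. The bus path, interface, and signal names are made up for illustration, and a real daemon would emit this from inside its event loop rather than from main():

/* build roughly: gcc emit.c $(pkg-config --cflags --libs dbus-1) */
#include <dbus/dbus.h>
#include <stdio.h>

static int emit_gesture(DBusConnection *conn, const char *gesture_name)
{
    /* object path, interface and signal name are hypothetical */
    DBusMessage *msg = dbus_message_new_signal(
        "/org/example/GestureDaemon",
        "org.example.GestureDaemon",
        "GestureDetected");
    if (!msg)
        return -1;

    dbus_message_append_args(msg,
                             DBUS_TYPE_STRING, &gesture_name,
                             DBUS_TYPE_INVALID);
    dbus_connection_send(conn, msg, NULL);
    dbus_connection_flush(conn);
    dbus_message_unref(msg);
    return 0;
}

int main(void)
{
    DBusError err;
    dbus_error_init(&err);
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn) {
        fprintf(stderr, "session bus: %s\n", err.message);
        return 1;
    }
    /* the real daemon would call this whenever its detection code
     * recognises a gesture; here we just emit one example */
    emit_gesture(conn, "swipe-from-left-bezel");
    return 0;
}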

Then applications should set up their own 'listeners' for such events if they want to take advantage of specific gestures. As part of the project, however, you could provide hildon-desktop/home patches to handle some of the gestures by default (I'm thinking swipes from the bezel in either direction could be reserved for hildon-desktop, or at least up/down swipes). This would probably get CSSU'd eventually if stable and good enough.
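And the application side could be as simple as a match rule plus a filter callback, again with the same hypothetical names. A real Hildon app would hook the connection into its GLib main loop instead of spinning its own:

#include <dbus/dbus.h>
#include <stdio.h>
#include <string.h>

static DBusHandlerResult on_message(DBusConnection *conn, DBusMessage *msg, void *data)
{
    (void)conn; (void)data;
    if (dbus_message_is_signal(msg, "org.example.GestureDaemon", "GestureDetected")) {
        const char *gesture = NULL;
        if (dbus_message_get_args(msg, NULL,
                                  DBUS_TYPE_STRING, &gesture,
                                  DBUS_TYPE_INVALID)) {
            if (strcmp(gesture, "swipe-from-left-bezel") == 0)
                printf("handle left-bezel swipe here\n");
        }
        return DBUS_HANDLER_RESULT_HANDLED;
    }
    return DBUS_HANDLER_RESULT_NOT_YET_HANDLED;
}

int main(void)
{
    DBusError err;
    dbus_error_init(&err);
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn)
        return 1;

    dbus_bus_add_match(conn,
        "type='signal',interface='org.example.GestureDaemon',member='GestureDetected'",
        &err);
    dbus_connection_add_filter(conn, on_message, NULL, NULL);

    /* toy loop; a real app would plug the connection into its main loop */
    while (dbus_connection_read_write_dispatch(conn, -1))
        ;
    return 0;
}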

I would say the minimal requirement for gesture detection should be swipe-from-bezel from each side, plus the clockwise/counterclockwise rotation. If you want to go that far, add swipes from the corners of the bezel, and optionally screen-into-bezel or bezel-to-bezel. Be careful with the bezel stuff, though: as I say below, there's no touch-sensitivity in the bezels afaik, so you'd really be watching for gestures that begin or end at the very EDGE of the screen, which could interfere with normal functionality - like dragging a text selection to the edge of the screen in MicroB to select more text than fits on the screen at once. So I personally vote against screen-to-bezel, though bezel-to-other-bezel would provide some more options for programs to use. These gestures could then be used by any application, hildon-desktop/home included. As far as I understand, such an implementation would be perfectly compatible with MicroB and the like using gesture detection for the same gestures, since those are built into the app UIs by Nokia. BUT it means an open source recode of the MicroB UI becomes more doable, because you can pull the gesture detection from the system-wide daemon once that's out instead of writing an in-app one.

Keep in mind that the N900's screen bezel isn't touch-sensitive, so the swipe-from-bezel gestures would actually register as swipes that start at the very edge of the screen and move inward.
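The edge detection itself is basically just comparing the press coordinates against a margin. Something along these lines, where the 800x480 resolution is the N900's and the margin/travel thresholds are numbers I pulled out of thin air that would need tuning:

#include <stdio.h>

#define SCREEN_W     800
#define SCREEN_H     480
#define EDGE_MARGIN   10   /* how close to the edge the swipe must start */
#define MIN_TRAVEL    80   /* how far it must move inward to count       */

typedef enum { GESTURE_NONE, SWIPE_FROM_LEFT, SWIPE_FROM_RIGHT,
               SWIPE_FROM_TOP, SWIPE_FROM_BOTTOM } gesture_t;

/* (x0,y0) = press position, (x1,y1) = release position */
static gesture_t classify_swipe(int x0, int y0, int x1, int y1)
{
    if (x0 <= EDGE_MARGIN            && x1 - x0 >= MIN_TRAVEL) return SWIPE_FROM_LEFT;
    if (x0 >= SCREEN_W - EDGE_MARGIN && x0 - x1 >= MIN_TRAVEL) return SWIPE_FROM_RIGHT;
    if (y0 <= EDGE_MARGIN            && y1 - y0 >= MIN_TRAVEL) return SWIPE_FROM_TOP;
    if (y0 >= SCREEN_H - EDGE_MARGIN && y0 - y1 >= MIN_TRAVEL) return SWIPE_FROM_BOTTOM;
    return GESTURE_NONE;  /* started away from the edge: leave it to the app */
}

int main(void)
{
    /* press at (2, 240), release at (200, 240) -> swipe from left bezel */
    printf("%d\n", classify_swipe(2, 240, 200, 240));
    return 0;
}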

But I'm rambling, back to replying to stuff:
Originally Posted by Estel:
There already is a 100% working daemon for recognizing the proximity sensor state (not using many resources, in fact almost none at all), so this part of the idea may be considered done. It's "only" a matter of incorporating the feature.
I still say that should be configurable and off by default, because properly done system-wide gesture detection wouldn't interfere with normal use, as long as you don't feature-bloat it with gestures that are too similar to other plausible movements. The program you're in would retain normal functionality UNLESS it was patched to use one of the gestures. So drawing a spiral in MyPaint wouldn't trigger clockwise-circle-gesture events unless MyPaint had code to do something on that gesture.

You can watch which app is in focus and only send the events to that app and to white-listed system 'programs' like hildon-desktop/hildon-home (or skip the white-list and let programs register over either the system or session bus, depending on whether they want to detect gestures while not in focus). OR you can expect the apps to check whether they are in focus before acting on a gesture, which I think would be the easier and more flexible approach, though developers would have to be informed accordingly. Either way, the gesture daemon doesn't handle any of the RESULTS of the gestures - it should ONLY handle gesture detection and event announcing over DBus or some other interface.
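If you go the 'apps check focus themselves' route, the bare-Xlib way of asking looks roughly like this. A Hildon/GTK app would more likely just watch its window's is-active property; this is only a sketch of the idea, and the window used in main() is a stand-in:

/* build roughly: gcc focus.c -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>

static int window_has_focus(Display *dpy, Window mine)
{
    Window focused;
    int revert;
    XGetInputFocus(dpy, &focused, &revert);
    return focused == mine;
}

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    /* 'mine' would be your application's top-level window id;
     * the root window is used here only so the example runs */
    Window mine = DefaultRootWindow(dpy);
    printf("focused: %d\n", window_has_focus(dpy, mine));
    XCloseDisplay(dpy);
    return 0;
}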

In turn, if you incorporate some of the gestures into system-wide UI functionality - e.g. having hildon-desktop detect a swipe-up-from-bottom-bezel and return you to the task switcher, to use the WebOS/BB PlayBook example - it should be something you are very confident won't get triggered during regular use of an app. Swipe-up-from-bezel would be one such example; swipe-down-into-bezel or figure-8-swirl (if you have such gestures) would not be.

Originally Posted by Estel
BTW, as was mentioned in some huge thread a long time ago, it *is* possible to create multitouch for resistive screens. When you touch a resistive screen in 2 places, the actual "sensed" place is exactly the center between them (so if you touch the left and right parts of the screen, the device will sense 1 touch in the center). Using some complicated algorithm, it's possible to recognize when there are many touch points at once versus just a quick change of touching place. Someone (maybe qole, but I don't know for sure) once mentioned devices that incorporate that.

If something like *this* could be done, that would be real overkill.
Problem is, there are flaws even that way. If you make two touches simultaneously, it'll still think there's one touch. And if you make two touches right after each other, it could easily interpret that as a multi-touch gesture (with your second real touch being taken for the reported midpoint): the algorithm sees touch A here, then the point suddenly jumps to B, and concludes touch A is still down plus there's a second touch C calculated from those two points. Whereas in reality all you did was click very fast in two places.
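To make that concrete: the inference itself is just mirroring the old point through the new one, and the guesswork is deciding whether the jump was too fast to be a single dragging finger. A toy version, with a speed threshold that is pure guesswork on my part:

#include <stdio.h>

typedef struct { int x, y; } point_t;

#define MAX_DRAG_SPEED 3000   /* px/s one finger can plausibly drag (a guess) */

/* The panel was reporting point 'a' and now reports 'm', dt seconds later.
 * Returns 1 and fills *c if the jump is too fast to be one finger dragging,
 * in which case we guess a second touch appeared at c = 2*m - a (so that m
 * is the midpoint of a and c).  Returns 0 for a plausible single-finger drag. */
static int guess_second_touch(point_t a, point_t m, double dt, point_t *c)
{
    double dx = m.x - a.x, dy = m.y - a.y;
    double speed_sq = (dx * dx + dy * dy) / (dt * dt);   /* squared speed */
    if (speed_sq <= (double)MAX_DRAG_SPEED * MAX_DRAG_SPEED)
        return 0;                      /* plausible drag: treat as one touch */
    c->x = 2 * m.x - a.x;              /* mirror a through the midpoint m    */
    c->y = 2 * m.y - a.y;
    return 1;
}

int main(void)
{
    point_t a = {100, 240}, m = {400, 240}, c;
    if (guess_second_touch(a, m, 0.01, &c))
        printf("guessing second touch at (%d, %d)\n", c.x, c.y);
    return 0;
}

The ambiguity described above is exactly the case this heuristic gets wrong: a very fast second tap produces the same jump as a second finger landing, and nothing in the reported data distinguishes them.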

Something like it is doable, but it would be messy. Most of the code would probably be complicated extra 'error handling' trying to guess which touches were consecutive and which were simultaneous.

This is something I wouldn't mind getting into, since I'm slightly less incompetent at C now (read: I'm extremely incompetent, but extremely < completely). The general idea shouldn't be too hard once you know how to get the screen inputs with C (which I don't, yet): you do some basic arithmetic on the coordinates of the touch after every change, and for more complicated gestures like swirls you'd need to break out some fancier maths, but I have that buried in me somewhere from my calculus-learning days.
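For the swirl case, the 'fancy maths' mostly boils down to atan2: sum the signed change in angle of each sample around some center point, and an accumulated total of roughly ±2*pi means a full circle was drawn (the sign tells you the direction; note that screen coordinates have y growing downward, which flips it relative to the usual math convention). A toy sketch, with the center and path made up for the example:

/* build roughly: gcc circle.c -lm */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Sum the signed rotation of the touch path (x[i], y[i]) around (cx, cy). */
static double accumulated_angle(const double *x, const double *y, int n,
                                double cx, double cy)
{
    double total = 0.0;
    for (int i = 1; i < n; i++) {
        double a0 = atan2(y[i - 1] - cy, x[i - 1] - cx);
        double a1 = atan2(y[i] - cy, x[i] - cx);
        double d = a1 - a0;
        /* unwrap so a step never looks like almost a full turn backwards */
        while (d > M_PI)  d -= 2 * M_PI;
        while (d < -M_PI) d += 2 * M_PI;
        total += d;
    }
    return total;
}

int main(void)
{
    /* synthetic full circle of radius 100 around (400, 240) */
    enum { N = 32 };
    double x[N], y[N];
    for (int i = 0; i < N; i++) {
        double t = 2 * M_PI * i / (N - 1);
        x[i] = 400 + 100 * cos(t);
        y[i] = 240 + 100 * sin(t);
    }
    double turns = accumulated_angle(x, y, N, 400, 240) / (2 * M_PI);
    printf("turns: %.2f\n", turns);   /* ~ +1.0 for this path */
    return 0;
}

In practice you'd use the centroid of the points gathered so far as the center, and only declare the gesture once the accumulated angle crosses the threshold.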
 
