Posts: 861 | Thanked: 734 times | Joined on Jan 2008 @ Nomadic
#625
In watching Engadget's video, I think we find exactly what's happening. There's a different - one could probably use the term evolved - perspective towards gestures that Jolla has taken, one that is indeed pushing the "pictures under glass" metaphor a lot further than some can be comfortable with. Part of that is a given because of how new it is, but another part is that there are expectations towards computing which might feel simple, but are probably so learned that they are harder to divorce from than they are to be taught fresh (Ars had a piece along these lines today).

The challenge in the UI, which was actually well articulated in the video - and which Ive's iteration of iOS 7 has narrowly missed - is that because of the task-first interaction methods mobiles have used for so long, it's very hard to build an interaction model that plays in space just as much as it plays *for* a task. That's a mental hurdle that's easy peasy for those who deal with assistive technologies often, less so for those who see their everyday technology use not as assistive at all, but as normal. BlackBerry could have gone further and it would have met a similar reaction - even with its messaging-then-gesturing approach.

Refinement of Sailfish so far has meant making it look/act like what we remember it should be. It's a shame we forget how machines teach us; if we didn't, we'd have a different perspective on this and other experiments (the ones that aren't copies of what we've already learnt - Firefox OS, Ubuntu Mobile, etc.).