#3: Allowing the user to edit media (this could be extended to voice) before live upload, without requiring a laptop/PC, is a good option for convergence. Sure, you could say you need Photoshop and a big screen. Editing on a laptop/PC has its own pros and cons; this is just an alternative to that option.

Allowing image manipulation before the media is recorded is an option, just like during or after, but I question the value of 'during' because you also need to hold the device. Using hardware keys to control my digital camera or Nokia E71 is already difficult enough; I don't know whether a touchscreen is user-friendly in this regard.

I also find that, when a lot of software is running on my Nokia E71, video recording is worse than with nothing else running (wobbly effect).

With a picture the situation is different from a movie, as image manipulation happens either before or after. It's just that the picture-taking application might be able to render the data with the 'changes' (image manipulation) applied on the screen instead of saving them, giving the user feedback on the result, much like the LCD screen on a digital camera.
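
As a sketch of what I mean (plain Python; the 'camera' and the effect are stand-ins I made up, not a real Maemo API):

[code]
# Sketch: effects are applied to what is shown on screen (viewfinder
# feedback), while saving only happens when the user presses the shutter.

def desaturate(frame):
    # Example effect: grey out every pixel by averaging its channels.
    return [[(sum(px) // 3,) * 3 for px in row] for row in frame]

selected_effects = [desaturate]        # whatever the user toggled in the UI

def render_preview(raw_frame):
    shown = raw_frame
    for effect in selected_effects:
        shown = effect(shown)          # feedback of the result, like a camera LCD
    return shown                       # display only; nothing is written to disk

def take_picture(raw_frame):
    processed = render_preview(raw_frame)
    return processed                   # only at this point would anything be saved

# Tiny stand-in frame (2x2 pixels) just to show the flow.
frame = [[(200, 100, 50), (10, 20, 30)],
         [(255, 255, 255), (0, 0, 0)]]
print(render_preview(frame))
[/code]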

One could do post-processing on video or pictures like with GNOME Cheese: these plugins (effects) exist and are ready to be used.
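
Cheese's effects are basically GStreamer elements, so something along these lines should work with the 0.10 Python bindings (agingtv comes from gst-plugins-good; whether v4l2src and autovideosink are the right elements on the N900 is an assumption on my part):

[code]
# Minimal GStreamer 0.10 sketch: live video with one of the Cheese-style
# effect elements (agingtv) dropped into the pipeline.
import pygst
pygst.require("0.10")
import gst
import gobject

pipeline = gst.parse_launch(
    "v4l2src ! ffmpegcolorspace ! agingtv ! ffmpegcolorspace ! autovideosink"
)
pipeline.set_state(gst.STATE_PLAYING)

loop = gobject.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(gst.STATE_NULL)
[/code]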

I wonder: would this feature require an -rt kernel because of latency, much like low-latency audio processing with JACK does?
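
Back-of-the-envelope, the budgets differ a lot (25 fps and a 128-frame buffer at 48 kHz are just example numbers I picked):

[code]
# Rough latency budgets: one video frame at 25 fps versus a small JACK-style
# audio buffer period.
video_fps = 25
audio_buffer_frames = 128
audio_sample_rate = 48000

video_budget_ms = 1000.0 / video_fps                                  # 40.0 ms per frame
audio_budget_ms = 1000.0 * audio_buffer_frames / audio_sample_rate    # ~2.7 ms per period

print("video frame budget: %.1f ms" % video_budget_ms)
print("audio period budget: %.1f ms" % audio_budget_ms)
[/code]

So the per-frame deadline for video effects is an order of magnitude looser than what JACK users chase with -rt, and missing it merely drops a frame rather than producing an audible glitch; a stock kernel may well be enough, though I could be wrong.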

#4: Maybe some pictures are private or a bad version; this needs moderation. It's just a matter of tagging (when uploading) or having each other's contact information to share the data afterwards. To do so in real time requires standard protocols and applications (for example a cross-platform Qt application using Bluetooth, or 3G plus a service, e.g. Ovi). It could also be possible that the image is uploaded to Ovi with tags, and that users have pre-determined tags they scan for, or a combination of metadata (username, realname, location, date/time, event_name, etc.), as sketched below.
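
To make the tagging/scanning idea concrete (the field names and the matching rule are just my own guesses at what such a service could use):

[code]
# Sketch: metadata attached when uploading, and a subscriber whose
# pre-determined tags (or metadata combination) decide which uploads they see.
upload = {
    "username":   "someuser",
    "realname":   "",                     # left empty: kept private
    "location":   "Amsterdam",
    "datetime":   "2009-08-30T14:00",
    "event_name": "maemo-summit",
    "tags":       ["maemo", "summit", "public"],
}

subscription = {
    "tags":       {"maemo-summit", "public"},   # tags this user scans for
    "event_name": "maemo-summit",               # optional extra filter
}

def matches(upload, subscription):
    # Moderation by tagging: only uploads explicitly tagged 'public' and
    # matching at least one subscribed tag (and the event, if given) are shared.
    if "public" not in upload["tags"]:
        return False
    if subscription.get("event_name") and upload["event_name"] != subscription["event_name"]:
        return False
    return bool(subscription["tags"] & set(upload["tags"]))

print(matches(upload, subscription))   # True for this example
[/code]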

#1 and #2: these remind me of NFC (RFID) and QR codes, which brings me to RFIDGuardian (a mobile/embedded RFID firewall, controlled by another device such as an S60 phone, over SSL over Bluetooth IIRC). I quite like these concepts, but as with all concepts I wonder what kind of protocol would be used. If not RFID (not available on the N900), then which protocol? Bluetooth? WiFi? Do you simply carry or share a QR code or RFID tag which directs someone to your 'social network aggregator', which they then read (hence the other person requiring a data connection, e.g. 3G)? How will you do ACLs? I can already imagine people speeddating^2 based on their 'tags'. Whatever the protocol will be, there will also be a lot of spam and noise (compare with Twitter), so the user needs to actively use keywords to 'hit'.
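
A sketch of the QR-carrying variant (everything here, field names and URL included, is invented for illustration; the payload could just as well be a plain URL, and any QR library could render the resulting string):

[code]
# Sketch: what a shared QR/RFID payload pointing at a 'social network
# aggregator' profile might contain, with a tiny ACL deciding per role
# which fields are exposed.
import json

profile = {
    "aggregator_url": "http://example.org/aggregator/someuser",
    "tags":           ["maemo", "photography", "speeddating^2"],
    "realname":       "Some User",
    "phone":          "+31 6 12345678",
}

acl = {
    "stranger": {"aggregator_url", "tags"},                        # what a scanned QR exposes
    "friend":   {"aggregator_url", "tags", "realname", "phone"},
}

def payload_for(profile, acl, role):
    # Keep only the fields this role is allowed to see; the other person's
    # device still needs a data connection (e.g. 3G) to follow the URL.
    visible = dict((k, v) for k, v in profile.items() if k in acl[role])
    return json.dumps(visible)

print(payload_for(profile, acl, "stranger"))
[/code]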

Also, because all these concepts involve interaction with other humans rather than sheer data, there is a chicken-and-egg problem: either Maemo 5 is not rolled out widely enough for this to be used between communities, or it is not a standard, cross-platform application ported to other platforms. I don't mean the application and protocol have to run on every smartphone or mobile device, but if it runs on neither Symbian nor Windows Mobile nor Android/iPhone, it is not really deployed. There is nothing particularly bad about this chicken-and-egg issue; it's just something to keep in mind. Don't set the goal too high; it's just a fun experiment.

To minimize the chicken-and-egg problem, avoid interaction with other devices or humans and instead focus on processing/parsing data. All these iPhone applications which parse an API/XML or even HTML and show it in a nice, HIG'ed interface each provide an abstracted way to interface with such valuable data, making it easier than e.g. a web browser. There is a lot of room for the 'GPS application' to allow I/O, e.g. to integrate metadata drawn from these concepts into the map context, to allow part of the map to be embedded, etc.
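
For the 'GPS application' I/O idea, I picture something like this: parse whatever geotagged feed the service exposes and hand the map layer a list of points (the XML shape here is invented, not a real Ovi or Twitter format):

[code]
# Sketch: turn a (made-up) geotagged XML feed into overlay points a map
# application could draw.  xml.etree is in the standard library.
import xml.etree.ElementTree as ET

FEED = """
<items>
  <item lat="52.37" lon="4.90" tag="maemo-summit">photo by user_a</item>
  <item lat="52.09" lon="5.12" tag="maemo-summit">photo by user_b</item>
</items>
"""

def overlay_points(xml_text):
    # One (lat, lon, label) tuple per item; the map context just draws them.
    root = ET.fromstring(xml_text)
    return [(float(i.get("lat")), float(i.get("lon")), i.text.strip())
            for i in root.findall("item")]

for point in overlay_points(FEED):
    print(point)
[/code]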