Posts: 148 | Thanked: 199 times | Joined on Nov 2009
#181
Originally Posted by ste-phan
It's not the name that counts, it's the strategy.

After crippling, or at least delaying, further development of Maemo / Harmattan in 2009 / 2010, Intel repacks the MeeGo virus, transfers it into the Tizen project, and releases it on Samsung.

Samsung was on the verge of adopting WebOS, but MS and Intel decided to weaken another competitor with the same successful MeeGo strategy. Samsung swallowed the bait.

The real Intel deal with Nokia was not MeeGo but a long-term strategy to bring Wintel to the phone arena for good, under the Nokia brand, by the end of 2012.

Hence the delay of the Nokia Windows Phone, of course: they need to ready the hardware, which still has some power consumption issues.
I laughed, nice post. But you know what? There's some truth here, namely that the whole MeeGo adventure might have slowed us down wrt. shipping the N9. The resources that were wasted on MeeGo could have been better used internally @ Nokia. And we would have had less public pressure, too (which lets you work in a more focused way). At least that is true for me.

MeeGo was simply not successful, as much as some of us might want to believe otherwise.

Who knows - without MeeGo Nokia might not have been forced into Windows Phone at all* and Maemo would have lived on. That's what really saddens me.

*The state of MeeGo was already a cruel joke in February this year. Basic things weren't working, and the attitude of rewriting/replacing anything that came from Nokia didn't help either. Of course the MeeGo 1.2 schedule then slipped, and the MeeGo 1.3 schedule would also have slipped. Remember how everyone @ the spring conference was expecting to see real MeeGo devices? Yeah, right. In the end, this chaos (and not the community work) is what managers see, and they get to make their decisions based on that.
 

The Following 2 Users Say Thank You to mikhas For This Useful Post:
Posts: 148 | Thanked: 199 times | Joined on Nov 2009
#182
Originally Posted by RFS-81
So, will this be the Last Great Thing, or just another episode in a series of things to be abandoned for something better after a year?
No guarantees for that. It's a bet, if you will.

Originally Posted by RFS-81
Where does the need to get something finished come from? For whom is this important enough not to throw away if it doesn't seem to fly at first?
The need to keep something alive comes as soon as you have a business based on it and the business is profitable. But *getting* there is the risky part (i.e., the above-mentioned bet).
 
Posts: 183 | Thanked: 113 times | Joined on Jun 2010
#183
Originally Posted by mikhas
*The state of MeeGo was already a cruel joke in February this year. Basic things weren't working, and the attitude of rewriting/replacing anything that came from Nokia didn't help either. Of course the MeeGo 1.2 schedule then slipped, and the MeeGo 1.3 schedule would also have slipped. Remember how everyone @ the spring conference was expecting to see real MeeGo devices? Yeah, right. In the end, this chaos (and not the community work) is what managers see, and they get to make their decisions based on that.
That is the blunt truth. I remember actually installing the MeeGo SF edition; my pupils literally widened with shock, as I clearly remembered the roadmap stating 1.2 as a stable version for vendors.

I do wonder: how did MeeGo get into this mess? Was it a lack of resources? An over-optimistic schedule? Unrealistic objectives? Or maybe even bad leadership?

Even though I am nowhere near Finland or Nokia, it's not hard to see a detachment of sorts between the decision makers and the crew leaders in the 'field'. MeeGo was built from dust, around x86 specs that Nokia didn't need, instead of using the already quite mature Maemo 5.

The thing I find most annoying is that I am 100% sure that putting Maemo 5, as it is, on a 1 GHz, 1 GB RAM device would suit this OS much better than the N900's specs, thus making it a good starting point.

In closing, one could argue that going into this deal with Intel was the bad move.
 
Posts: 1,341 | Thanked: 708 times | Joined on Feb 2010
#184
Originally Posted by attila77
Let's say you have a large game. An oldschool rpm/deb process would be - download the repository list, get the package file, get its dependencies, download all files (network heavy), extract all files (disk heavy), install all files (CPU heavy).

A next-gen approach would be - have a remote call to determine what is needed for the given package (yes, servers can and need to be smarter than just being a dumb HTTP server), and then stream it - extracting it on the go and installing in parallel (I didn't mean installing multiple packages, but having the download/extract/install phases happening in parallel) - being able to resume if the connection is lost. All the while it could skip getting parts it already has (say, from a previous install, in rsync-style manner) or doesn't need, to minimize download size. Such a process would use less CPU, less network bandwidth and be faster than what is possible with yum/rpm (or apt/deb, etc). Yes, bold and slightly smelling of a brand new wheel, but it would really be nice.
Basically the logic would stay the same, but the workload would be transferred to the server (cloud)? The server would know the dependencies of app X and would offer them to the client in the "magically known" correct order. The client would accept and install packages, or reject them if it notices some package is already installed.

Installing packages on the fly while transferring would have to be done in a way that preserves any old files about to be overwritten, because in the end the installed package may turn out to be broken or compromised, with the GPG-signed checksum not matching. Alternatively, the packages could use GPG-signed hash trees, installing parts from first to last only after each has been authenticated; that would increase the download size a bit. Another way would be to allow only SSL connections and trust that the server is never compromised, which is not wise.

Then there couldn't be pre-install scripts in the packages, or they would have to be GPG-signed separately. And pre-install scripts would all have to be aware that there may be old versions of the same pre-install jobs that have to be preserved. Or the OS could run pre-install scripts in a sandbox and commit the changes only if they do not overwrite anything old.

rpm's transaction support could be developed further to do the above.

What if there is a conflict, where a new software resource package B would overwrite some files from a previously installed package A? If they were two different versions of the same library or resource, then resource packages would have to tag all their files with their version number - including files in /etc/, for example. Or the OS would have to keep multiple versions of the same files and know which libraries and programs depend on which resource versions. That would change the whole UNIX way of, for example, using /etc files and */*lib.conf files.

Post-install scripts have partly the same problems as pre-install scripts if multiple installed versions of resource packages are supported.

It would also waste space on the mobile device: if some lazy app developer never updates his program package, the old versions of the resources the app depends on would have to be kept on the device for as long as the app is installed.
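
To make the streaming idea concrete, here is a minimal sketch of such an install loop, assuming a hypothetical manifest of per-chunk SHA-256 hashes coming from a GPG-signed index, and an illustrative fetch_chunk() standing in for the network call (neither is a real apt/rpm API). It overlaps download, verification and staging, reuses chunks it already has, and never touches the installed tree until everything has been authenticated - which also covers the rollback concern above:
Code:
import hashlib
import os
import shutil

def install_streaming(manifest, fetch_chunk, cache_dir, dest_dir):
    """Stream one package: download, verify and stage it chunk by chunk.

    manifest: list of (filename, [sha256 hex digests]) pairs, assumed to
              come from a GPG-signed index, so the hashes are trusted.
    fetch_chunk: callable(digest) -> bytes; hypothetical network fetch.
    """
    staging = dest_dir + ".staging"       # installed tree stays untouched
    os.makedirs(staging, exist_ok=True)
    os.makedirs(cache_dir, exist_ok=True)

    for filename, digests in manifest:
        with open(os.path.join(staging, filename), "wb") as out:
            for digest in digests:
                cached = os.path.join(cache_dir, digest)
                if os.path.exists(cached):      # rsync-style reuse of known chunks
                    with open(cached, "rb") as f:
                        data = f.read()
                else:
                    data = fetch_chunk(digest)  # network fetch
                if hashlib.sha256(data).hexdigest() != digest:
                    shutil.rmtree(staging)      # abort cleanly; nothing installed
                    raise ValueError("chunk %s failed verification" % digest)
                if not os.path.exists(cached):  # cache verified chunks, so a lost
                    with open(cached, "wb") as f:  # connection only costs a re-run
                        f.write(data)
                out.write(data)

    # Commit only after every chunk has been verified; keep the old tree
    # around so a package that later turns out broken can be rolled back.
    if os.path.isdir(dest_dir):
        os.rename(dest_dir, dest_dir + ".old")
    os.rename(staging, dest_dir)
Real package managers would still have to handle scripts, triggers and dependency ordering on top of this, but it shows the shape of the bet: with per-chunk hashes the phases can overlap, and nothing is committed before it is authenticated.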

Last edited by zimon; 2011-10-03 at 10:54.
 
bergie's Avatar
Posts: 381 | Thanked: 847 times | Joined on Jan 2007 @ Helsinki
#185
Originally Posted by volt
Traditional Linux ARM apps are harder to port, but they are powerful tools if someone manages to port them.
You might find Emscripten interesting: it compiles normal C and C++ code to JavaScript and adds a POSIX-compliant wrapper around it. With it you can already run stuff like Python or eSpeak in modern browsers.
 

The Following 2 Users Say Thank You to bergie For This Useful Post:
Posts: 3,319 | Thanked: 5,610 times | Joined on Aug 2008 @ Finland
#186
Originally Posted by bergie
You might find Emscripten interesting: it compiles normal C and C++ code to JavaScript and adds a POSIX-compliant wrapper around it. With it you can already run stuff like Python or eSpeak in modern browsers.
That's an abomination! What next, QEMU in JS? :P


... to revisit one tidbit tho:

Originally Posted by mr_jrt
Strange...on my desktop Debian install it works brilliantly with ~29050 binary packages.
time apt-get update:
Code:
...
Fetched 13.9 MB in 49s (281 kB/s)
Reading package lists... Done
Command being timed: "apt-get update"
User time (seconds): 11.10
System time (seconds): 0.50
Percent of CPU this job got: 23%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:49.54
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 15808
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 30862
Voluntary context switches: 13521
Involuntary context switches: 2412
Swaps: 0
File system inputs: 0
File system outputs: 159096
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0


Code:
attila@t410:~$ du -s -c /var/lib/apt /var/lib/dpkg/
83728 /var/lib/apt
68468 /var/lib/dpkg/
152196 total


On mobile, this simply won't do (13.9 MB of network traffic, 11 seconds of user time on a dual-core Core i5, 160K file IO requests... just to find out if there is anything new - on a repository that is almost TWO ORDERS OF MAGNITUDE smaller than what you can expect on mobile).
__________________
Blogging about mobile linux - The Penguin Moves!
Maintainer of PyQt (see introduction and docs), AppWatch, QuickBrownFox, etc
 
bergie's Avatar
Posts: 381 | Thanked: 847 times | Joined on Jan 2007 @ Helsinki
#187
Originally Posted by attila77
That's an abomination! What next, QEMU in JS? :P
Here you go.
 

The Following 2 Users Say Thank You to bergie For This Useful Post:
Posts: 249 | Thanked: 277 times | Joined on May 2010 @ Brighton, UK
#188
Originally Posted by attila77
time apt-get update:
...
On mobile, this simply won't do (13.9 MB of network traffic, 11 seconds of user time on a dual-core Core i5, 160K file IO requests... just to find out if there is anything new - on a repository that is almost TWO ORDERS OF MAGNITUDE smaller than what you can expect on mobile).
On my ancient Debian server tracking Debian testing:
Code:
time apt-get update
<snip>
Fetched 271 kB in 7s (34.5 kB/s)
real    0m12.073s
user    0m2.812s
sys     0m0.336s

du -s -c /var/lib/apt /var/lib/dpkg/
58392   /var/lib/apt
29084   /var/lib/dpkg/
87476   total
...so I'm not sure what you're doing wrong... but 271 kB is fine by me!
 
Posts: 3,319 | Thanked: 5,610 times | Joined on Aug 2008 @ Finland
#189
Originally Posted by mr_jrt
time apt-get update
...so I'm not sure what you're doing wrong... but 271 kB is fine by me!
I didn't do anything wrong, but I did clear my package cache. The 271 kB in your case is the size of the Packages files that were updated - apt obviously doesn't download what has been downloaded before (if you did apt-get update again, it would say 0 bytes). In some setups, apt would get DiffIndexes to cut down on size - but this does not help much in a global appstore context, where you have, say, at least 1000 new packages a day. If we used Maemo-style metadata, the AppStore's Packages files would be around 10GB.
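
For the curious, the back-of-envelope arithmetic behind those numbers looks roughly like this (the per-package metadata sizes are assumptions for illustration, not measured values):
Code:
# Rough scaling check; the bytes-per-entry figures are assumed, not measured.
DEBIAN_PKGS = 29050                # packages in mr_jrt's desktop repository
APT_ENTRY = 500                    # ~0.5 kB of plain apt metadata per package
MAEMO_ENTRY = 3500                 # ~3.5 kB per Maemo-style entry (descriptions, icon links, ...)

appstore_pkgs = DEBIAN_PKGS * 100  # "two orders of magnitude" larger
print("desktop apt index: ~%.0f MB" % (DEBIAN_PKGS * APT_ENTRY / 1e6))      # ~15 MB, near the 13.9 MB fetch
print("appstore index:    ~%.0f GB" % (appstore_pkgs * MAEMO_ENTRY / 1e9))  # ~10 GB
print("daily churn alone: ~%.1f MB" % (1000 * MAEMO_ENTRY / 1e6))           # ~3.5 MB of diffs per day
So even with DiffIndex-style deltas, every client ends up downloading megabytes of metadata per day, most of which it will never look at.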

Originally Posted by bergie
Here you go.
I believe that project single-handedly wiped out half of the world's puppy population.
__________________
Blogging about mobile linux - The Penguin Moves!
Maintainer of PyQt (see introduction and docs), AppWatch, QuickBrownFox, etc
 

The Following 3 Users Say Thank You to attila77 For This Useful Post:
Posts: 1,298 | Thanked: 2,277 times | Joined on May 2011
#190
Originally Posted by maakoi
Actually, isn't HTML5 cross-platform? No GTK, Qt or proprietary BS - it works (hopefully) on every platform?
Maybe some day we can choose whether we want an M$ kernel or Linux, and all the apps will work. Those who want a slower device with more crashes, one that costs more and cripples your freedom, can choose the monopoly/mafia OS, and the rest can choose Linux.

Maybe the desktop distros should start thinking about HTML5? The software companies could finally make really cross-platform games and other apps. Don't worry, it will never happen...
HTML5 is "cross-platform" with many caveats around browser differences, and those are a pain at present. It's not yet mature enough to be really solidly cross-platform. While having it as an option is good, making it the only promoted option is bad - at least at present. And JavaScript is not suitable for every use case. WebGL is still way behind in performance on mobile, so using native code has its benefits. (And it's not more cross-platform than OpenGL itself, by the way, so bad luck finding it on some devices.)

Last edited by shmerl; 2011-10-05 at 17:24.
 

The Following User Says Thank You to shmerl For This Useful Post: