Thread: Tizen?
Originally Posted by attila77:
Let's say you have a large game. An old-school rpm/deb process would be: download the repository list, get the package file, resolve its dependencies, download all files (network heavy), extract all files (disk heavy), install all files (CPU heavy).

A next-gen approach would be: make a remote call to determine what is needed for the given package (yes, servers can and need to be smarter than just being a dumb HTTP server), and then stream it, extracting it on the go and installing in parallel (I didn't mean installing multiple packages, but having the download/extract/install phases happen in parallel), while being able to resume if the connection is lost. All the while it could skip getting parts it already has (say from a previous install, in rsync-style manner) or doesn't need, to minimize download size. Such a process would use less CPU, less network bandwidth and be faster than what is possible with yum/rpm (or apt/deb, etc.). Yes, bold and slightly smelling of a brand new wheel, but it would really be nice.
So basically the logic would stay the same, but the workload would be transferred to the server (cloud)? The server would know the dependencies of app X and would offer them to the client in the "magically known" correct order. The client would accept and install each package, or reject it if it notices the package is already installed. A rough sketch of that client/server split is below.
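
Something like this, as a very rough sketch (all names here, resolve_dependencies, installed_db and fetch_and_install, are invented for illustration; this is not any existing repo protocol): the server walks the dependency graph and returns an install order, and the client refuses to re-download anything it already has.

[CODE]
def resolve_dependencies(app, server_index):
    """Server side: return the install order for 'app' (dependencies first)."""
    order, seen = [], set()
    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in server_index[pkg]["depends"]:
            visit(dep)
        order.append(pkg)
    visit(app)
    return order                      # dependencies first, 'app' last

def client_install(app, server_index, installed_db, fetch_and_install):
    """Client side: skip packages that are already present, stream the rest."""
    for pkg in resolve_dependencies(app, server_index):
        if pkg in installed_db:
            continue                  # already installed -> no download at all
        fetch_and_install(pkg)        # streamed download/extract/install
        installed_db.add(pkg)
[/CODE]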

Installing packages on the fly while transferring would have to be done in such a way that any old files about to be overwritten are saved first, because in the end the installed package may turn out to be broken or compromised and the GPG-signed checksum may not match. Another option is to use GPG-signed hash trees in the packages and install the parts, from first to last, only after each one has been authenticated; that would increase the download size a bit. Yet another way would be to allow only SSL connections and trust that the server is never compromised, which is not wise.
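
The signed-hash idea could look roughly like this minimal sketch: the manifest carries one GPG signature plus a list of per-chunk SHA-256 hashes, and each streamed chunk is checked before anything is staged. verify_gpg_signature and the manifest format are made-up placeholders, not any real package format.

[CODE]
import hashlib

def install_streamed(chunks, manifest, staging_dir, verify_gpg_signature):
    # One signature covers the whole hash list, so it is checked exactly once.
    if not verify_gpg_signature(manifest):
        raise RuntimeError("manifest signature invalid, aborting install")
    for i, chunk in enumerate(chunks):            # chunks arrive in order
        expected = manifest["sha256"][i]
        if hashlib.sha256(chunk).hexdigest() != expected:
            raise RuntimeError("chunk %d failed verification" % i)
        with open("%s/chunk-%05d" % (staging_dir, i), "wb") as f:
            f.write(chunk)                        # safe to stage, already verified
    # Only after every chunk has been verified would the staged files
    # be moved into their final locations.
[/CODE]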

Then there couldn't be pre-install scripts in the packages at all, or they would have to be GPG-signed separately. Pre-install scripts would also all have to be aware that there may be old versions of the same pre-install jobs which have to be preserved. Or the OS should run pre-install scripts in a sandbox and commit their changes only if they do not overwrite anything old.
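
The "commit only if nothing old gets overwritten" check could be as simple as this sketch. Actually running the script in a sandbox (chroot, container, overlay mount) is left out; only the commit step is shown, and commit_if_no_overwrite is a hypothetical helper.

[CODE]
import os, shutil

def commit_if_no_overwrite(staging_root, system_root="/"):
    """Move sandboxed output into place only if no existing file would be clobbered."""
    staged = []
    for dirpath, _, files in os.walk(staging_root):
        for name in files:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, staging_root)
            dst = os.path.join(system_root, rel)
            if os.path.exists(dst):
                return False          # would overwrite something old -> reject
            staged.append((src, dst))
    for src, dst in staged:           # no conflicts, commit everything
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)
    return True
[/CODE]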

rpm's transaction support could be developed further to do all of the above.

What if there is a conflict, so that a newly installed resource package B would overwrite some files from a previously installed package A? If these are two different versions of the same library or resource, then resource packages should tag all their files with their version number, including files in /etc for example. Or the OS should keep multiple versions of the same files and know which libraries and programs depend on which resource versions (a toy illustration is below). Either way it would change the whole UNIX way of using, for example, /etc files and */*lib.conf files.
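
As a toy illustration of that tagging idea (the paths, the @version suffix and both tables are purely hypothetical, nothing like today's rpm/deb layout):

[CODE]
RESOURCE_STORE = {                    # (filename, version) -> versioned path
    ("libfoo.so", "1.2"): "/usr/lib/versions/libfoo.so@1.2",
    ("libfoo.so", "2.0"): "/usr/lib/versions/libfoo.so@2.0",
    ("foo.conf",  "1.2"): "/etc/versions/foo.conf@1.2",
    ("foo.conf",  "2.0"): "/etc/versions/foo.conf@2.0",
}

APP_DEPENDS = {                       # which version each installed app pinned
    "oldgame": {"libfoo.so": "1.2", "foo.conf": "1.2"},
    "newtool": {"libfoo.so": "2.0", "foo.conf": "2.0"},
}

def resolve(app, filename):
    """Return the path an app should actually open for a shared resource."""
    version = APP_DEPENDS[app][filename]
    return RESOURCE_STORE[(filename, version)]

# resolve("oldgame", "foo.conf") -> "/etc/versions/foo.conf@1.2"
[/CODE]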

Post-install scripts have partly the same problems as pre-install scripts if multiple installed versions of resource packages are supported.

Supporting multiple versions would also waste space on the mobile device: if some lazy app developer never updates his program package, the old versions of the resources the app depends on would have to be kept on the device for as long as the app stays installed.
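
Which also means an old version could only be garbage-collected once no installed app still pins it. Continuing the toy tables from the previous sketch, the check would be something like:

[CODE]
def removable_versions(resource_store, app_depends):
    """Return (filename, version) pairs no installed app references any more."""
    pinned = set()
    for deps in app_depends.values():
        for filename, version in deps.items():
            pinned.add((filename, version))
    return [key for key in resource_store if key not in pinned]
[/CODE]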
