Thread: Can someone tell me why N900 and not Android?
Capt'n Corrupt
2009-11-03, 22:20
Posts: 3,524 | Thanked: 2,958 times | Joined on Oct 2007 @ Delta Quadrant
#147
@john
Building on your wonderful idea of a multi-platform binary:
It would be interesting to meld the cross-platform nature of a bytecode language with the performance of natively compiled code (short of hand-written assembly). How? Read on, my good friend!
The bytecode (e.g. register-based Dalvik) would be compiled to native code (not a new concept at all, I am well aware!).
But this raises a very large problem. JIT-compiling a large application, or code spread across many libraries, could eat into performance and memory, especially on a constrained device like a low-power phone running on a 1999-era architecture.
The solution is simple: caching. By caching the natively compiled code, you gain a large benefit: you needn't re-compile the code on each run. The VM does a quick check, a quick load, and an update to its offset tables at execution time, and you get native execution speed from what was once interpreted bytecode.
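To make the idea concrete, here's a rough VM-side sketch in Java. Everything in it is hypothetical (NativeCodeCache, getOrCompile, compileToNative are made-up names, and the real compiler is just a placeholder); it only illustrates the "compile once, key by a digest of the bytecode, reuse on later runs" step:

[CODE]
import java.io.File;
import java.security.MessageDigest;

// Hypothetical VM-side helper: caches natively compiled code on the
// filesystem, keyed by a digest of the class's bytecode, so compilation
// happens at most once per bytecode version.
class NativeCodeCache {
    private final File cacheDir;

    NativeCodeCache(File cacheDir) {
        this.cacheDir = cacheDir;
        cacheDir.mkdirs();
    }

    // Returns the cached native image for this bytecode, compiling it
    // only on a cache miss (first run, or after the app was updated).
    File getOrCompile(String className, byte[] bytecode) throws Exception {
        File cached = new File(cacheDir, className + "-" + digest(bytecode) + ".bin");
        if (!cached.exists()) {
            byte[] nativeCode = compileToNative(bytecode);
            java.nio.file.Files.write(cached.toPath(), nativeCode);
        }
        return cached; // the VM would load this and patch its offset tables
    }

    private static String digest(byte[] bytecode) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-1").digest(bytecode);
        StringBuilder sb = new StringBuilder();
        for (byte b : hash) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    private static byte[] compileToNative(byte[] bytecode) {
        // Stand-in for the actual bytecode-to-native compiler.
        return bytecode;
    }
}
[/CODE]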
But there's still a problem! Caching a complete application will invariably result in roughly twice the storage requirements of a single app, which is especially unacceptable on a constrained device. But there is a solution to this as well!
A special class could be inherited when writing 'to-be-compiled' sub-classes. For these classes, the compiler would still generate bytecode, but at execution time the VM would recognize the class as 'to-be-compiled', compile it to native code, cache that code somewhere on the filesystem, and keep it in memory for the duration of the process. The special class itself could be empty, its name being the only significant information, and so it could be safely passed over by non-compiling interpreters.
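From the developer's side it might look like this (again, just a sketch; NativeCompiled and MatrixMultiplier are names I'm inventing for illustration, not anything that exists in Dalvik):

[CODE]
// Hypothetical marker: an empty base class (it could equally be an
// interface) whose only significant information is its name. A
// compiling VM treats subclasses as 'to-be-compiled'; a plain
// interpreter just sees an ordinary, empty superclass and ignores it.
abstract class NativeCompiled {
    // intentionally empty -- the name itself is the flag
}

// Developer-side usage: only the speed-critical class opts in.
class MatrixMultiplier extends NativeCompiled {
    float[] multiply(float[] a, float[] b, int n) {
        float[] out = new float[n * n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                float sum = 0f;
                for (int k = 0; k < n; k++)
                    sum += a[i * n + k] * b[k * n + j];
                out[i * n + j] = sum;
            }
        return out;
    }
}
[/CODE]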
The use of a special inherited class is key, as it gives the developer granular control over which code is natively compiled and run, which is adequate for the speed-critical portions of code (i.e. inner loops). The interpreter flags this code for cached compilation and leaves the rest to regular vanilla interpretation. This avoids the code bloat and potential performance losses of compiling the entire process's bytecode for each instance, and focuses only on the areas the developer deems necessary to optimize. The special class (or some similar mechanism) is what the VM uses to differentiate code. It also maintains 100% backward compatibility with older bytecode that does not use the special 'compile-flag' class.
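On the VM side, the selection could then be as simple as a type check at class-load time. This sketch reuses the hypothetical NativeCompiled marker and NativeCodeCache from the snippets above; none of these names come from any real VM:

[CODE]
// Hypothetical dispatch inside the VM's class loader: classes that
// inherit the marker go through the compile-and-cache path once;
// everything else stays on the ordinary interpreter. Older bytecode
// that predates the marker never matches, so it simply keeps
// being interpreted as before.
class SelectiveLoader {
    private final NativeCodeCache cache;

    SelectiveLoader(NativeCodeCache cache) {
        this.cache = cache;
    }

    void load(Class<?> clazz, byte[] bytecode) throws Exception {
        if (NativeCompiled.class.isAssignableFrom(clazz)) {
            // flagged by the developer: compile once, cache, map into memory
            cache.getOrCompile(clazz.getName(), bytecode);
        } else {
            // vanilla path: no native image, no extra storage
            interpret(bytecode);
        }
    }

    private void interpret(byte[] bytecode) {
        // stand-in for the regular bytecode interpreter
    }
}
[/CODE]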
This scenario requires a tiny bit of extra work from the developer's perspective, but it could be made easy enough. The benefit, though, is huge: native execution speed from a universal bytecode.
It's of interest that Google's V8 JavaScript engine compiles JavaScript directly to native code, completely bypassing the need for an interpreter and increasing execution speed *significantly*. Considering that JavaScript is a dynamically typed language, natively compiling a statically typed bytecode like Dalvik's should achieve even higher levels of performance, and there's little reason why it cannot match the performance of natively compiled C.
}:^)~