"Theoretically, and therefore also in practice, interpreted code by its very definition can never be faster than native code."
That's only half the truth.

In fact, how fast a program runs often depends heavily on the data it is given at runtime.
Passing these parameters to procedures can be a heavy load. On the other hand, code can be optimized for special cases determined by the circumstances at runtime.
(For example: in some cases you know that a particular test is useless, or that a procedure will abort without producing a result.)
Branches can also be predicted.
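
To make the example in the parenthesis concrete, here is a tiny hypothetical Java sketch (class and method names are my own invention, not from any real project): the null test in describe() is exactly such a "useless test" in a run where the caller never passes null, and a JIT that has profiled this can compile the method without the test and fall back to the interpreter in the rare case the assumption is ever broken.

Code:
public class UselessTest {

    // An ahead-of-time compiler has to keep the null check forever,
    // because some caller somewhere might pass null.
    static int describe(String s) {
        if (s == null) {          // never taken in this particular run
            return -1;
        }
        return s.length();
    }

    public static void main(String[] args) {
        int total = 0;
        for (int i = 0; i < 10_000_000; i++) {
            // 's' is never null here, so HotSpot can compile describe()
            // without the branch and deoptimise if that ever turns out wrong.
            total += describe("runtime data " + (i & 7));
        }
        System.out.println(total);
    }
}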

A classical (ahead-of-time) compiler doesn't know the circumstances at runtime. It has to compile a general program that is open to all possible circumstances.
That isn't efficient.

A JIT (just-in-time compiler) compiles code just before execution and knows much more about the actual runtime conditions, so the code can be optimized much more aggressively for those specific conditions.

The result is that code compiled just in time can (depending on the circumstances) be much faster than a classically compiled program.
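
One classic illustration (a simplified sketch, not a benchmark) of an optimization a JIT can do that a classical compiler usually cannot: at compile time a virtual call could go to any implementation, but at runtime the JVM sees that only one implementation is actually loaded, so it can devirtualise and inline the call in the hot loop.

Code:
interface Op {
    int apply(int x);
}

final class Increment implements Op {
    public int apply(int x) { return x + 1; }
}

public class Devirtualise {
    public static void main(String[] args) {
        // Statically 'op' could be any Op; at runtime only Increment
        // exists, so the JIT can inline apply() into the loop body.
        Op op = new Increment();
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += op.apply(i);
        }
        System.out.println(sum);
    }
}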


AFAIK the bottleneck for speed is the throughput to the CPU.
This can be heavily optimized if some conditions or parameters are already known and don't have to be examined in the CPU again.
If there is a loop, this can produce very efficient code.
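
A rough sketch of that loop point (again just an assumed example for illustration, the property name is made up): if the flag below is set once at startup and never changes, a JIT that has profiled the loop can drop the check from the compiled loop body entirely, so the CPU only has to chew through the arithmetic.

Code:
public class LoopHoist {
    // Decided once at runtime, e.g. via -Dapp.debug=true on the command line.
    static final boolean debug = Boolean.getBoolean("app.debug");

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 20_000_000; i++) {
            // In a run where 'debug' is false the JIT can remove this branch
            // instead of re-evaluating the condition on every iteration.
            if (debug) {
                System.out.println("i = " + i);
            }
            sum += i * 3;
        }
        System.out.println(sum);
    }
}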

Modern processors (like ours) are able to process bytecode as a kind of "virtual machine language". With a special instruction set you can run your own "virtual machine language" (e.g. Java bytecode, DEX, .NET bytecode) with support from the CPU. ThumbEE and Thumb-2 are very advanced architectures for this (but they are actually not used on the N900).


Ergo: it is even possible that Java is, in some cases, faster than classically compiled code.
 
