Posts: 1,341 | Thanked: 708 times | Joined on Feb 2010
#114
Originally Posted by attila77
Zimon, theory is one thing, reality is another (otherwise we'd be using lisp or prolog ). 5-10 years after the papers you mention and a few million (billion?) $ thrown at it by Sun and Google, it's still 'not there yet'. Android giving in to the pressure and being polluted by native code tells the story pretty well. And people wanting to earn their bread and/or geek creds writing code are doing it today - we're talking about *today's* technologies, not the hypothetical performance in some unclear point in the future.

PS. As for time critical - in mobile space 'slow' translates into 'power-hungry' (because it goes against race-to-idle, etc).
By "time critical" I mainly meant "deterministic response time". There obviously are applications, device drivers, and parts of some applications (a game's 3D engine, say) where we do not want the VM or GC to suddenly start code morphing, reorganizing heap memory, or similar work; we want that part of the code to run exactly as fast in every session.
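As a minimal sketch of that non-determinism (class and method names are mine, not from any real benchmark suite): HotSpot starts out interpreting the bytecode and only compiles a method to native code once it becomes "hot", so the first calls of a session are slower and less predictable than the later ones.

```java
// JitWarmup.java - shows why per-call latency of a hot method only
// stabilizes after JIT warm-up: early runs include interpretation
// and compilation work, later runs execute compiled native code.
public class JitWarmup {
    // A small numeric kernel the JIT will eventually compile.
    static long kernel(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += (long) i * i;
        }
        return sum;
    }

    public static void main(String[] args) {
        final int iterations = 50;
        long[] nanos = new long[iterations];
        long result = 0;
        for (int run = 0; run < iterations; run++) {
            long t0 = System.nanoTime();
            result = kernel(1_000_000);
            nanos[run] = System.nanoTime() - t0;
        }
        // Typically the first few runs are the slowest; the exact
        // timings vary from session to session, which is the point.
        System.out.println("result=" + result);
        System.out.println("first=" + nanos[0] + "ns last=" + nanos[iterations - 1] + "ns");
    }
}
```

Running it a few times usually shows the first iteration taking several times longer than the last, and the early numbers differing between sessions.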

And it's true: theory is one thing and reality is another.
One could of course implement all the run-time profiling, heap memory optimization, and code morphing while writing in Assembly or C/C++; but that is not practical, because one would then end up writing a kind of VM and GC anyway, and would, for example, have to avoid direct pointers in the code.

Or, yes, one can trace a bytecode application's whole lifetime in a JIT VM, output Assembly source code from that, make even one tiny optimization by hand, and then claim that Assembly and fully compiled code are always faster than interpreted code. But for many applications, that fully compiled program would in fact again include some kind of VM and GC.

Or, in theory, someone could fix the heap fragmentation problem for C++ programs, but once again they would have to write some kind of GC and use code morphing, or avoid using direct pointers.
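A rough illustration of that last point (the class and method names are mine, this is not any real allocator): if objects are reached through a handle table instead of direct pointers, a compactor can slide live objects together and just patch the table — which is essentially what a compacting GC does, and exactly what raw C++ pointers rule out.

```java
// HandleHeap.java - toy arena where "objects" (int payloads) are
// addressed through a handle table. Because callers hold handles,
// not raw slots, compact() can defragment and keep handles valid.
public class HandleHeap {
    private final int[] arena;
    private final int[] handleToSlot;   // handle -> current slot in arena
    private final boolean[] live;
    private int nextSlot = 0, nextHandle = 0;

    public HandleHeap(int capacity) {
        arena = new int[capacity];
        handleToSlot = new int[capacity];
        live = new boolean[capacity];
    }

    public int alloc(int value) {
        arena[nextSlot] = value;
        handleToSlot[nextHandle] = nextSlot;
        live[nextHandle] = true;
        nextSlot++;
        return nextHandle++;
    }

    public void free(int handle) { live[handle] = false; }

    public int get(int handle) { return arena[handleToSlot[handle]]; }

    // Slide live objects to the front of the arena, updating only
    // the handle table; existing handles stay valid afterwards.
    public void compact() {
        int dst = 0;
        for (int h = 0; h < nextHandle; h++) {
            if (live[h]) {
                arena[dst] = arena[handleToSlot[h]];
                handleToSlot[h] = dst++;
            }
        }
        nextSlot = dst;
    }

    public int used() { return nextSlot; }

    public static void main(String[] args) {
        HandleHeap heap = new HandleHeap(8);
        int a = heap.alloc(10), b = heap.alloc(20), c = heap.alloc(30);
        heap.free(b);
        heap.compact();
        // prints a=10 c=30 used=2
        System.out.println("a=" + heap.get(a) + " c=" + heap.get(c) + " used=" + heap.used());
    }
}
```

The cost of the extra indirection on every access is precisely the kind of overhead a C++ programmer would refuse to pay, which is why plain compiled code stays fragmented instead.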

I do not know which modern VM optimization techniques are in use in Google's Dalvik VM, but I would guess they have not perfected them yet, because there have been other issues to tackle first, such as the changes they made to the Java VM and working around Java license restrictions on mobile platforms.

In desktop Java some of the modern VM optimizations are already implemented, but not all; there is still room for improvement.

Another good link explaining why interpreted code can be faster than fully compiled code, besides the two I already mentioned (which may be too technical and theoretical), is this:
http://scribblethink.org/Computer/javaCbenchmark.html
(It is somewhat old, but the facts haven't changed since then.)

Here is a relatively recent benchmark comparing Java and C++ — it's a tie:
http://blog.cfelde.com/2010/06/c-vs-java-performance/


The Java VM is still getting better and faster, while C++ optimizations are already largely exhausted; there is not much left to do there unless CPU manufacturers add Transmeta-style features, where the MMU is made much smarter so that, for example, heap defragmentation and reorganizing objects for the L2 cache are done inside the CPU.
 

The Following User Says Thank You to zimon For This Useful Post: