Posts: 1,746 | Thanked: 2,100 times | Joined on Sep 2009
#11
Originally Posted by tso View Post
on the topic of java, ARM cores have had a java accelerator for ages, thanks to j2me being "popular" on featurephones (with opera mini being the best known example).
They do, and the chip in the N900 includes it, IIRC. However, the output of the Android compiler targets Dalvik, which isn't compatible with the standard Java runtimes or with most hardware Java acceleration.
 
tso's Avatar
Posts: 4,783 | Thanked: 1,253 times | Joined on Aug 2007 @ norway
#12
oh, nuts. More crazy from andy rubin i guess.
http://en.wikipedia.org/wiki/Andy_Rubin

the guy seems to run the android division as some kind of mini-apple, with himself as jobs, by the looks of it.
__________________
Be warned, posts are often line of thoughts at highway speeds...
 
Posts: 12 | Thanked: 5 times | Joined on Apr 2010
#13
Originally Posted by zimon View Post
It is a modern (compiler science) way to have interpreted code in applications instead of fully compiled one.
Theoretically (and I believe in practise also in the future) interpreted code can be faster, more power efficient and less buggy than fully compiled one. There needs to be only one VM in RSS memory and all applications can use its codebase.
Device drivers and kernel is a different thing, but for applications it makes sense.
Speed, power efficiency and bugs have nothing to do with why Java was designed this way. The only reason Java code is not fully compiled, and runs its bytecode on a VM instead, is portability. In other words: write the code once and use it everywhere, no matter what hardware architecture is under the hood (which is exactly what the VM provides).

Cheers.
 
Posts: 31 | Thanked: 35 times | Joined on Jun 2010
#14
I am sorry, I have to correct this.

Originally Posted by zimon View Post
It is a modern (compiler science) way to have interpreted code in applications instead of fully compiled one.
A virtual machine simply emulates another machine architecture in software.

It's a computer science way of implementing an abstract hardware platform that insulates us from physical hardware issues. The VM machine code, called "bytecode", is also compiled code; we call it "interpreted" because it's run not by a real processor but by a software emulator, which may operate at a higher level.
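To make the idea concrete, here is a toy stack-machine interpreter, a sketch only: the opcodes are invented for this example and have nothing to do with real JVM or Dalvik bytecode. The "bytecode" is just an int array, and a software loop plays the role of the processor:

```java
// Toy stack-based VM: a software loop dispatches on each opcode
// instead of a real CPU fetching instructions.
public class ToyVM {
    // Hypothetical opcodes for this sketch, not real JVM bytecode.
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0;                 // stack pointer
        int pc = 0;                 // program counter
        while (true) {
            switch (code[pc++]) {
                case PUSH: stack[sp++] = code[pc++]; break;
                case ADD:  stack[sp - 2] += stack[sp - 1]; sp--; break;
                case MUL:  stack[sp - 2] *= stack[sp - 1]; sp--; break;
                case HALT: return stack[sp - 1];
            }
        }
    }

    public static void main(String[] args) {
        // (2 + 3) * 4
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // prints 20
    }
}
```

The same int array would produce the same result on any machine that can run the interpreter, which is the whole portability argument in miniature.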

Originally Posted by zimon View Post
Theoretically (and I believe in practise also in the future) interpreted code can be faster, more power efficient and less buggy than fully compiled one.
Theoretically, and therefore also in practice, interpreted code by its very definition can never be faster than native code. There are other advantages to using a VM but this ain't it.

As a VM emulates another architecture in software, the best it can ever hope to achieve is speed just below that of native code. A machine that executes code that executes code is slower than a machine that just executes code, period. The fastest VMs aren't really VMs per se: they don't interpret the bytecode but convert it to native code (JIT) and run it on the real CPU.
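A minimal sketch of that "convert instead of interpret" idea (invented opcodes again, nothing like a real JIT's machine-code generation): translate the bytecode once into host code, here approximated with Java lambdas, so the per-opcode dispatch loop disappears from the hot path:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.IntSupplier;

public class ToyJit {
    // Hypothetical opcodes for this sketch, not real JVM bytecode.
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    // "Compile" the bytecode once: fold the stack program into a single
    // supplier so no per-opcode dispatch happens at run time.
    static IntSupplier compile(int[] code) {
        Deque<IntSupplier> stack = new ArrayDeque<>();
        int pc = 0;
        while (code[pc] != HALT) {
            switch (code[pc++]) {
                case PUSH: { int v = code[pc++]; stack.push(() -> v); break; }
                case ADD:  { IntSupplier b = stack.pop(), a = stack.pop();
                             stack.push(() -> a.getAsInt() + b.getAsInt()); break; }
                case MUL:  { IntSupplier b = stack.pop(), a = stack.pop();
                             stack.push(() -> a.getAsInt() * b.getAsInt()); break; }
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // (2 + 3) * 4, translated once, then executed directly
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        IntSupplier compiled = compile(program);
        System.out.println(compiled.getAsInt()); // prints 20
    }
}
```

A real JIT emits actual machine instructions rather than lambdas, but the shape is the same: pay the translation cost once, then run without the interpreter loop.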

It cannot be more power efficient either, for emulation overhead means extra power usage.

It won't be less "buggy" as far as functionality goes. It usually does have much fewer security holes, because the VM is an effective sandbox and I suspect this is one of the reasons it was chosen here (other being hardware abstraction).

Originally Posted by zimon View Post
There needs to be only one VM in RSS memory and all applications can use its codebase.
Remember, nothing extra needs to be in memory to run native code; the CPU and hardware are already there.

A VM is easier to work on because, at least theoretically, you develop for a fixed generic platform and don't have to deal with hardware variations. In practice this write-once, run anywhere concept has failed miserably IMHO (witness all the bastardized mobile Java implementations and how we can't run Android apps on a standard, fully implemented JRE).
 

The Following 2 Users Say Thank You to wotevah For This Useful Post:
Posts: 323 | Thanked: 116 times | Joined on Jul 2010
#16
Originally Posted by wotevah View Post
Theoretically, and therefore also in practice, interpreted code by its very definition can never be faster than native code.
That's only half the truth.

In fact, how a program runs sometimes depends heavily on the data given at runtime.
Passing these parameters to procedures can be a heavy load. On the other hand, code can be optimized for special cases determined by circumstances only known at runtime.
(For example: in some cases you know that one test is useless, or that one procedure will abort without a result.)
Branches can also be predicted.

A classical compiler doesn't know the circumstances at runtime. It has to compile a general program that stays open to all possible circumstances.
That isn't efficient.

The JIT (just-in-time compiler) compiles code just before execution and therefore knows much more about the specific runtime conditions, so the code can be optimized much further for them.

The result is that code compiled just in time can (depending on the circumstances) be much faster than a classically compiled program.


AFAIK the speed bottleneck is the instruction throughput to the CPU.
It can be heavily optimized if some conditions or parameters are already known and don't have to be examined in the CPU again.
If there is a loop, this can produce very efficient code.
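The specialization argument above can be sketched in plain Java (illustrative only; this is not what HotSpot or Dalvik literally do internally): once a value is known at run time, a specialized version of a routine can be built that skips tests and substitutes cheaper operations before the hot loop starts:

```java
import java.util.function.IntUnaryOperator;

public class Specialize {
    // Generic version: must handle any multiplier on every call.
    static int mulGeneric(int x, int k) {
        return x * k;
    }

    // Runtime specialization: k is now a known constant, so a cheaper
    // strategy can be chosen once, outside the hot loop.
    static IntUnaryOperator specializeMul(int k) {
        if (k == 0) return x -> 0;                   // fold to a constant
        if ((k & (k - 1)) == 0) {                    // power of two?
            int shift = Integer.numberOfTrailingZeros(k);
            return x -> x << shift;                  // strength reduction
        }
        return x -> x * k;                           // general fallback
    }

    public static void main(String[] args) {
        IntUnaryOperator mul8 = specializeMul(8);    // decided once at run time
        System.out.println(mul8.applyAsInt(5));      // prints 40
    }
}
```

A classical ahead-of-time compiler seeing only `mulGeneric` cannot make this choice, because k isn't known until the program is running; a JIT observing the actual value can.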

Modern processors (like ours) are able to process bytecode as a kind of "virtual machine language". With a special instruction set you can implement your own "virtual machine language" inside the CPU (e.g. Java bytecode, DEX, .NET bytecode). ThumbEE and Thumb-2 are very advanced architectures (but not actually used on the N900).


Ergo: it is even possible that Java is, in some cases, faster than classically compiled code.
 

The Following User Says Thank You to gerdich For This Useful Post:
Posts: 13 | Thanked: 6 times | Joined on Nov 2009
#17
the biggest win for meego would be to have Qt support apps written for android or iOS. i.e. if Qt can use the code written for iOS or Android to create apps that can run on meego. if nokia can get this done there is no stopping meego..
 
javispedro's Avatar
Posts: 2,355 | Thanked: 5,249 times | Joined on Jan 2009 @ Barcelona
#18
Originally Posted by wmarone View Post
They do, and the chip in the N900 includes it, IIRC. However, the output of the Android compiler targets Dalvik, which isn't compatible with the standard Java runtimes or most hardware Java accelleration.
No, the Cortex-A8 does not have Jazelle DBX. It only has Jazelle RCT, which is language-independent as long as the code is JITed (because it defines its own machine code, which they say is broken on the OMAP3, btw).

On the other hand, the A9 brings Jazelle DBX support back, and I have to wonder why..

Last edited by javispedro; 2010-09-09 at 18:50.
 