#302 | 2009-11-25, 12:34 | Posts: 36 | Thanked: 13 times | Joined on Nov 2009
That is absolutely not true. Windows XP runs perfectly on a 366 MHz Celeron. There is no jittering and scrolling is smooth; I use one as a file server (yes, I am cheap).

#304 | 2009-11-25, 12:45 | Posts: 2,427 | Thanked: 2,986 times | Joined on Dec 2007
But that's exactly what the N900 browser does too. All the fluid animations, scrolling, etc. are drawn from a pixmap buffer, while the actual page rendering happens in the background.
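
To make that concrete, here is a toy sketch of the buffer-then-repaint pattern in Python. All the names are made up for illustration; this is not the N900 browser's actual code:

[code]
import threading

class BufferedView:
    def __init__(self):
        self.pixmap = None            # last fully rendered frame

    def render_page(self, offset):
        # Slow, accurate layout/paint; stands in for real rendering work.
        self.pixmap = f"<page painted at offset {offset}>"

    def blit(self, pixmap, offset):
        # Cheap copy of an already-rendered frame to the screen.
        print("blit", pixmap, "shifted to", offset)

    def on_scroll(self, offset):
        # Fast path: reuse the stale pixmap so the UI stays fluid...
        if self.pixmap is not None:
            self.blit(self.pixmap, offset)
        # ...and kick off the accurate render off the UI thread.
        threading.Thread(target=self.render_page, args=(offset,)).start()

view = BufferedView()
view.render_page(0)
view.on_scroll(120)
[/code]

The point of the split is that the blit runs in constant time regardless of page complexity, so scrolling stays smooth even while the real render lags behind.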

#308 | 2009-11-25, 14:24 | Posts: 54 | Thanked: 9 times | Joined on Nov 2009 | @ London
Actually, one more point on this: the 366 Celeron in question is obviously x86, and x86 is a CISC (Complex Instruction Set Computer) design, whereas the ARM chips are RISC (Reduced Instruction Set Computer). This means the x86 family has more at its disposal to get a task done, and therefore will be quicker. You can't expect the same performance from a RISC architecture, as in some cases it will have to execute several instructions to achieve a computation that an x86 chip could do with one, meaning more clock cycles get used and things run slower.
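
To put rough numbers on that argument: execution time comes down to instruction count times cycles per instruction, divided by clock rate. A toy calculation (every figure below is an invented assumption, not a measurement of any real Celeron or ARM part):

[code]
# Classic identity: time = instructions * CPI / clock_rate.

def exec_time_s(instructions, cpi, clock_hz):
    return instructions * cpi / clock_hz

# Say a task compiles to 1.0M x86 instructions but 1.5M ARM instructions,
# because RISC splits a single memory-operand op into load/add/store steps.
# Both clocks held at 366 MHz to isolate the instruction-count effect.
x86_s = exec_time_s(1_000_000, cpi=1.4, clock_hz=366e6)
arm_s = exec_time_s(1_500_000, cpi=1.1, clock_hz=366e6)
print(f"x86: {x86_s * 1e3:.2f} ms   ARM: {arm_s * 1e3:.2f} ms")
[/code]

Of course, real chips also differ in CPI and clock rate, not just instruction count, so the balance between the three terms is what actually decides which one wins.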

#309 | 2009-11-25, 15:21 | Posts: 44 | Thanked: 50 times | Joined on Nov 2009
I start holiday-and-parent mode tomorrow, so the earliest I'll probably get back to it is Monday. I used it for a few hours last night and was very happy with it. It's only about 15 lines of code, mostly inserted JavaScript. There's plenty more to do, of course. For instance, there's no instant visual cue that it's happening; one solution is what Apple did: an opaque animated overlay that eventually becomes transparent. Anyway, I'm motivated, and maybe by the time Bundyo sorts out all his Fremantle/Diablo stuff, I'll have something more robust to insert into Tear and take it for a spin.
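
The overlay cue amounts to a simple alpha ramp. A hypothetical sketch of just that piece (names and timings invented, nothing here is Apple's or Tear's actual code):

[code]
import time

def overlay_alpha(done_at, fade_s=0.25):
    # Ramp from opaque (1.0) to transparent (0.0) over fade_s seconds
    # after the real render lands.
    return max(0.0, 1.0 - (time.monotonic() - done_at) / fade_s)

render_done = time.monotonic()
while (alpha := overlay_alpha(render_done)) > 0.0:
    print(f"composite overlay at alpha {alpha:.2f}")  # a real UI would blend here
    time.sleep(0.05)
[/code]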

#310 | 2009-11-25, 15:40 | Posts: 1,179 | Thanked: 770 times | Joined on Nov 2009
The "more details" you refer to is approximately the number of pixels at the edges of each character - i.e. the edge length, which is roughly the complexity of geometry calculations if the page was rendered as a big vector scene. The total increases with zooming out, not decreases, because although the characters need less detail, there are many more characters; the latter dominates. (O(n) vs. O(n^2) thing with scale).
But most renderers treat small characters differently, as prerendered greyscale bitmaps in a font cache, and draw them as little images, which is much faster than drawing the vector shape each time. Large characters, by contrast, are usually drawn as vector shapes, because they'd take too much memory to cache as images, and it's probably quicker to draw them as vectors anyway, since you can take advantage of solid colour fills.
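
As a sketch, that strategy is something like the following. The threshold, cache, and drawing calls are all hypothetical stand-ins, not any real renderer's API:

[code]
CACHE_THRESHOLD_PX = 48   # above this, cached bitmaps cost too much memory
glyph_cache = {}          # (char, size_px) -> greyscale bitmap

def draw_glyph(char, size_px):
    if size_px <= CACHE_THRESHOLD_PX:
        key = (char, size_px)
        if key not in glyph_cache:
            glyph_cache[key] = rasterize(char, size_px)  # render once...
        blit(glyph_cache[key])                           # ...reuse as an image
    else:
        fill_vector_outline(char, size_px)  # big glyphs: solid-colour vector fill

# Stubs so the sketch is self-contained:
def rasterize(char, size_px): return f"<bitmap {char}@{size_px}px>"
def blit(bitmap): print("blit", bitmap)
def fill_vector_outline(char, size_px): print("vector fill", char, size_px)

draw_glyph("a", 12)   # cached bitmap path
draw_glyph("a", 12)   # cache hit, no re-rasterize
draw_glyph("a", 96)   # vector path
[/code]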
That could be enough to explain the different behaviour. But I'm not convinced: the time it took to draw (a quarter of a second or so) was vastly longer than the time it normally takes to draw that much area, whether as bitmaps, full-colour images, or vector shapes for large characters.