Originally Posted by magick777:
Disclaimer: these first two tests are actually performed on 2 different N900s (though both are running exactly the same software and kernel). I might swap the cards over and see how reproducible the results are.
Sandisk results: http://sebsauvage.net/paste/?aa5675a...4wIssSIy+5cO0=

This raises concerns, as the same test completes almost 20% faster on my backup phone yet reports lower throughput and higher CPU usage. I'm inclined to rule out a fundamentally lower CPU speed, because the create/stat/unlink tests are doing proportionately more work in less time; perhaps there was competition for CPU cycles or I/O on the primary phone? Something ain't right here: variances of up to 20% on the same SD card don't give me confidence that I've properly controlled the test conditions. However, there shouldn't be much difference in the software on the two phones, as the backup phone was cloned from the primary a few hours ago.

Fine, then, let's see if this performance difference between my two phones is reproducible...

Toshiba results: http://sebsauvage.net/paste/?b2e0b36...uUpZBIDHPZE98=

Right, so the (presumed 20% slower) primary phone tests the Toshiba card 25% faster, with otherwise broadly similar results; in this case, though, it's the primary phone that's using more CPU on the file creation tests and doing more work as a result. This would seem to suggest that there is no massive fundamental performance difference between the two phones, and that the anomalies in my Sandisk results must be down to not adequately controlling some other I/O or CPU load while testing. In other words, this testing is not much use without some kind of control for environmental factors.
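Just to illustrate what I mean by controlling for environmental factors, here's a rough sketch (my own illustration, not something I used for the numbers above) of how one could quantify background activity around a run: compare the system-wide non-idle CPU time over the benchmark window against the CPU time the benchmark itself consumed; whatever is left over went to other processes. The bonnie invocation in the comment is just a placeholder.

[code]
#!/usr/bin/env python3
# Sketch: estimate how much CPU went to processes other than the benchmark.
# Usage example (placeholder arguments): ./cpucheck.py bonnie -d /media/mmc1 -s 512

import os
import resource
import subprocess
import sys

CLK_TCK = os.sysconf("SC_CLK_TCK")  # jiffies per second, usually 100

def system_cpu_seconds():
    """Non-idle CPU seconds from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + (fields[4] if len(fields) > 4 else 0)  # idle + iowait
    return (sum(fields) - idle) / float(CLK_TCK)

if len(sys.argv) < 2:
    sys.exit("usage: cpucheck.py <benchmark command...>")

before = system_cpu_seconds()
subprocess.call(sys.argv[1:])        # run the benchmark and wait for it
after = system_cpu_seconds()

# CPU time actually charged to the benchmark (our waited-for child)
child = resource.getrusage(resource.RUSAGE_CHILDREN)
benchmark_cpu = child.ru_utime + child.ru_stime
background_cpu = (after - before) - benchmark_cpu

print("benchmark CPU: %.1fs, other processes: %.1fs" % (benchmark_cpu, background_cpu))
[/code]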

My prime suspects are either modest or trackerd running in parallel, as I can't think of anything else that would be consuming I/O or CPU on both phones. I'm not sure whether to look at running bonnie with real-time priority, a locked CPU speed or multiple test runs; that might yield cleaner numbers, but it's not representative of the conditions under which cards are actually used.
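For the sake of argument, a controlled run might look something like the sketch below: lock the CPU governor, silence the indexer, then run the benchmark several times at real-time priority. The cpufreq path and the "performance" governor are the standard Linux ones, "killall trackerd" is just one crude way of stopping the indexer, and the bonnie arguments are placeholders; it would need to run as root, and I haven't used it for any of the numbers above.

[code]
#!/usr/bin/env python3
# Sketch: run a benchmark under controlled conditions (root required).

import subprocess
import sys

GOVERNOR = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
RUNS = 5

def read_governor():
    with open(GOVERNOR) as f:
        return f.read().strip()

def write_governor(name):
    with open(GOVERNOR, "w") as f:
        f.write(name)

# Placeholder benchmark command; pass your own on the command line.
benchmark = sys.argv[1:] or ["bonnie", "-d", "/media/mmc1", "-s", "512"]

saved = read_governor()
try:
    write_governor("performance")             # pin the CPU clock
    subprocess.call(["killall", "trackerd"])  # crude: stop the indexer if running
    for i in range(RUNS):
        # chrt -f 50: run the benchmark under SCHED_FIFO at priority 50
        subprocess.call(["chrt", "-f", "50"] + benchmark)
finally:
    write_governor(saved)                     # restore the original governor
[/code]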

A preferred option - if enough users submit data - is control by numbers, i.e. log results against a given card CSD and compare and contrast accordingly, optionally discarding the best and worst 10% of results and averaging the rest. However, that takes a long time and a lot of benchmarks before it yields any useful data, and I suspect that if the will were there, it would already have happened.
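To show the shape of the calculation, "control by numbers" could be as little as a trimmed mean per card CSD, something like the sketch below. The CSD labels and throughput figures are made up purely for illustration, as is the input format.

[code]
#!/usr/bin/env python3
# Sketch: trimmed mean of submitted throughput results, grouped by card CSD.

from collections import defaultdict

def trimmed_mean(values, trim=0.10):
    """Average after discarding the top and bottom `trim` fraction of results."""
    values = sorted(values)
    k = int(len(values) * trim)
    kept = values[k:len(values) - k] if len(values) > 2 * k else values
    return sum(kept) / float(len(kept))

# (csd, write_kB_per_s) pairs as users might submit them.
# "SANDISK_CSD" / "TOSHIBA_CSD" stand in for real 128-bit CSD values;
# the numbers are invented for the example.
submissions = [
    ("SANDISK_CSD", 4870),
    ("SANDISK_CSD", 5120),
    ("SANDISK_CSD", 3990),
    ("TOSHIBA_CSD", 6210),
    ("TOSHIBA_CSD", 6005),
]

by_card = defaultdict(list)
for csd, kbps in submissions:
    by_card[csd].append(kbps)

for csd, results in by_card.items():
    print("%s: %d runs, trimmed mean %.0f kB/s"
          % (csd, len(results), trimmed_mean(results)))
[/code]

Thoughts?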
 
