Posts: 167 | Thanked: 204 times | Joined on Jul 2010
#1
With a bunch of new 64GB cards on the market, it would be nice to share some comparable benchmarks on how different microSD cards actually perform in the N900. This thread seeks first to define what is a useful, comparable benchmark, then to propose a standardised, non-destructive benchmarking method using tools available on the N900, and finally to ask users to contribute some benchmarks.

Criteria for simple but useful benchmarking

We seek to test the performance of a given microSD card under common and comparable conditions, which should also be non-destructive and readily accessible to a majority of users. That means no repartitioning, so for the purposes of this comparison we'll have to be content to benchmark the performance of a FAT32 file system on our microSD. This can still yield good comparative data, if done under controlled conditions, and maybe a separate thread is in order for benchmarking non-standard configurations?

Testing should take place under Maemo 5 on the N900 itself, without introducing USB, Ethernet, WiFi or any other confounding factors such as a host computer or operating system. This implies using one of the standard filesystem benchmarking tools available for the N900, probably either bonnie++ or iozone. I'm playing with both at the moment and I would welcome informed argument in favour of either one.

But, so far, it looks like we're talking about performing a bonnie++ benchmark under at least vaguely controlled conditions, then combining this with sufficient information about the microSD card being tested as to form a credible, comparable report. Opinions invited as to what information you'd want included in such a report for it to be "useful".

Proposed testing & reporting methods

Merely running bonnie++ against a FAT32 filesystem on a microSD card in an N900 should yield some useful data if enough people oblige; however, there's no control for variations in CPU speed or filesystem fragmentation. What non-invasive actions should we sensibly take to control our environment before running the benchmark? In particular, should we lock the CPU speed (to 600MHz) when we detect kernel-power, or should we just assume that the users we care about will (mostly) be running 900MHz + SmartReflex?
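For what it's worth, a minimal sketch of the CPU-locking idea, assuming kernel-power's standard cpufreq sysfs layout; lock_cpu/unlock_cpu are hypothetical helper names, and both need root:

```shell
# Sketch only: pin the CPU to a fixed 600MHz for the duration of a
# benchmark run, then restore normal scaling afterwards.
CPUFREQ=/sys/devices/system/cpu/cpu0/cpufreq

lock_cpu() {
    # the userspace governor lets us set an exact frequency;
    # scaling_setspeed takes a value in kHz
    echo userspace > "$CPUFREQ/scaling_governor"
    echo 600000 > "$CPUFREQ/scaling_setspeed"
}

unlock_cpu() {
    # restore the default on-demand scaling
    echo ondemand > "$CPUFREQ/scaling_governor"
}

# usage: lock_cpu; <run the benchmark>; unlock_cpu
```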

Also, the resultant report is only useful if it contains enough information to have some faith in the identification and the authenticity of the microSD card and to explain any anomalies. What information should we collect and report along with the benchmark?

I'm not going to post up a test script or benchmark output just yet, but I'd welcome input as to what to test and how to test it with a view to throwing together a simple script that others could use against different microSD cards to produce useful results. My thinking at the moment is to base the testing on bonnie++ and grab the SD card information from lshal; what else do we need?
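On the identification side, one option besides parsing lshal is to read the card's registers straight from sysfs. A sketch, assuming the standard Linux MMC layer attribute names (the exact mmcN:XXXX directory name varies per card):

```shell
# Sketch: dump the card's identity (CID/CSD registers and friends)
# from sysfs, for inclusion in a benchmark report.
card_info() {
    for card in /sys/class/mmc_host/mmc*/mmc*:*; do
        [ -d "$card" ] || continue
        echo "card: $card"
        for attr in name manfid oemid date serial cid csd; do
            [ -r "$card/$attr" ] && echo "  $attr: $(cat "$card/$attr")"
        done
    done
    return 0
}
# usage: card_info >> benchmark-report.txt
```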

When I've thrown a provisional script together I'll test my current Sandisk 16GB class 2 as a standard against which to compare...
 

The Following 4 Users Say Thank You to magick777 For This Useful Post:
Posts: 5,028 | Thanked: 8,613 times | Joined on Mar 2011
#2
OK, it's a useless bump, but I feel it would be a great shame to let such a well-thought-out thread end up buried somewhere and forgotten.

I totally agree with the idea of establishing simple yet standardized criteria for benchmarking (micro)SD cards - just like for batteries in the battery thread. Especially since manufacturers' data are mostly useless, and raw sequential speeds are irrelevant when running an OS from the card or using it as swap.

BTW, such benchmarks could be useful not only for us - basically, the same criteria apply to anything where random access speed matters much more than raw sequential measurements.
---

Gurus, could you please describe a few basic ways to measure the most important metrics (for given tasks), and list them?

/Estel
__________________
N900's aluminum backcover / body replacement
-
N900's HDMI-Out
-
Camera cover MOD
-
Measure battery's real capacity on-device
-
TrueCrypt 7.1 | ereswap | bnf
-
Hardware's mods research is costly. To support my work, please consider donating. Thank You!
 

The Following User Says Thank You to Estel For This Useful Post:
Posts: 1,141 | Thanked: 781 times | Joined on Dec 2009 @ Magical Unicorn Land
#3
It is destructive, but flashbench is designed specifically for benchmarking flash-based media (SD cards, USB flash drives) and discovering ideal block size, alignment, etc.

The source can be fetched from:

git://git.linaro.org/people/arnd/flashbench.git

There is a mailing list where people post their test results.
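A sketch of how it's typically fetched and used, wrapped in a function so nothing runs by accident. The -a alignment test only reads from the device; it's the open-au write tests that make flashbench destructive on a card holding data. /dev/mmcblk1 is assumed to be the microSD here.

```shell
# Sketch: build flashbench and probe a card's erase-block geometry.
fetch_and_probe() {
    git clone git://git.linaro.org/people/arnd/flashbench.git
    cd flashbench && make
    # time reads at power-of-two boundaries to guess the erase block size
    ./flashbench -a /dev/mmcblk1 --blocksize=1024
}
```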
 

The Following 3 Users Say Thank You to stlpaul For This Useful Post:
Posts: 5,028 | Thanked: 8,613 times | Joined on Mar 2011
#4
What do you mean by "destructive"? AFAIK, the only way to check real R/W speed (be it sequential or random) is to actually perform a test, which of course uses up some write cycles on the media (like any write operation), but I wouldn't call that destructive. What is destructive is - probably - something like a "flash killer" (a tool that discovers the total number of write cycles by actually writing to the card until it dies).

It would be really great to have an on-device tool for testing purposes.

/Estel
 
Posts: 167 | Thanked: 204 times | Joined on Jul 2010
#5
Originally Posted by Estel View Post
I totally agree with the idea of establishing simple yet standardized criteria for benchmarking (micro)SD cards - just like for batteries in the battery thread. Especially since manufacturers' data are mostly useless, and raw sequential speeds are irrelevant when running an OS from the card or using it as swap. BTW, such benchmarks could be useful not only for us - basically, the same criteria apply to anything where random access speed matters much more than raw sequential measurements.
Well, from a quick look at what's available, I'm fairly happy to use bonnie++ as the benchmarking tool; as well as measuring raw throughput for both per-char and block reads and writes, it measures random seeks within a large file (swap-like usage). It can also (optionally) run a bunch of tests for file create/stat/unlink using multiples of 1024 files of user-definable size. More info on that in the bonnie++ readme, but in short, we can use it to perform a set of benchmarks that should be useful whether your interest in a given card is movies, operating system, swap or whatever. What I'm not sure about is the precise parameters with which to test in order for the results to be widely useful.

So, I'm going to start by throwing up benchmarks of a couple of microSD cards that I've owned for years, taken on my N900 (kp50, 250-900MHz) using the command:

# time bonnie -r 256 -s 512 -n 8:65536:4096 -u root -d /media/mmc1/

and please, pick holes in my results (posted using ZeroBin as I can't readily format space-separated plain text on the forum)

Toshiba 16GB Class 2, 11/2009: http://sebsauvage.net/paste/?58bbcb2...SmlXA7LscVaEo=
Sandisk 16GB Class 2, 12/2008: http://sebsauvage.net/paste/?cd8a7fc...uPTGxUlKtpJAc=

Disclaimer: these first two tests are actually performed on 2 different N900s (though both are running exactly the same software and kernel). I might swap the cards over and see how reproducible the results are.

The results are not surprising; previous testing of the Sandisk card had suggested that it performed as a class 4 but not quite as a class 6, so a block write speed of 4720KB/sec is believable. It was only marketed as a class 2 and it's been in active use for over 3 years, so I can't complain. The Toshiba card was known to be slower (it usually lives in a Nintendo DS) and its block write speed, one third that of the Sandisk, reflects how the card performs in real life. Fine for loading up once with stuff that doesn't change often, but no good for dumping videos on.

What I do find interesting is that the Toshiba card takes twice the wall clock time to complete the same test, being much slower on file creation whether sequential or random. It's such a difference that I also stopped to make sure my FAT filesystems are both using the same block sizes - which they are. This would seem to make it pretty useless for operating system use, again, not a surprise and not contradicting what I expected.

Feedback invited, also if anyone wants to run similar tests on their own SD cards by way of a sanity check...
 

The Following 3 Users Say Thank You to magick777 For This Useful Post:
Posts: 167 | Thanked: 204 times | Joined on Jul 2010
#6
Originally Posted by magick777 View Post
Disclaimer: these first two tests are actually performed on 2 different N900s (though both are running exactly the same software and kernel). I might swap the cards over and see how reproducible the results are.
Sandisk results: http://sebsauvage.net/paste/?aa5675a...4wIssSIy+5cO0=

This raises concerns, as the same test completes almost 20% faster on my backup phone, yet with lower throughput and higher CPU usage. I'm inclined to rule out a fundamentally lower CPU speed because the create/stat/unlink tests are doing proportionately more work in less time; so perhaps there was competition for CPU cycles or I/O on the primary phone? Something ain't right here: variances of up to 20% on the same SD card don't give me confidence that I've properly controlled the test conditions. However, there shouldn't be much difference in the software on the two phones, as the backup phone was cloned from the primary a few hours ago.

Fine, then, let's see if this performance difference between my two phones is reproducible...

Toshiba results: http://sebsauvage.net/paste/?b2e0b36...uUpZBIDHPZE98=

Right, so the (presumed 20% slower) primary phone tests the Toshiba card 25% faster and with some broadly similar results; in this case, though, it's the primary phone that's using more CPU on the file creation tests and doing more work in consequence. This would seem to suggest that there is no massive fundamental performance difference between the two phones, and the anomalies in my results on the Sandisk card must be down to not adequately controlling some other I/O or CPU load while testing. In other words, this testing is not much use without some kind of control for environmental factors.

My prime suspects are either modest or trackerd running in parallel, as I can't think of anything else that would be consuming I/O or CPU on both phones. Not sure whether to look at running bonnie with real-time priority, locked CPU speed or multiple test runs; this might yield better testing but is not representative of the conditions under which cards are actually used.
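A sketch of quieting those two suspects before a run. trackerd and modest are the Maemo 5 binary names; anything killed here can be relaunched from the menu or comes back on reboot.

```shell
# Sketch: stop the background indexer and e-mail client, flush
# pending writes, and give the card a moment before benchmarking.
quiesce() {
    killall trackerd modest 2>/dev/null
    sync          # flush outstanding writes
    sleep 5       # let things settle before the timed run
    return 0
}
# usage: quiesce; time bonnie -r 256 -s 512 -n 8:65536:4096 -u root -d /media/mmc1/
```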

A preferred option - if enough users will submit data - is control by numbers, i.e. log results against a given card CSD and compare and contrast accordingly, optionally including the best and worst 10% of results and averaging the rest. However, that takes a long time and a lot of benchmarks before it yields any useful data, and I suspect that if the will were there, it would have already happened. Thoughts?
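The control-by-numbers aggregation itself is easy to sketch. Given one block-write figure per line for a given card CSD, drop the best and worst 10% and average the rest (the 10% cut-off and the one-figure-per-line input format are my assumptions):

```shell
# Trim the best and worst 10% of submitted results and average the
# remainder.  Input: one throughput figure (KB/s) per line on stdin.
trimmed_mean() {
    sort -n | awk '
        { v[NR] = $1 }
        END {
            drop = int(NR / 10)                 # 10% off each end
            n = 0; sum = 0
            for (i = drop + 1; i <= NR - drop; i++) { sum += v[i]; n++ }
            if (n) printf "%.1f\n", sum / n
        }'
}
# e.g.  trimmed_mean < results-for-one-csd.txt
```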
 

The Following User Says Thank You to magick777 For This Useful Post:
Posts: 5,028 | Thanked: 8,613 times | Joined on Mar 2011
#7
Maybe just test every card, let's say, 5 times in a row, and post min/average/max results?

BTW, it may be a good idea to do it with WiFi/modem/Bluetooth etc. disabled. And on the latest kernel-power, of course.
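The five-runs idea above can be sketched like this; run_bonnie and get_speed are hypothetical hooks for running the benchmark and extracting a single KB/s figure from its output:

```shell
# Reduce one throughput figure per line to min/average/max.
min_avg_max() {
    awk 'NR == 1 { min = max = $1 }
         { sum += $1; if ($1 < min) min = $1; if ($1 > max) max = $1 }
         END { if (NR) printf "min=%s avg=%.1f max=%s\n", min, sum / NR, max }'
}
# usage: for i in 1 2 3 4 5; do run_bonnie | get_speed; done | min_avg_max
```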

/Estel
 

The Following User Says Thank You to Estel For This Useful Post: