I needed (and still need to, gah) to collect data on how a particular change to the Linux kernel MMC block layer ultimately affects write performance in SQLite. Android uses SQLite databases quite heavily, so a simple test of how long it takes to insert some large number of records is probably fair.
I exposed a couple of tunables through debugfs and started collecting data.
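As a sketch of what "poking a tunable through debugfs" looks like from userspace (the attribute path below is made up for illustration; the real file names depend on the kernel change in question):

```shell
#!/bin/sh
# set_tunable FILE VALUE: write VALUE into a debugfs attribute if present.
set_tunable() {
    f=$1; v=$2
    if [ -w "$f" ]; then
        echo "$v" > "$f"
    else
        # Don't abort the whole run just because one attribute is missing.
        echo "skipping $f (not present or not writable)" >&2
    fi
}

# Example usage; this path is hypothetical:
set_tunable /sys/kernel/debug/mmc0/my_tunable 1
```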
Of course, the first set of data came from synthetic benchmarks doing various kinds of O_SYNC+O_DIRECT I/O to the MMC, but it's nice to be able to see the real-life effect of a particular change, especially in combination with file systems. The problem is that the data you get is not repeatable even within the same configuration, which makes comparing data across different configurations tedious at best.
Some things are pretty obvious: mounting all file systems read-only, making sure no other processes are running, etc. Other items don't come to mind immediately (to me, at least :-()...
For example, running the same benchmark (5000 insertions x 50 times) about 10 times in a row on the same file system yields progressively slower results.
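For reference, a scaled-down sketch of that kind of insert benchmark using the sqlite3 CLI (the database path, table schema, and row counts here are illustrative, not my actual test harness):

```shell
#!/bin/sh
# Time batches of SQLite insertions. Override DB/RUNS/ROWS via environment.
command -v sqlite3 >/dev/null 2>&1 || { echo "sqlite3 not found" >&2; exit 0; }

DB="${DB:-$(mktemp -d)/bench.db}"
RUNS="${RUNS:-3}"      # the real test uses 50
ROWS="${ROWS:-1000}"   # the real test uses 5000

sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, payload TEXT);'

i=0
while [ "$i" -lt "$RUNS" ]; do
    start=$(date +%s%N)
    {
        echo 'BEGIN;'
        j=0
        while [ "$j" -lt "$ROWS" ]; do
            echo "INSERT INTO t (payload) VALUES ('x');"
            j=$((j + 1))
        done
        echo 'COMMIT;'
    } | sqlite3 "$DB"
    end=$(date +%s%N)
    # Per-run wall-clock time in milliseconds; watch for a downward trend.
    echo "run $i: $(( (end - start) / 1000000 )) ms"
    i=$((i + 1))
done
```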
Current steps to ensure more-or-less repeatable data gathering:
0) Turn off CPU frequency scaling, RAM frequency scaling, etc. - all power savings off.
1) Unmount the partition containing the files the SQLite test operates on.
2) Perform BLKDISCARD over the partition.
3) Format with the desired file system and mount it again.
4) sync && echo 3 > /proc/sys/vm/drop_caches
5) Perform the test.
6) Repeat from (1).
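The steps above can be wrapped in a script along these lines. The device, mount point, file system, iteration count, and benchmark command name are all assumptions for illustration; since it is destructive (blkdiscard + mkfs), the loop only runs if $DEV actually is a block device, and it must run as root:

```shell
#!/bin/sh
DEV="${DEV:-/dev/mmcblk0p2}"   # partition under test (assumed)
MNT="${MNT:-/mnt/test}"        # mount point (assumed)
FS="${FS:-ext4}"               # file system under test (assumed)

run_once() {
    umount "$MNT" 2>/dev/null || true   # (1) unmount, ignore "not mounted"
    blkdiscard "$DEV"                   # (2) discard the whole partition
    mkfs -t "$FS" -q "$DEV"             # (3) fresh file system...
    mount "$DEV" "$MNT"                 #     ...mounted again
    sync                                # (4) flush dirty pages,
    echo 3 > /proc/sys/vm/drop_caches   #     then drop page/dentry/inode caches
    ./sqlite_insert_test "$MNT"         # (5) the benchmark (hypothetical name)
}

if [ -b "$DEV" ]; then
    # (0) pin every CPU to the performance governor before measuring
    for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        [ -w "$g" ] && echo performance > "$g"
    done
    i=0
    while [ "$i" -lt "${ITERATIONS:-10}" ]; do   # (6) repeat
        run_once
        i=$((i + 1))
    done
fi
```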
Anything else I could have missed?