Benchmarks
-- See also: BogoMips
Disk
bonnie++
Run bonnie++ with 10 concurrent tests in /mnt/test, with 10 x 2 GB files, while limiting RAM size to 1 GB, over 10 runs[1]:
bonnie++ -d /mnt/test -c 10 -s 2048 -n 10 -m foobar -r 1024 -x 10 -u root | tee bonnie_foobar.csv
bon_csv2html < bonnie_foobar.csv > bonnie_foobar.html
Note: the file size should be twice as large as the available memory size.
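Rather than hard-coding the sizes, the -s (file size) and -r (RAM size) values can be derived from /proc/meminfo. A minimal sketch, assuming a Linux system; the variable names are illustrative:

```shell
# Derive bonnie++ sizes from physical RAM: -s should be 2x RAM, -r the RAM itself
ram_mb=$(awk '/MemTotal/ {printf "%d", $2 / 1024}' /proc/meminfo)
size_mb=$((ram_mb * 2))
echo "bonnie++ -d /mnt/test -c 10 -s ${size_mb} -n 10 -r ${ram_mb} -x 10 -u root"
```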
dbench
Run dbench with EA support, for 5 minutes, with 10 concurrent clients:
dbench -x -t 300 -D /mnt/test 10
iozone
iozone runs in the current directory:
cd /mnt/test
iozone -a -g 2048m -e -c -+u -b ~/iozone.xls
This runs in auto mode, with 2 GB files, includes flush and close times, records CPU utilization and generates an Excel report too.[2] Again, the file size should be at least twice as large as the available memory.
tiobench
tiobench runs in the current directory:
cd /mnt/test
tiobench --identifier foobar --progress --size 2048 --numruns 10 --threads 4
Have it run with 2 GB files, 10 times each, with 4 threads. Note that tiobench is a wrapper for tiotest, which has even more options to tweak. The above would translate to:
tiotest -t 4 -f 512 -r 1000 -b 4096 -d . -T
        |    |      |       |       |    |
        |    |      |       |       |    terse output
        |    |      |       |       directory
        |    |      |       blocksize, default
        |    |      iops/thread, default
        |    tiobench --size divided by threads
        tiobench --threads
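The -f value is simple arithmetic: tiobench's --size is the total across all threads. A quick check of that mapping, using the values from the tiobench invocation above (variable names are illustrative):

```shell
size=2048                        # tiobench --size, total MB across all threads
threads=4                        # tiobench --threads
per_thread=$((size / threads))   # becomes tiotest -f (MB per thread)
echo "tiotest -t ${threads} -f ${per_thread} -r 1000 -b 4096 -d . -T"
```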
CPU
unixbench
Build the BYTE UNIX benchmark suite:
git clone https://github.com/kdlucas/byte-unixbench.git byte-unixbench-git
cd $_/UnixBench
make all
Run with verbose output, execute 3 iterations of each test, and run 4 copies of each test in parallel:
$ ./Run -v -i 3 -c 4
[...]
Version 5.1.3                  Based on the Byte Magazine Unix Benchmark

Multi-CPU version              Version 5 revisions by Ian Smith,
                               Sunnyvale, CA, USA
January 13, 2011               johantheghost at yahoo period com

1 x Dhrystone 2 using register variables
...
This will take a while to complete ☕
Memory
Generic
While there are more sophisticated benchmark suites for that, a quick and dirty test would be:
$ pv -Ss 10g < /dev/zero > /dev/null
10GiB 0:00:06 [1.57GiB/s] [===================================>] 100%
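If pv is not installed, plain dd gives a comparably rough figure; it prints the throughput on stderr when it finishes. This is the same quick-and-dirty idea, not a substitute for a real memory benchmark:

```shell
# Stream 10 GiB of zeroes through memory and discard them;
# dd reports the rate at the end
dd if=/dev/zero of=/dev/null bs=1M count=10240
```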
stress-ng
Both a CPU and memory test, stress-ng carries quite a few tests[3] to stress a system.
Exercise $CPUCOUNT matrix operations over 60 seconds and provide some performance stats too:
$ stress-ng --matrix 0 --timeout 60s --times --perf
stress-ng: info: [8404] dispatching hogs: 2 matrix
stress-ng: info: [8404] successful run completed in 60.02s
stress-ng: info: [8705] matrix:
stress-ng: info: [8705]                106 Page Faults Minor     10.18 /sec
stress-ng: info: [8705]                  0 Page Faults Major      0.00 /sec
stress-ng: info: [8705]              1,282 Context Switches     123.15 /sec
[...]
stress-ng: info: [8705] for a 60.02s run time:
stress-ng: info: [8705]     120.82s available CPU time
stress-ng: info: [8705]      59.95s user time   ( 47.79%)
stress-ng: info: [8705]       0.02s system time (  0.10%)
stress-ng: info: [8705]      59.97s total time  ( 47.89%)
stress-ng: info: [8705] load average: 2.47 4.95 5.36
Simulate a memory hog (see "How to fill 90% of the free memory?"):
stress-ng --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' /proc/meminfo)k --vm 1 --vm-keep
Or, with absolute values:
stress-ng --vm-bytes 3G --vm 1 --vm-keep
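The awk expression above takes 90% of MemAvailable, which /proc/meminfo reports in KiB. A quick sanity check of that arithmetic, reading the value once so the two numbers are consistent (variable names are illustrative):

```shell
# Compute the 90%-of-available target the stress-ng command would use
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
target_kb=$(awk -v kb="$avail_kb" 'BEGIN {printf "%d", kb * 0.9}')
echo "stress-ng --vm-bytes ${target_kb}k --vm 1 --vm-keep   # hogs ${target_kb} of ${avail_kb} KiB"
```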
Without stress-ng:
head -c 5g /dev/zero | tail
head -c 5g /dev/zero | pv -L 10m | tail # Same, but more slowly
Network
iperf
iperf3 needs a server to be started in order for the client to run the test:
$ iperf3 -fM -s
Server listening on 5201
On the client, run:
iperf3 -fM -t 10 -P 4 -c server.example.net
This will report the used bandwidth in MB/s, run for 10 seconds with 4 parallel streams, connecting to the server server.example.net (on port 5201).
Links
- Apache/Benchmarks
- MySQL/Benchmark
- PostgreSQL/Benchmark
- Qemu/openssl benchmarks
- OpenBenchmarking.org
- Linux Benchmark Suite Homepage
- Linux-Bench
- Comparing Filesystem Performance in Virtual Machines (January 2014)