Testing server performance
These tools are aimed more at pro users, but homelabbers like to test too, so I included them. Consult the Web for more details on using these tools. See also the Phoronix Test Suite. #homelab
This is for experienced users only
Storage Benchmarking
IOPS (input/output operations per second) is the number of input/output operations a data storage system performs per second (whether a single SAS, SATA, or SSD disk, a RAID array, or a LUN on an external storage device). In general, IOPS refers to the number of blocks that can be read from or written to the medium per second.
Some disk manufacturers specify nominal IOPS values, but these are not guaranteed at all. For high-performance systems it's crucial to understand the performance of your storage subsystem before starting a project, so it is worth measuring the maximum IOPS your storage can actually handle.
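As a rough rule of thumb, throughput equals IOPS times block size, so the same drive gives very different MB/s figures at 4 KB and 1 MB blocks. A quick back-of-the-envelope check (the 20,000 IOPS figure below is just an illustrative number, not a measurement):
# hypothetical drive doing 20,000 IOPS at a 4 KiB block size
# 20000 * 4 KiB = 80000 KiB/s, roughly 78 MiB/s
echo $(( 20000 * 4 / 1024 )) MiB/s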
Here we discuss using FIO (Flexible I/O Tester) for storage benchmarking
Install FIO
apt update && apt install fio
Run FIO
Change to the disk you want to test (cd /storage) and run the tests. fio will then read and write 4 KB blocks (a standard block size) with a 75/25 split between read and write operations and measure the performance.
Run a short test with this command - it creates and runs against a 250 MB file
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=random_read_write.fio --bs=4k --iodepth=64 --size=250M --readwrite=randrw --rwmixread=75
Or run this one, which takes considerably longer - it creates and runs against an 8 GB file
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=random_read_write.fio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75
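fio leaves its test file behind when it finishes; assuming the filename used above, remove it afterwards to reclaim the space:
rm random_read_write.fio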
Sometimes we just need the performance of the disks for random reads or random writes
Random Read Operation Test
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=fiotest --bs=4k --iodepth=64 --size=2G --readwrite=randread
Random Write Test
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=fiotest --bs=4k --iodepth=64 --size=2G --readwrite=randwrite
Random write test for IOPS
sync;fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4k --size=4G --readwrite=randwrite --ramp_time=4
Random read test for IOPS
sync;fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4k --size=4G --readwrite=randread --ramp_time=4
Mixed read/write workload
sync;fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4k --size=4G --readwrite=readwrite --ramp_time=4
Sequential write test for throughput
sync;fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4M --size=4G --readwrite=write --ramp_time=4
Sequential Read test for throughput
sync;fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test --filename=test --bs=4M --size=4G --readwrite=read --ramp_time=4
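If you want to run the random and sequential tests back to back, a small wrapper loop keeps the parameters consistent. This is just a minimal sketch: the test_$mode file names are arbitrary, and you should adjust the target directory and --size to your setup.
# run random IOPS tests and sequential throughput tests in sequence (sketch)
cd /storage
for mode in randread randwrite read write; do
    case $mode in
        read|write) bs=4M ;;   # sequential throughput tests use large blocks
        *)          bs=4k ;;   # random IOPS tests use 4 KB blocks
    esac
    sync
    fio --randrepeat=1 --ioengine=libaio --direct=1 --name=test_$mode \
        --filename=test_$mode --bs=$bs --size=4G --readwrite=$mode --ramp_time=4
    rm -f test_$mode
done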
Using SYSSTAT for Server testing
Sysstat (link to GitHub) is an application comprising several tools for advanced system performance monitoring. It provides a measurable performance baseline. It offers a broad array of statistics and will watch the following:
The sysstat package contains various utilities, common to many commercial Unixes, to monitor system performance and usage activity:
- iostat reports CPU statistics and input/output statistics for block devices and partitions.
- mpstat reports individual or combined processor related statistics.
- pidstat reports statistics for Linux tasks (processes) : I/O, CPU, memory, etc.
- tapestat reports statistics for tape drives connected to the system.
- cifsiostat reports CIFS statistics.
Sysstat also contains tools you can schedule via cron or systemd to collect and store historical performance and activity data:
- sar collects, reports and saves system activity information (see below a list of metrics collected by sar).
- sadc is the system activity data collector, used as a back-end for sar.
- sa1 collects and stores binary data in the system activity daily data file. It is a front end to sadc designed to be run from cron or systemd.
- sa2 writes a summarized daily activity report. It is a front end to sar designed to be run from cron or systemd.
- sadf displays data collected by sar in multiple formats (CSV, XML, JSON, etc.) and can be used for data exchange with other programs. This command can also be used to draw graphs for the various activities collected by sar using SVG (Scalable Vector Graphics) format.
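For example, sadf can render a daily data file as an SVG graph of CPU usage. The sa20 file name below is just an example daily file; point it at a file that actually exists under /var/log/sysstat/ on your system.
# render CPU utilisation from the daily data file as an SVG graph
sadf -g /var/log/sysstat/sa20 -- -u > cpu-day20.svg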
First install the sysstat package
apt update && apt install sysstat
Then edit the config and enable statistics collection: ENABLE="true"
nano /etc/default/sysstat
Save the file and restart sysstat
service sysstat restart
The default sampling interval is 10 minutes, but this can of course be changed (it can be as small as 1 second).
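On Debian and Ubuntu the periodic collection is driven by cron (or systemd timers on newer releases). To verify that data is being gathered, you can force a single one-second sample by hand and check that today's data file appears; note that the sa1 path shown here is the Debian location and may differ on other distributions.
# take one immediate sample, then list the collected data files
/usr/lib/sysstat/sa1 1 1
ls -l /var/log/sysstat/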
Using iostat from sysstat for testing
Flags for iostat
- iostat: Show the default CPU and device report
- iostat -x: Show more detailed (extended) statistics
- iostat -c: Show only the CPU statistics
- iostat -d: Display only the device report
- iostat -xd: Show extended I/O statistics for devices only
- iostat -k: Display the statistics in kilobytes (use -m for megabytes)
- iostat -k 2 3: Display CPU and device statistics three times at a 2-second interval
- iostat -j ID mmcblk0 sda6 -x -m 2 2: Display statistics using persistent device names
- iostat -p: Display statistics for block devices and their partitions (iostat -p sda)
- iostat -N: Display LVM2 statistics using registered device-mapper names
Examples
iostat
iostat -p --human sda
iostat -xdm -d sda
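To see what a benchmark is actually doing to a disk, it can be useful to run iostat in a second terminal while fio is working. Here sda is just an example device and 2 is the refresh interval in seconds:
iostat -xm 2 sda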
For more information on these commands, another good source is GeeksForGeeks
SAR of sysstat
# sar [ -P for per-CPU statistics, -f for the location of a log file under /var/log/sysstat/ ]
sar -P ALL -f /var/log/sysstat/sa20
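sar can also report live values instead of reading from a log file. A few common variants (the interval and count values are arbitrary):
sar -u 2 5        # CPU utilisation, 5 samples at a 2-second interval
sar -r 2 5        # memory utilisation
sar -b 2 5        # I/O and transfer rate statistics
sar -n DEV 2 5    # network interface statistics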
VMSTAT command (provided by the procps package, not sysstat)
Syntax: vmstat [options][delay [count]]
Flags to use, see man vmstat for details
- -a: active/inactive memory
- -f: number of forks since boot
- -m: display slab information
- -s: memory statistics (add -S M for megabytes)
- -d: all disk statistics
- -t: show a timestamp
Key columns in the vmstat output:
- free: the amount of free/idle memory that is not currently in use.
- si: memory swapped in from disk per second, in kilobytes.
- so: memory swapped out to disk per second, in kilobytes.
vmstat
vmstat -S M
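vmstat also accepts the delay and count arguments from the syntax above, which is handy for watching memory, swap, and disk activity over a short window (here a 2-second interval with 5 samples):
vmstat -S M 2 5
vmstat -d 2 5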
Installing LSI MegaCli utility on Proxmox
Set up and manage LSI HBA and RAID controllers and their disks.
Broadcom's pages have more info and a release document
- Install necessary tools
apt update && apt-get install unzip alien
- Install necessary lib
apt install libncurses5
- Download
wget https://docs.broadcom.com/docs-and-downloads/raid-controllers/raid-controllers-common-files/8-07-14_MegaCLI.zip
- Unzip
unzip 8-07-14_MegaCLI.zip
- Create debian package
cd Linux
alien MegaCli-8.07.14-1.noarch.rpm
- Install debian package
dpkg -i megacli_8.07.14-2_all.deb
- Run MegaCli with -h for a menu
/opt/MegaRAID/MegaCli/MegaCli64 -h
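A few MegaCli commands that tend to be useful once it is installed; the syntax below is from the 8.07.14 release, so double-check against the -h output on your controller:
/opt/MegaRAID/MegaCli/MegaCli64 -AdpAllInfo -aAll      # adapter/controller details
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aAll    # logical (virtual) drive info
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll          # list all physical drives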
Notes and references
Proxmox manual and wiki [1] - NFS storage [2] - ZFS over iSCSI [3] - ZFS High Availability [4]
RAID-5 issues [5] - OpenZFS Sys Admin [6] - SATA/SAS controllers [7] - Hardware RAID controllers [8] - SAS (Serial Attached SCSI) [9] - SATA (Serial AT Attachment) [10] - Speeds and how to test SSDs [11] - Phoronix Test Suite for pro testing [12].
Free and open-source [13].
Proxmox Admin Guide, Storage. See the Web page. More info can be found on the Wiki pages. ↩︎
In addition to a mirrored storage pool configuration, ZFS provides a RAID-Z configuration with either single-, double-, or triple-parity fault tolerance. Single-parity RAID-Z (raidz or raidz1) is similar to RAID-5. Double-parity RAID-Z (raidz2) is similar to RAID-6.
All traditional RAID-5-like algorithms (RAID-4, RAID-6, RDP, and EVEN-ODD, for example) might experience a problem known as the RAID-5 write hole. If only part of a RAID-5 stripe is written, and power is lost before all blocks have been written to disk, the parity will remain unsynchronized with the data and therefore forever useless (unless a subsequent full-stripe write overwrites it). In RAID-Z, ZFS uses variable-width RAID stripes so that all writes are full-stripe writes. This design is only possible because ZFS integrates file system and device management in such a way that the file system's metadata has enough information about the underlying data redundancy model to handle variable-width RAID stripes. RAID-Z is the world's first software-only solution to the RAID-5 write hole.
A RAID-Z configuration with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is compromised. You need at least two disks for a single-parity RAID-Z configuration and at least three disks for a double-parity RAID-Z configuration, and so on. For example, if you have three disks in a single-parity RAID-Z configuration, parity data occupies disk space equal to one of the three disks. Otherwise, no special hardware is required to create a RAID-Z configuration. ↩︎
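As a hypothetical worked example: six 4 TB disks in a raidz2 pool (N = 6, P = 2) give roughly (6 - 2) * 4 TB = 16 TB of usable space and can survive any two disks failing.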
From 32 to 2 ports: Ideal SATA/SAS Controllers for ZFS & Linux MD RAID. See the Web page. ↩︎
Hardware and OpenZFS. See this information page. ↩︎
Ceph: how to test if your SSD is suitable as a journal device? See the Web. ↩︎
Free and open-source software needs community support!
There are many recurring costs involved in maintaining free, open-source, and privacy-respecting software. These are expenses that volunteer developers often cover out of pocket.
These are just some fine examples of apps developed by people who care about their software, and of the importance of keeping them maintained. Your support is absolutely vital to keep them innovating and maintained! ↩︎