IOR
Description
IOR (Interleaved or Random) is a commonly used file system benchmarking application particularly well-suited for evaluating the performance of parallel file systems. The software is most commonly distributed in source code form and normally needs to be compiled on the target platform.
IOR is not a Lustre-specific benchmark and can be run on any POSIX-compliant file system, but it does require a fully installed and configured file system implementation in order to run. For Lustre, this means the MGS, MDS and OSS services must be installed, configured and running, and that there is a population of Lustre client nodes running with the Lustre file system mounted.
Purpose
IOR can be used for testing performance of parallel file systems using various interfaces and access patterns. IOR uses MPI for process synchronisation – typically there are several IOR processes running in parallel across several nodes in an HPC cluster. As a user-space benchmarking application, it is suitable for comparing the performance of different file systems. Typically, one IOR process is run on each participating client node mounting the target file system, but this is completely configurable.
Preparation
The ior application is distributed as source code and must be compiled for use on the target environment. The software is hosted as a project on GitHub: https://github.com/hpc/ior
The remainder of this document will use OpenMPI for the examples. Integration with job schedulers is not discussed – examples will call the mpirun command directly.
Download and Compile IOR
To compile the ior benchmark, run the following steps on a suitable machine:
- Install the pre-requisite development tools. On RHEL or CentOS systems, this can be accomplished by running the following command:
sudo yum -y install openmpi-devel git automake
- Download the IOR source:
git clone https://github.com/hpc/ior
- Compile the software:
cd ior
module load mpi/openmpi-x86_64
./bootstrap
./configure [--with-lustre]
make clean && make
- Quickly verify that the program runs:
./src/ior
For example:
[bench@ct73-c1 ior]$ ./src/ior
IOR-3.0.1: MPI Coordinated Test of Parallel I/O

Began: Wed Jun 28 23:37:00 2017
Command line used: ./src/ior
Machine: Linux ct73-c1

Test 0 started: Wed Jun 28 23:37:00 2017
Summary:
        api                = POSIX
        test filename      = testFile
        access             = single-shared-file
        ordering in a file = sequential offsets
        ordering inter file= no tasks offsets
        clients            = 1 (1 per node)
        repetitions        = 1
        xfersize           = 262144 bytes
        blocksize          = 1 MiB
        aggregate filesize = 1 MiB

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   --------   ----
write     1072.96    1024.00    256.00     0.000022   0.000905   0.000005   0.000932   0
read      266.31     1024.00    256.00     0.000005   0.003745   0.000004   0.003755   0
remove    -          -          -          -          -          -          0.000280   0

Max Write: 1072.96 MiB/sec (1125.08 MB/sec)
Max Read:  266.31 MiB/sec (279.25 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write        1072.96    1072.96    1072.96       0.00    0.00093 0 1 1 1 0 0 1 0 0 1 1048576 262144 1048576 POSIX 0
read          266.31     266.31     266.31       0.00    0.00375 0 1 1 1 0 0 1 0 0 1 1048576 262144 1048576 POSIX 0

Finished: Wed Jun 28 23:37:00 2017
- Copy the ior command onto all of the Lustre client nodes that will be used to run the benchmark. Alternatively, copy it onto the Lustre file system itself so that the application is available on all of the nodes automatically. For example:
sudo mkdir -p /lustre/demo/bin
sudo cp ./src/ior /lustre/demo/bin/.
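If the binary is copied to each client individually instead of onto the shared Lustre file system, a simple loop can push it out. The following is a minimal sketch, assuming passwordless SSH for the benchmark account and a host file whose first column is the client host name (the hfile format introduced in the Setup section below); the destination directory is illustrative:
# Push the ior binary to every client listed in $HOME/hfile (comment lines and the
# slots=N column are ignored). Assumes passwordless SSH is already configured.
for host in $(awk '!/^#/ {print $1}' $HOME/hfile); do
  ssh ${host} "mkdir -p ~/bin"
  scp ./src/ior ${host}:bin/ior
done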
Note: There is currently a bug in some versions of the libfabric library, notably version 1.3.0, that can cause a delay in starting MPI applications. When this occurs, the following warning will appear in the command output:
hfi_wait_for_device: The /dev/hfi1_0 device failed to appear after 15.0 seconds: Connection timed out
This issue affects RHEL and CentOS 7.3, and is resolved in RHEL / CentOS 7.4+ and the upstream project. Details can be found here:
https://bugzilla.redhat.com/show_bug.cgi?id=1408316
Prepare the run-time environment
- Create a user account from which to run the application, if a suitable account does not already exist. The account must be propagated across all of the Lustre client nodes that will participate in the benchmark, as well as the MDS servers for the file system. On the servers, it is recommended that the account is disabled in order to prevent users from logging into those machines.
- Some MPI implementations rely upon passphrase-less SSH keys. This enables the mpirun command to launch processes on each of the client nodes that will run the benchmark. To create a key, log in as the benchmark user to one of the nodes and run the ssh-keygen command, supplying an empty passphrase. For example:
[mjcowe@ct7-c1 ~]$ ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
Generating public/private rsa key pair.
Created directory '/home/mjcowe/.ssh'.
Your identification has been saved in /home/mjcowe/.ssh/id_rsa.
Your public key has been saved in /home/mjcowe/.ssh/id_rsa.pub.
The key fingerprint is:
e4:b1:10:a2:7f:e8:b1:74:f3:c3:24:76:46:3d:4d:91 mjcowe@ct7-c1
The key's randomart image is:
+--[ RSA 2048]----+
|    . .  oo      |
|   . . . . oE    |
|  . . + o .      |
| . . = o .       |
|    = * S        |
|   o * O         |
|      o +        |
|       .         |
|                 |
+-----------------+
- Copy the public key into the $HOME/.ssh/authorized_keys file for the account.
- If the user account is not hosted on a shared file system (e.g. a Lustre file system), then copy the public and private keys that were generated into the $HOME/.ssh directory of each of the Lustre client nodes that will be used in the benchmark. Normally, user accounts are hosted on a shared resource, making this step unnecessary.
- Consider relaxing the StrictHostKeyChecking SSH option so that host entries are automatically added to $HOME/.ssh/known_hosts rather than prompting the user to confirm the connection. When running MPI programs across many nodes, this can save a good deal of inconvenience. If the account home directory is not on shared storage, all nodes will need to be updated.
cat >>$HOME/.ssh/config <<\__EOF
Host *
    StrictHostKeyChecking no
__EOF
chmod 0600 $HOME/.ssh/config
- Install the MPI runtime onto all Lustre client nodes:
sudo yum -y install openmpi
- Append the following lines to $HOME/.bashrc (assuming BASH is the login shell) on the account running the benchmark:
module purge
module load mpi/openmpi-x86_64
This ensures that the Open MPI library path and binary path are added to the user environment every time the user logs in (and every time mpirun is invoked across multiple nodes). The .bash_profile file is not read when mpirun starts processes on remote nodes, which is why it is not chosen in this case.
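Before running any benchmarks, it is worth confirming that the MPI environment is visible to non-interactive shells, since that is how mpirun reaches the remote nodes. A minimal check, assuming a client node named ct7-c2 (substitute any host from the benchmark cluster):
# A non-interactive SSH command uses the same environment that mpirun will see.
# mpirun should be found on the PATH if the .bashrc change above is in place.
ssh ct7-c2 'which mpirun && echo $LD_LIBRARY_PATH'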
Benchmark Execution
Setup
- Log in to one of the compute nodes as the benchmark user.
- Create a host file for the mpirun command, containing the list of Lustre clients that will be used for the benchmark. Each line in the file represents a machine and the number of slots (usually equal to the number of CPU cores). For example:
for i in `seq -f "%03g" 1 32`; do
  echo "n"$i" slots=16"
done > $HOME/hfile

# Result:
n001 slots=16
n002 slots=16
n003 slots=16
n004 slots=16
...
- The first column of the host file contains the name of the nodes. This can also be an IP address if the /etc/hosts file or DNS is not set up.
- The second column is used to represent the number of CPU cores.
- Run a quick test using mpirun to launch the benchmark and verify that the environment is set up correctly. For example:
mpirun --hostfile $HOME/hfile --map-by node -np `cat $HOME/hfile|wc -l` hostname
This should return the hostnames of all the machines that are in the test environment. The results are returned unsorted, in order of completion.
Note: If the --map-by node option does not work, and the output contains only one or a very small number of unique hostnames, then set slots=1 for each host in the host file. Otherwise, mpirun will fill up the slots on the first node before launching processes on subsequent nodes. This may be desirable for multi-process tests but not for the single task per client test. Do not set the slot count higher than the number of cores present. If over-subscription is required, set the -np flag to greater than the number of physical cores. This informs OpenMPI that the task will be oversubscribed and will run in a mode that yields the processor to peers (a sketch illustrating over-subscription follows the reference below).
Refer to: OpenMPI FAQ -- Oversubscribing Nodes, and also the notes on OpenMPI at the end of this document.
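As an illustration of the note above, the sketch below over-subscribes a host file in which each node is declared with a single slot. The host names and counts are assumptions, and on newer Open MPI releases the --oversubscribe option may also be required:
# Hypothetical host file: 4 clients, each declared with one slot.
cat > $HOME/hfile_oversub <<\__EOF
ct7-c1 slots=1
ct7-c2 slots=1
ct7-c3 slots=1
ct7-c4 slots=1
__EOF

# Requesting 8 processes against 4 declared slots over-subscribes each node (2 per node).
mpirun --hostfile $HOME/hfile_oversub --map-by node -np 8 hostname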
Example: IOR Read / Write Test, Single File, Multiple Clients
The following annotated script demonstrates how to configure an IOR benchmark for a single shared-file test:
#!/bin/bash
module purge
module load mpi/openmpi-x86_64

IOREXE="/lustre/demo/bin/ior"

# Node count -- not very accurate
NCT=`grep -v ^# hfile |wc -l`

# Date Stamp for benchmark
DS=`date +"%F_%H:%M:%S"`

# IOR will be run in a loop, doubling the number of processes per client node
# with every iteration from $SEQ -> $MAXPROCS. If SEQ=1 and MAXPROCS=8, then the
# iterations will be 1, 2, 4, 8 processes per node.
# SEQ and MAXPROCS should be a power of 2 (including 2^0).
SEQ=1
MAXPROCS=8

# Overall data set size in GiB. Must be >=MAXPROCS. Should be a power of 2.
DATA_SIZE=8

BASE_DIR=/lustre/demo/iorbench
mkdir -p ${BASE_DIR}

while [ ${SEQ} -le ${MAXPROCS} ]; do
  NPROC=`expr ${NCT} \* ${SEQ}`

  # Pick a reasonable block size, bearing in mind the size of the target file system.
  # Bear in mind that the overall data size will be block size * number of processes.
  # Block size must be a multiple of transfer size (-t option in command line).
  BSZ=`expr ${DATA_SIZE} / ${SEQ}`"g"
  # Alternatively, set to a static value and let the data size increase.
  # BSZ="1g"
  # BSZ="${DATA_SIZE}"

  mpirun -np ${NPROC} --map-by node -hostfile ./hfile \
    ${IOREXE} -v -w -r -i 4 \
    -o ${BASE_DIR}/ior-test.file \
    -t 1m -b ${BSZ} \
    -O "lustreStripeCount=-1" | tee ${HOME}/IOR-RW-Single_File-c_${NCT}-s_${SEQ}_${DS}

  SEQ=`expr ${SEQ} \* 2`
done
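Each iteration pipes its output through tee into a results file under ${HOME}. The headline figures can be pulled out of those files afterwards; the one-liner below is a sketch that assumes the file-naming pattern used in the script above and matches the Max Write / Max Read lines shown in the sample output earlier:
# Summarise peak write and read bandwidth across all runs of the single-file test.
grep -H -E "^Max (Write|Read):" ${HOME}/IOR-RW-Single_File-c_*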
Example: IOR Read/Write Test, Multiple Files per Process, Multiple Clients
This script is similar to the previous example, but this time the -F flag is used, informing IOR to create a unique file per process. Additionally, the Lustre stripe count is set to 1.
#!/bin/bash
module purge
module load mpi/openmpi-x86_64

IOREXE="/lustre/demo/bin/ior"
NCT=`grep -v ^# hfile |wc -l`
DS=`date +"%F_%H:%M:%S"`
SEQ=1
MAXPROCS=8
DATA_SIZE=8

BASE_DIR=/lustre/demo/iorbench
mkdir -p ${BASE_DIR}

while [ ${SEQ} -le ${MAXPROCS} ]; do
  NPROC=`expr ${NCT} \* ${SEQ}`
  BSZ=`expr ${DATA_SIZE} / ${SEQ}`"g"
  # BSZ="1g"
  # BSZ="${DATA_SIZE}"

  mpirun -np ${NPROC} --map-by node -hostfile ./hfile \
    ${IOREXE} -v -w -r -i 4 -F \
    -o ${BASE_DIR}/test/ior-test.file \
    -t 1m -b ${BSZ} \
    -O "lustreStripeCount=1" | tee ${HOME}/IOR-RW-Multiple_Files-Common_Dir-c_${NCT}-s_${SEQ}_${DS}

  SEQ=`expr ${SEQ} \* 2`
done
Optionally, add the -u flag to create a unique directory for each file created. The full file name paths for each process can also be specified by supplying a list of files delimited by @ to the -o flag. This can be useful for DNE testing.
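As an illustration, the following sketch combines the file-per-process, unique-directory and explicit file list options in a single invocation; the paths and process count are hypothetical, and when exercising DNE the target directories would typically be created on different MDTs:
# File-per-process (-F) with per-file directories (-u) and an @-delimited file list (-o).
# The three target paths below are illustrative only.
mpirun -np 3 --map-by node -hostfile ./hfile \
  /lustre/demo/bin/ior -v -w -r -i 2 -F -u \
  -t 1m -b 1g \
  -o /lustre/demo/test.dat0@/lustre/demo/test.dat1@/lustre/demo/test.dat2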
Commonly Used Options
Option | Description |
---|---|
-w | Write file |
-r | Read existing file – when combined with -w, the write test executes first to create the file for the read test to use. |
-o <file> [@<file> [@<file>] ...] | The file [list] to use in the test. For multi-file (file-per-process) tests, the file name is a template for each file that will be created (the file path will be appended with a unique number). When combined with the unique directory name option (-u), each directory is numbered and the files are created one per numbered directory. For example: -F -o /lustre/demo/test.dat → /lustre/demo/test.dat.{seq}; -F -u -o /lustre/scratch/test.dat → /lustre/scratch/{index}/test.dat.{seq}; -F -u -o /lustre/demo/test.dat0@/lustre/demo/test.dat1@/lustre/demo/test.dat2 |
-O "<directive>" | Comma-separated list of IOR directives used to set various parameters. There are several Lustre-specific directives; the example scripts above use lustreStripeCount. |
-t <int> | Size of data transfer in bytes (e.g. 8, 4k, 2m, 1g). This is the equivalent of the RPC transaction size and should normally be set to 1m for Lustre. |
-b <int> | Size of the data block in bytes (e.g. 8, 4k, 2m, 1g). This is the size of the block of data that each process will write and must be a multiple of the transfer size. Set it to a large number. For single-file tests, multiply the block size by the number of tasks to get the file size. For multiple-file tests, each file will be block size bytes. The file system must have block size * nprocs of free space. |
-i <int> | The number of iterations to run. |
-v[v[v] ...] | Increase verbosity of output. Add more -v flags to increase the level of detail. |
-u | When each process creates a separate file, use a unique directory name for each file-per-process. |
-C | Re-order tasks: change the task ordering to n+1 ordering for read-back. May avoid read cache effects on the clients. |
-F | Create a separate file for each process (often referred to as file-per-process). |
-m | Multi-file option: use the number of iterations (the -i flag) as the count of the number of files. |
-k | Do not remove test file(s) on program exit. |
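To tie the options together, the command below is a sketch of a typical invocation (host file, process count and paths are illustrative): a file-per-process write-then-read test over four iterations, with task re-ordering for the read phase, keeping the files for later inspection.
# 16 processes, file-per-process (-F), write then read (-w -r), reordered read-back (-C),
# 1 MiB transfers (-t), 4 GiB per process (-b), 4 iterations (-i), keep the files (-k).
mpirun -np 16 --map-by node -hostfile ./hfile \
  /lustre/demo/bin/ior -v -w -r -C -F -k -i 4 \
  -t 1m -b 4g \
  -o /lustre/demo/iorbench/ior-test.file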
Notes on OpenMPI
When preparing the benchmark, pay careful attention to the distribution of processes across the nodes. By default, mpirun will fill the slots of one node before allocating processes to the next node in the list, i.e. all of the slots on the first node in the file will be consumed before allocating processes to the second node, then the third node, and so on. If the number of slots requested is lower than the overall number of slots in the host file, then utilisation will not be evenly distributed, and some nodes may not be used at all.
If the number of processes is larger than the number of available slots, mpirun will oversubscribe one or more nodes until all of the processes have been launched. This can be exploited to create a more even distribution of processes across nodes by setting the number of slots per host to 1. However, note that mpirun will decide where the additional processes run, which can lead to performance variance from run to run of a job.
The --map-by node option distributes processes evenly across the nodes, and does not try to consume all of the slots from one node before allocating processes to the next node in the list. For example, if there are 4 nodes, each with 16 slots (64 slots total), and a job is submitted that requires only 24 slots, then each node will be allocated 6 processes.
Experiment with the options by using the hostname command as the target application. For example:
[mduser@ct7-c1 ~]$ cat $HOME/hfile
ct7-c1 slots=16
ct7-c2 slots=16
ct7-c3 slots=16
ct7-c4 slots=16

# By default, mpirun will fill slots on one node before allocating slots from the next:
[mduser@ct7-c1 ~]$ mpirun --hostfile $HOME/hfile -np `cat $HOME/hfile|wc -l` hostname
ct7-c1
ct7-c1
ct7-c1
ct7-c1

# The --map-by node option distributes the processes evenly:
[mduser@ct7-c1 ~]$ mpirun --hostfile $HOME/hfile --map-by node -np `cat $HOME/hfile|wc -l` hostname
ct7-c2
ct7-c1
ct7-c3
ct7-c4
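When the process count is larger than a handful, it is easier to count the hostnames than to read them; the following pipeline is a small sketch using standard tools, reusing the 24-process example from above:
# Count how many processes were launched on each node.
mpirun --hostfile $HOME/hfile --map-by node -np 24 hostname | sort | uniq -c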
The -np parameter is the total number of processes. If the host file has 16 nodes but the value of -np is 1, then only one process on one node is being used to complete the operations.
The mpirun(1) man page provides a comprehensive description of the available options.
See also the OpenMPI FAQ, and the section on oversubscription.