IOR

Description
IOR (Interleaved or Random) is a commonly used file system benchmarking application particularly well-suited for evaluating the performance of parallel file systems. The software is most commonly distributed in source code form and normally needs to be compiled on the target platform.

IOR is not a Lustre-specific benchmark and can be run on any POSIX-compliant file system, but it does require a fully installed and configured file system implementation in order to run. For Lustre, this means the MGS, MDS and OSS services must be installed, configured and running, and that there is a population of Lustre client nodes running with the Lustre file system mounted.
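Before running any benchmarks it is worth confirming that the Lustre file system is mounted and visible on each client. A minimal check, assuming the /lustre/demo mount point used in the examples later in this document:

    # List Lustre mounts on this client
    mount -t lustre

    # Show per-OST and per-MDT capacity and usage for the mounted file system
    lfs df -h /lustre/demo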

Purpose
IOR can be used for testing performance of parallel file systems using various interfaces and access patterns. IOR uses MPI for process synchronisation – typically there are several IOR processes running in parallel across several nodes in an HPC cluster. As a user-space benchmarking application it is suitable for comparing the performance of different file systems. Typically one IOR process is run on each participating client node mounting the target file system but this is completely configurable.
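As an illustration of this launch model, the sketch below starts one IOR process on each of four client nodes, writing and then reading a shared file on the mounted file system. The host file name, process count, and target path are placeholders, and the ior binary is assumed to be on the PATH:

    # One IOR process per client node, 1 MiB transfers, 1 GiB block per process
    mpirun --hostfile ./hfile --map-by node -np 4 \
        ior -v -w -r -t 1m -b 1g -o /lustre/demo/ior-test.file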

Preparation
The IOR application is distributed as source code and must be compiled for use on the target environment. The software is hosted as a project on GitHub:

https://github.com/LLNL/ior

The remainder of this document will use OpenMPI for the examples. Integration with job schedulers is not discussed – examples will call the mpirun command directly.

Download and Compile IOR
To compile the IOR benchmark, run the following steps on a suitable machine:

1. Install the pre-requisite development tools. On RHEL or CentOS systems, this can be accomplished by running the following command:

       sudo yum -y install openmpi-devel git automake

2. Download the IOR source:

       git clone https://github.com/LLNL/ior.git

3. Compile the software:

       cd ior
       module load mpi/openmpi-x86_64
       ./bootstrap
       ./configure [--with-lustre]
       make clean && make

4. Quickly verify that the program runs:

       ./src/ior

   For example:

       [bench@ct73-c1 ior]$ ./src/ior
       IOR-3.0.1: MPI Coordinated Test of Parallel I/O

       Began: Wed Jun 28 23:37:00 2017
       Command line used: ./src/ior
       Machine: Linux ct73-c1

       Test 0 started: Wed Jun 28 23:37:00 2017
       Summary:
           api                = POSIX
           test filename      = testFile
           access             = single-shared-file
           ordering in a file = sequential offsets
           ordering inter file= no tasks offsets
           clients            = 1 (1 per node)
           repetitions        = 1
           xfersize           = 262144 bytes
           blocksize          = 1 MiB
           aggregate filesize = 1 MiB

       access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
       ------    ---------  ---------- ---------  --------   --------   --------   --------   ----
       write     1072.96    1024.00    256.00     0.000022   0.000905   0.000005   0.000932   0
       read      266.31     1024.00    256.00     0.000005   0.003745   0.000004   0.003755   0
       remove    -          -          -          -          -          -          0.000280   0

       Max Write: 1072.96 MiB/sec (1125.08 MB/sec)
       Max Read:  266.31 MiB/sec (279.25 MB/sec)

       Summary of all tests:
       Operation  Max(MiB)   Min(MiB)  Mean(MiB)   StdDev   Mean(s)  Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
       write       1072.96    1072.96    1072.96      0.00   0.00093  0 1 1 1 0 0 1 0 0 1 1048576 262144 1048576 POSIX 0
       read         266.31     266.31     266.31      0.00   0.00375  0 1 1 1 0 0 1 0 0 1 1048576 262144 1048576 POSIX 0

       Finished: Wed Jun 28 23:37:00 2017

5. Copy the ior binary onto all of the Lustre client nodes that will be used to run the benchmark. Alternatively, copy it onto the Lustre file system itself so that the application is available on all of the nodes automatically (a per-node copy sketch follows this list). For example:

       sudo mkdir -p /lustre/demo/bin
       sudo cp ./src/ior /lustre/demo/bin/.
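If a shared file system location is not used, the binary can instead be copied to each client individually. A rough sketch, assuming the n001..n032 client naming used in the host file example later in this document, passwordless SSH access, and an existing (illustrative) target directory on each node:

    for i in `seq -f "%03g" 1 32`; do
      scp ./src/ior n$i:/home/bench/bin/ior
    done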

Note: There is currently a bug in some versions of the libfabric library, notably version 1.3.0, that can cause a delay in starting MPI applications. When this occurs, the following warning will appear in the command output:

    hfi_wait_for_device: The /dev/hfi1_0 device failed to appear after 15.0 seconds: Connection timed out

This issue affects RHEL and CentOS 7.3, and is resolved in RHEL / CentOS 7.4+ and the upstream project. Details can be found here:

https://bugzilla.redhat.com/show_bug.cgi?id=1408316

Prepare the run-time environment
1. Create a user account from which to run the application, if a suitable account does not already exist. The account must be propagated across all of the Lustre client nodes that will participate in the benchmark, as well as the MDS servers for the file system. On the servers, it is recommended that the account is disabled in order to prevent users from logging into those machines.

2. Some MPI implementations rely upon passphrase-less SSH keys, which enable the mpirun command to launch processes on each of the client nodes that will run the benchmark. To create a key, log in as the benchmark user on one of the nodes and run the ssh-keygen command, supplying an empty passphrase. For example:

       [mjcowe@ct7-c1 ~]$ ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
       Generating public/private rsa key pair.
       Created directory '/home/mjcowe/.ssh'.
       Your identification has been saved in /home/mjcowe/.ssh/id_rsa.
       Your public key has been saved in /home/mjcowe/.ssh/id_rsa.pub.
       The key fingerprint is:
       e4:b1:10:a2:7f:e8:b1:74:f3:c3:24:76:46:3d:4d:91 mjcowe@ct7-c1
       The key's randomart image is:
       +--[ RSA 2048]----+
       ...
       +-----------------+

3. Copy the public key into the authorized_keys file for the account (a short sketch follows this list).

4. If the user account is not hosted on a shared file system (e.g. a Lustre file system), copy the public and private keys that were generated into the $HOME/.ssh directory on each of the Lustre client nodes that will be used in the benchmark. Normally, user accounts are hosted on a shared resource, making this step unnecessary.

5. Consider relaxing the StrictHostKeyChecking SSH option so that host entries are automatically added to the known_hosts file rather than prompting the user to confirm each connection. When running MPI programs across many nodes, this can save a good deal of inconvenience. If the account home directory is not on shared storage, all nodes will need to be updated:

       cat >>$HOME/.ssh/config <<\__EOF
       Host *
           StrictHostKeyChecking no
       __EOF
       chmod 0600 $HOME/.ssh/config

6. Install the MPI runtime onto all Lustre client nodes:

       sudo yum -y install openmpi

7. Append the following lines to ~/.bashrc (assuming BASH is the login shell) for the account that runs the benchmark:

       module purge
       module load mpi/openmpi-x86_64

   This ensures that the Open MPI library path and binary path are added to the user environment every time the user logs in (and every time mpirun launches a process on a remote node). The ~/.bash_profile file is not read when mpirun starts processes on remote nodes, which is why it is not used in this case.
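A minimal sketch of steps 3 and 4 above, run as the benchmark user on the node where the key pair was generated (ssh-copy-id can be used instead, where available; the node name is illustrative):

    # Authorise the new key for the benchmark account
    cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
    chmod 0600 $HOME/.ssh/authorized_keys

    # Only needed if home directories are not shared: push the keys to each client
    scp -r $HOME/.ssh n001: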

Setup
1. Log in to one of the compute nodes as the benchmark user.

2. Create a host file for the mpirun command, containing the list of Lustre clients that will be used for the benchmark. Each line in the file represents a machine and the number of slots (usually equal to the number of CPU cores). For example:

       for i in `seq -f "%03g" 1 32`; do
         echo "n"$i" slots=16"
       done > $HOME/hfile

       n001 slots=16
       n002 slots=16
       n003 slots=16
       n004 slots=16
       ...

   The first column of the host file contains the name of the node; this can also be an IP address if the hosts file or DNS is not set up. The second column represents the number of CPU cores available on that node.

3. Run a quick test using mpirun to launch the hostname command and verify that the environment is set up correctly. For example:

       mpirun --hostfile $HOME/hfile --map-by node -np `cat $HOME/hfile|wc -l` hostname

   This should return the hostnames of all the machines that are in the test environment. The results are returned unsorted, in order of completion.

Note: If the hostname test does not work, and the output contains only one, or a very small number of, unique hostnames repeated, then set slots=1 for each host in the host file. Otherwise, mpirun will fill up the slots on the first node before launching processes on subsequent nodes.

This may be desirable for multi-process tests, but not for a single-task-per-client test. Do not set the slot count higher than the number of cores present. If over-subscription is required, set the -np flag to a value greater than the total number of physical cores; this informs OpenMPI that the nodes will be oversubscribed and processes will run in a mode that yields the processor to their peers (see the sketch below).

Refer to: OpenMPI FAQ -- Oversubscribing Nodes, and also the notes on OpenMPI at the end of this document.
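The following sketch illustrates the slots=1 approach with four clients; the host names are placeholders. Each host advertises a single slot, so -np values above the host count cause OpenMPI to oversubscribe the nodes evenly (depending on the OpenMPI version, the --oversubscribe option may also be required):

    # Hypothetical host file with one slot per node
    cat > $HOME/hfile <<__EOF
    n001 slots=1
    n002 slots=1
    n003 slots=1
    n004 slots=1
    __EOF

    # 8 processes across 4 single-slot hosts: two processes land on each node
    mpirun --hostfile $HOME/hfile --map-by node -np 8 hostname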

Example: IOR Read / Write Test, Single File, Multiple Clients
The following annotated script demonstrates how to configure an IOR benchmark for a single shared-file test:

    #!/bin/bash
    module purge
    module load mpi/openmpi-x86_64

    IOREXE="/lustre/demo/bin/ior"

    # Node count -- not very accurate
    NCT=`grep -v ^# hfile |wc -l`
    # Date stamp for the benchmark
    DS=`date +"%F_%H:%M:%S"`
    # IOR will be run in a loop, doubling the number of processes per client node
    # with every iteration from $SEQ -> $MAXPROCS. If SEQ=1 and MAXPROCS=8, then the
    # iterations will be 1, 2, 4, 8 processes per node.
    # SEQ and MAXPROCS should be a power of 2 (including 2^0).
    SEQ=1
    MAXPROCS=8
    # Overall data set size in GiB. Must be >= MAXPROCS. Should be a power of 2.
    DATA_SIZE=8
    BASE_DIR=/lustre/demo/iorbench
    mkdir -p ${BASE_DIR}

    while [ ${SEQ} -le ${MAXPROCS} ]; do
      NPROC=`expr ${NCT} \* ${SEQ}`
      # Pick a reasonable block size, bearing in mind the size of the target file system.
      # The overall data size will be block size * number of processes.
      # Block size must be a multiple of the transfer size (-t option on the command line).
      # Alternatively, set to a static value and let the data size increase, e.g.:
      #   BSZ="1g"
      #   BSZ="${DATA_SIZE}"
      BSZ=`expr ${DATA_SIZE} / ${SEQ}`"g"
      mpirun -np ${NPROC} --map-by node -hostfile ./hfile \
        ${IOREXE} -v -w -r -i 4 \
        -o ${BASE_DIR}/ior-test.file \
        -t 1m -b ${BSZ} \
        -O "lustreStripeCount=-1" | tee ${HOME}/IOR-RW-Single_File-c_${NCT}-s_${SEQ}_${DS}
      SEQ=`expr ${SEQ} \* 2`
    done
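To illustrate how the loop scales, assume a host file with 4 clients (NCT=4) and the default DATA_SIZE=8: the aggregate data set stays constant while the process count doubles on each iteration:

    # SEQ=1: NPROC=4,  BSZ=8g  -> aggregate = 4  x 8 GiB = 32 GiB
    # SEQ=2: NPROC=8,  BSZ=4g  -> aggregate = 8  x 4 GiB = 32 GiB
    # SEQ=4: NPROC=16, BSZ=2g  -> aggregate = 16 x 2 GiB = 32 GiB
    # SEQ=8: NPROC=32, BSZ=1g  -> aggregate = 32 x 1 GiB = 32 GiB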

Example: IOR Read/Write Test, Multiple Files per Process, Multiple Clients
This script is similar to the previous example, but this time the -F flag is used, informing IOR to create a unique file per process. Additionally, the Lustre stripe count is set to 1.

    #!/bin/bash
    module purge
    module load mpi/openmpi-x86_64

    IOREXE="/lustre/demo/bin/ior"

    NCT=`grep -v ^# hfile |wc -l`
    DS=`date +"%F_%H:%M:%S"`
    SEQ=1
    MAXPROCS=8
    DATA_SIZE=8
    BASE_DIR=/lustre/demo/iorbench
    # Create the target directory, including the test sub-directory used by -o below
    mkdir -p ${BASE_DIR}/test

    while [ ${SEQ} -le ${MAXPROCS} ]; do
      NPROC=`expr ${NCT} \* ${SEQ}`
      # Alternatively, set to a static value and let the data size increase, e.g.:
      #   BSZ="1g"
      #   BSZ="${DATA_SIZE}"
      BSZ=`expr ${DATA_SIZE} / ${SEQ}`"g"
      mpirun -np ${NPROC} --map-by node -hostfile ./hfile \
        ${IOREXE} -v -w -r -i 4 -F \
        -o ${BASE_DIR}/test/ior-test.file \
        -t 1m -b ${BSZ} \
        -O "lustreStripeCount=1" | tee ${HOME}/IOR-RW-Multiple_Files-Common_Dir-c_${NCT}-s_${SEQ}_${DS}
      SEQ=`expr ${SEQ} \* 2`
    done

Optionally, add the -u flag to create a unique directory for each file created. The full file name path for each process can also be specified by supplying a list of file names, delimited by the '@' character, to the -o flag. This can be useful for DNE testing.
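A sketch of these options, reusing variables from the script above; the directory names are illustrative only:

    # -u creates a unique sub-directory per process for the file-per-process test
    mpirun -np ${NPROC} --map-by node -hostfile ./hfile \
      ${IOREXE} -v -w -r -i 4 -F -u \
      -o ${BASE_DIR}/test/ior-test.file \
      -t 1m -b ${BSZ}

    # An explicit '@'-delimited list of target paths can spread files across
    # directories, e.g. directories located on different MDTs for DNE testing
    mpirun -np ${NPROC} --map-by node -hostfile ./hfile \
      ${IOREXE} -v -w -r -i 4 -F \
      -o ${BASE_DIR}/dir0/ior-test.file@${BASE_DIR}/dir1/ior-test.file \
      -t 1m -b ${BSZ}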

Notes on OpenMPI
When preparing the benchmark, pay careful attention to the distribution of processes across the nodes. By default, mpirun will fill the slots of one node before allocating processes to the next node in the list, i.e. all of the slots on the first node in the file will be consumed before allocating processes to the second node, then the third node, and so on. If the number of slots requested is lower than the overall number of slots in the host file, then utilisation will not be evenly distributed, and some nodes may not be used at all.

If the number of processes is larger than the number of available slots, mpirun will oversubscribe one or more nodes until all of the processes have been launched. This can be exploited to create a more even distribution of processes across nodes by setting the number of slots per host to 1. However, note that mpirun will decide where the additional processes run, which can lead to performance variance from run to run of a job.

The --map-by node option distributes processes evenly across the nodes, and does not try to consume all of the slots on one node before allocating processes to the next node in the list. For example, if there are 4 nodes, each with 16 slots (64 slots total), and a job is submitted that requires only 24 slots, then each node will be allocated 6 processes.

Experiment with the options by using the hostname command as the target application. For example:

    [mduser@ct7-c1 ~]$ cat $HOME/hfile
    ct7-c1 slots=16
    ct7-c2 slots=16
    ct7-c3 slots=16
    ct7-c4 slots=16

By default, mpirun will fill the slots on one node before allocating slots from the next:

    [mduser@ct7-c1 ~]$ mpirun --hostfile $HOME/hfile -np `cat $HOME/hfile|wc -l` hostname
    ct7-c1
    ct7-c1
    ct7-c1
    ct7-c1

The --map-by node option distributes the processes evenly:

    [mduser@ct7-c1 ~]$ mpirun --hostfile $HOME/hfile --map-by node -np `cat $HOME/hfile|wc -l` hostname
    ct7-c2
    ct7-c1
    ct7-c3
    ct7-c4

The -np parameter sets the total number of processes to launch. If the host file has 16 nodes but the value of -np is 1, then only one process on one node is being used to complete the operations.

The mpirun man page provides a comprehensive description of the available options.

See also the OpenMPI FAQ, and the section on oversubscription.