CHAPTER 25

Lustre Tuning

This chapter contains information about tuning Lustre for better performance and includes the following sections:

- Optimizing the Number of Service Threads
- Tuning LNET Parameters
- Lockless I/O Tunables
- Improving Lustre Performance When Working with Small Files
- Understanding Why Write Performance Is Better Than Read Performance

Note - Many options in Lustre are set by means of kernel module parameters. These parameters are contained in the modprobe.conf file.

25.1 Optimizing the Number of Service Threads

An OSS can have a minimum of 2 service threads and a maximum of 512 service threads. The number of service threads is a function of how much RAM and how many CPUs are on each OSS node (1 thread per 128 MB of RAM, multiplied by the number of CPUs). If the load on the OSS node is high, new service threads are started to process more requests concurrently, up to 4x the initial number of threads (subject to the maximum of 512). For a 2 GB, 2-CPU system, the default thread count is 32 and the maximum thread count is 128.
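As a quick sanity check of this formula, the following snippet (a minimal sketch that assumes a Linux /proc/meminfo and the coreutils nproc utility) computes the default thread count for the local node:

# Default OSS thread count: 1 thread per 128 MB of RAM, times the CPU count.
# MemTotal in /proc/meminfo is reported in kB; 128 MB = 131072 kB.
awk -v cpus="$(nproc)" '/^MemTotal:/ { print int($2 / 131072) * cpus }' /proc/meminfo

On the 2 GB, 2-CPU system described above, this prints 32, matching the default thread count.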

Increasing the size of the thread pool may help when:

- Several OSTs are exported from a single OSS
- Back-end storage is running synchronously
- I/O completions take excessive time due to slow storage

Decreasing the size of the thread pool may help if:

- Clients are overwhelming the storage capacity
- There are lots of "slow I/O" or similar messages in the OSS logs

Increasing the number of I/O threads allows the kernel and storage to aggregate many writes together for more efficient disk I/O. The OSS thread pool is shared; each thread allocates approximately 1.5 MB (maximum RPC size + 0.5 MB) for internal I/O buffers, so at the maximum of 512 threads the buffers alone consume roughly 768 MB of RAM.

It is very important to consider memory consumption when increasing the thread pool size. Drives are only able to sustain a certain amount of parallel I/O activity before performance is degraded, due to the high number of seeks and the OST threads just waiting for I/O. In this situation, it may be advisable to decrease the load by decreasing the number of OST threads.

Determining the optimum number of OST threads is a process of trial and error, and varies for each particular configuration. Variables include the number of OSTs on each OSS, the number and speed of disks, the RAID configuration, and the available RAM. You may want to start with a number of OST threads equal to the number of actual disk spindles on the node. If you use RAID, subtract any spindles not used for actual data (e.g., 1 of N spindles for RAID 5, 2 of N spindles for RAID 6); for example, an OSS serving two 10-disk RAID 6 OSTs has 2 x (10 - 2) = 16 data spindles, making 16 threads a reasonable starting point. Monitor the performance of clients during usual workloads. If performance is degraded, increase the thread count, retest, and repeat until performance degrades again or reaches a satisfactory level.

Note - If there are too many threads, the latency for individual I/O requests can become very high; this should be avoided. Once you have found a good value, set the desired maximum thread count permanently using the module parameters described in the following sections.

25.1.1 Specifying the OSS Service Thread Count

The oss_num_threads parameter enables the number of OST service threads to be specified at module load time on the OSS nodes:

options ost oss_num_threads={N}
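For example, to start 64 OST service threads at module load time, add the following line to modprobe.conf (the value 64 is illustrative only; choose a value using the spindle-count guidelines above):

options ost oss_num_threads=64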

After startup, the minimum and maximum OSS thread counts can be set via the {service}.threads_{min,max,started} tunable. To change the tunable at runtime, run:

lctl {get,set}_param {service}.threads_{min,max,started}
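For example, on an OSS the bulk I/O service typically appears as ost.OSS.ost_io (the service name and the value 256 below are illustrative; both may differ on your system):

# read the current maximum and currently running thread counts
lctl get_param ost.OSS.ost_io.threads_max ost.OSS.ost_io.threads_started
# raise the maximum thread count at runtime (not persistent across restarts)
lctl set_param ost.OSS.ost_io.threads_max=256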

For details, see Setting MDS and OSS Thread Counts.

25.1.2 Specifying the MDS Service Thread Count

The mds_num_threads parameter enables the number of MDS service threads to be specified at module load time on the MDS node:

options mds mds_num_threads={N}

After startup, the minimum and maximum MDS thread counts can be set via the {service}.threads_{min,max,started} tunable. To change the tunable at runtime, run:

lctl {get,set}_param {service}.threads_{min,max,started}
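For example, to inspect and raise the MDS service thread maximum at runtime (the service name mds.MDS.mdt and the value 128 are illustrative; both may differ on your system):

lctl get_param mds.MDS.mdt.threads_max
lctl set_param mds.MDS.mdt.threads_max=128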

For details, see Setting MDS and OSS Thread Counts.

At this time, no testing has been done to determine the optimal number of MDS threads. The default value varies based on server size, up to a maximum of 32. The maximum number of threads (MDS_MAX_THREADS) is 512.

Note - The OSS and MDS automatically start new service threads dynamically, in response to server load, within a factor of 4 of the initial thread count. The default value is calculated the same way as before. Setting one of the *_num_threads module parameters (oss_num_threads or mds_num_threads) disables this automatic thread creation behavior.

25.2 Tuning LNET Parameters

This section describes LNET tunables that may be necessary on some systems to improve performance. To test the performance of your Lustre network, see Chapter 23: Testing Lustre Network Performance (LNET Self-Test).

25.2.1 Transmit and Receive Buffer Size

The kernel allocates buffers for sending and receiving messages on a network.

ksocklnd has separate parameters for the transmit and receive buffers.

options ksocklnd tx_buffer_size=0 rx_buffer_size=0

If these parameters are left at the default value (0), the system automatically tunes the transmit and receive buffer size. In almost every case, this default produces the best performance. Do not attempt to tune these parameters unless you are a network expert.

25.2.2 Hardware Interrupts (enable_irq_affinity)

The hardware interrupts that are generated by network adapters may be handled by any CPU in the system. In some cases, we would like network traffic to remain local to a single CPU to help keep the processor cache warm and minimize the impact of context switches. This is helpful when an SMP system has more than one network interface and ideal when the number of interfaces equals the number of CPUs. To enable the enable_irq_affinity parameter, enter:

options ksocklnd enable_irq_affinity=1

In other cases, if you have an SMP platform with a single fast interface such as 10Gb Ethernet and more than two CPUs, you may see performance improve by turning this parameter off.

options ksocklnd enable_irq_affinity=0

By default, this parameter is off. As always, you should test the performance to compare the impact of changing this parameter.

25.3 Lockless I/O Tunables

The lockless I/O tunable feature allows servers to ask clients to do lockless I/O (liblustre-style where the server does the locking) on contended files.

The lockless I/O patch introduces these tunables:

Server-side (in /proc/fs/lustre/ldlm/namespaces/filter-lustre-*):

- contended_locks - If the number of lock conflicts found in the scan of the granted and waiting queues exceeds contended_locks, the resource is considered to be contended.

- contention_seconds - The time, in seconds, that the resource keeps itself in the contended state.

- max_nolock_bytes - Server-side locking is performed only for requests smaller than max_nolock_bytes. If this tunable is set to zero (0), server-side locking is disabled for read/write requests.

Client-side (in /proc/fs/lustre/llite/lustre-*):

- contention_seconds - The time, in seconds, for which an llite inode remembers its contended state.

Client-side statistics:

The /proc/fs/lustre/llite/lustre-*/stats file has new rows for lockless I/O statistics.

- lockless_read_bytes and lockless_write_bytes - Count the total bytes read or written using lockless I/O. The client makes this decision on its own, based on the request size, without communicating with the server; requests smaller than min_nolock_size are not performed lockless and use ordinary client-acquired locks.
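As a hedged illustration, these tunables can be read and set through the /proc paths above or with lctl (the lockless tunables exist only with the lockless I/O patch, and the exact namespace names vary with the file system name):

# view the lockless I/O statistics on a client
grep lockless /proc/fs/lustre/llite/lustre-*/stats
# mark a resource contended after 32 conflicting locks (on the OSS)
lctl set_param 'ldlm.namespaces.filter-lustre-*.contended_locks=32'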

25.4 Improving Lustre Performance When Working with Small Files

A Lustre environment where an application writes small file chunks from many clients to a single file will result in bad I/O performance. To improve Lustre’s performance with small files:

- Have the application aggregate writes for some time before submitting them to Lustre. By default, Lustre enforces POSIX coherency semantics, which results in lock ping-pong between client nodes if they are all writing to the same file at one time.

- Have the application do 4 KB O_DIRECT sized I/O to the file and disable locking on the output file; this avoids partial-page I/O submissions and, by disabling locking, avoids contention between clients (see the sketch after this list).

- Have the application write contiguous data.

- Add more disks, or use SSD disks, for the OSTs. This dramatically improves the IOPS rate. Consider creating larger OSTs rather than many smaller OSTs, due to the lower overhead (journal, connections, etc.).

- Use RAID 1+0 OSTs instead of RAID 5/6; there is RAID parity overhead when writing small chunks of data to disk.
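As a rough illustration of the O_DIRECT suggestion above (the mount point /mnt/lustre and the sizes are assumptions, and dd by itself does not disable file locking):

# write 4 MB as 4 KB direct I/Os, bypassing the client page cache
dd if=/dev/zero of=/mnt/lustre/outfile bs=4k count=1024 oflag=direct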

25.5 Understanding Why Write Performance Is Better Than Read Performance

Typically, the performance of write operations on a Lustre cluster is better than the performance of read operations. When doing writes, all clients send write RPCs asynchronously. The RPCs are allocated and written to disk in the order they arrive. In many cases, this allows the back-end storage to aggregate writes efficiently.

In the case of read operations, the reads from clients may come in a different order and require a lot of seeking to be read from disk. This noticeably hampers read throughput.

Currently, there is no readahead on the OSTs themselves, though the clients do readahead. If there are many clients doing reads, it would not be possible to do much readahead in any case, because of memory consumption (even a single 1 MB RPC of readahead for 1000 clients would consume 1 GB of RAM).
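Client-side readahead can be observed and adjusted per mount point; for example (max_read_ahead_mb is the standard client readahead limit, but the value shown is illustrative):

# show and set the per-file client readahead limit, in MB
lctl get_param 'llite.*.max_read_ahead_mb'
lctl set_param 'llite.*.max_read_ahead_mb=40'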

For file systems that use socklnd (TCP, Ethernet) as the interconnect, there is also additional CPU overhead, because the client cannot receive data without copying it from the network buffers. In the write case, the client can send data without the additional data copy. This means that the client is more likely to become CPU-bound during reads than during writes.