Installing the Lustre Software


The process of installing Lustre software is straightforward, but there are several options that need to be considered. These options are driven by the fact that Lustre is implemented as kernel modules and has dependencies on other kernel modules in order to operate correctly. Specifically, Lustre is layered on top of block storage devices formatted either as LDISKFS (a variant of EXT4) or ZFS, and Lustre's RDMA networking driver leverages interfaces to the device drivers for RDMA-capable fabrics such as InfiniBand and Intel Omni-Path Architecture (OPA).

This article demonstrates how to install Lustre with LDISKFS and/or ZFS support, but will not specifically cover third party device drivers such as OFED, since there are a number of variations on implementation depending on the hardware vendor.

For more comprehensive coverage of third party network driver support, refer to the Compiling Lustre article, which will show how to create these packages. Once created, the process for installing these customised packages is very similar to the process described here.

Note: It is recommended that the host operating system always be installed with the latest kernel release supported by the operating system vendor. This ensures that the kernel is protected against known security vulnerabilities and has the latest bug fixes. The Lustre developers work to ensure that Lustre remains compatible across operating system kernel updates for supported platforms. For the latest information on compatibility for Linux kernels, refer to the Lustre ChangeLog file.

Lustre and OpenZFS

OpenZFS support for Lustre Object Storage Devices (OSDs) was introduced in Lustre version 2.4. ZFS is an integrated file system and storage management platform with strong data integrity and volume management features that complement the performance and scalability of Lustre.

The Linux kernel does not require Lustre-specific patches when using ZFS as the storage platform for Lustre servers. The ZFS kernel modules will be compiled against the kernel currently running on the target host.

The installation process for ZFS-based builds is more complex than for LDISKFS due to complications arising from an incompatibility in the distribution clauses of the licenses for the Linux kernel and OpenZFS. Linux is distributed under the terms of the GPLv2, while OpenZFS is governed by the CDDL. Both GPL and CDDL are free software open source licenses, but certain clauses create an incompatibility that prevents their distribution together in binary form. See the note at the end of this section for information on the license incompatibility.

Fortunately, by making use of a software distribution framework called Dynamic Kernel Module Support (DKMS), OpenZFS can be packaged in a format that is easy for system integrators and operators to build and install. DKMS also ensures that any kernel modules are automatically recompiled if the kernel is updated.

The documentation here will focus on using DKMS. Refer to the Compiling Lustre article for information on creating binary kernel module packages for ZFS on Linux.

For the DKMS mechanism to work, compiler tools and some additional libraries will be needed on each OpenZFS-based Lustre server. DKMS recompiles DKMS-enabled kernel modules whenever a kernel update is installed, which means the compiler toolchain must be present on all systems using the OpenZFS file system. The kernel-devel and kernel-headers packages for any new Linux kernel are also required.

Note: The CDDL (the license of OpenZFS) and GPLv2 (the license of Linux) are considered incompatible by the FSF (the authors of the GPL; see https://www.gnu.org/licenses/license-list.html#CDDL), but this incompatibility does not prohibit end users from using OpenZFS and Linux together in ways that do not invoke it. The Lustre community does not distribute compiled binaries of OpenZFS kernel modules for Linux. Consider seeking legal advice for any activities that might be considered “distribution” under GPLv2.

Using YUM to Manage Local Software Provisioning

To streamline the installation process, the Lustre packages can be copied to an HTTP server on the network and incorporated into local YUM repositories.

Using YUM repositories streamlines the distribution of software packages, aiding provisioning and configuration automation, and simplifying tasks such as auditing and updating.

The following instructions can be used to help establish a web server as a YUM repository host for the Lustre packages. The examples make use of the default directory structure for an Apache HTTP server on RHEL / CentOS 7. NGINX and other web servers may use different directory structures to store content.

Note: The installation processes used in this article assume that a YUM repo definition for the Lustre packages has been configured on each machine where Lustre will be installed. To install the Lustre software packages without using YUM, follow the process below to download the packages, then copy them to each Lustre machine. Use the command yum localinstall <rpm package> [...] to install the downloaded packages instead of the regular yum commands. When using the yum localinstall command, the full file name for each package is required; a sketch of this approach follows.
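
For illustration only, a minimal sketch of a repository-free installation on a server node. The host name mds1 and the staging directory /tmp/lustre-rpms are assumptions, the mirror path under /var/www/html/repo matches the mirroring steps below, and the shell globs expand to the full package file names that yum localinstall requires:

# Stage the mirrored server packages on the target machine (host name
# mds1 is illustrative), then install them from the local files:
ssh mds1 mkdir -p /tmp/lustre-rpms
scp /var/www/html/repo/lustre-server/*.rpm mds1:/tmp/lustre-rpms/
ssh mds1 'yum --nogpgcheck localinstall /tmp/lustre-rpms/*.rpm'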

  1. Create a temporary YUM repository definition. This will be used to assist with the initial acquisition of Lustre and related packages.
    cat >/tmp/lustre-repo.conf <<\__EOF
    [lustre-server]
    name=lustre-server
    baseurl=https://downloads.whamcloud.com/public/lustre/latest-release/el7/server
    # exclude=*debuginfo*
    gpgcheck=0
    
    [lustre-client]
    name=lustre-client
    baseurl=https://downloads.whamcloud.com/public/lustre/latest-release/el7/client
    # exclude=*debuginfo*
    gpgcheck=0
    
    [e2fsprogs-wc]
    name=e2fsprogs-wc
    baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el7
    # exclude=*debuginfo*
    gpgcheck=0
    __EOF
    

    Note: The above example references the latest Lustre release available. To use a specific version, replace latest-release in the [lustre-server] and [lustre-client] baseurl variables with the version required, e.g., lustre-2.10.1. Always use the latest e2fsprogs package unless directed otherwise.

    Note: With the release of Lustre version 2.10.1, it is possible to use patchless kernels for Lustre servers running LDISKFS. The patchless LDISKFS server distribution does not include a Linux kernel; instead, patchless servers use the kernel distributed with the operating system. At present, the patchless version does not support project quotas, although support is expected in a future release. To use patchless kernels for the Lustre servers, replace the string server with patchless-ldiskfs-server at the end of the [lustre-server] baseurl. For example:

    baseurl=https://downloads.whamcloud.com/public/lustre/latest-release/el7/patchless-ldiskfs-server
    

    Note: To cut down on the size of the download when testing, uncomment the exclude lines. This omits the download of the debuginfo packages, which can be large. Nevertheless, it is generally a good idea to pull these packages in as well, so that they are readily available to aid debugging.

  2. Use the reposync command (distributed in the yum-utils package) to download mirrors of the Lustre repositories to the web server:
    mkdir -p /var/www/html/repo
    cd /var/www/html/repo
    reposync -c /tmp/lustre-repo.conf -n \
    -r lustre-server \
    -r lustre-client \
    -r e2fsprogs-wc
    
  3. Create the repository metadata:
    cd /var/www/html/repo
    for i in e2fsprogs-wc lustre-client lustre-server; do
    (cd $i && createrepo .)
    done
    
  4. Create a YUM repository definition file. The following script creates a file containing repository definitions for the Lustre packages, and stores it in the web server static content directory. This makes it easy to distribute to the Lustre servers and clients.

    Review the content and adjust according to the requirements of the target environment. Run the script on the web server host:

    hn=`hostname --fqdn`
    cat >/var/www/html/lustre.repo <<__EOF
    [lustre-server]
    name=lustre-server
    baseurl=https://$hn/repo/lustre-server
    enabled=0
    gpgcheck=0
    proxy=_none_
    
    [lustre-client]
    name=lustre-client
    baseurl=https://$hn/repo/lustre-client
    enabled=0
    gpgcheck=0
    
    [e2fsprogs-wc]
    name=e2fsprogs-wc
    baseurl=https://$hn/repo/e2fsprogs-wc
    enabled=0
    gpgcheck=0
    __EOF
    

    Change $hn and the repository file paths as required. The above example assumes that the repositories are located in the default content directory for an Apache HTTP server (/var/www/html on RHEL / CentOS). Make sure that the $hn variable matches the host name that the Lustre servers and clients will use to access the YUM web server. The resulting file needs to be copied to each machine that requires the Lustre software.

    Embedding Lustre version numbers in the repository names and paths, rather than using the generic names shown above, is a matter of preference. If versioned definitions are used, they will naturally need to be updated as new versions of Lustre are released; a hypothetical example follows.
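
    As an illustration only, a version-pinned server stanza might look like the following (the Lustre version 2.10.1 is an assumption, and the mirror directory on the web server would need to carry the same versioned name):

    [lustre-server-2.10.1]
    name=lustre-server-2.10.1
    baseurl=https://$hn/repo/lustre-server-2.10.1
    enabled=0
    gpgcheck=0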

  5. Apply any configuration changes that may be necessary for the web server to incorporate the new repository directories. The configuration may need to be reloaded, or the web service restarted, when done; see the example below.
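
    For example, with the Apache HTTP server on a RHEL / CentOS host (the service name httpd is assumed):

    systemctl reload httpd
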
  6. Copy the Lustre repo definition file onto each of the Lustre servers and clients, into the directory /etc/yum.repos.d/. Utilities such as curl and wget can be used to retrieve the file from the web server, as part of a configuration management system rule/promise or during system provisioning; a minimal example follows.
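
    For example, using curl (the host name yum.example.com is a placeholder for the YUM web server configured above):

    curl -o /etc/yum.repos.d/lustre.repo https://yum.example.com/lustre.repo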

Using DKMS

DKMS provides a dynamic framework for managing kernel modules that are not included as part of the standard Linux kernel for RHEL or CentOS distributions. With DKMS, modules are automatically compiled from source for each kernel that is installed on the target machine. If a new kernel is installed, DKMS ensures that the modules are updated automatically to run with the new kernel. By automatically updating the kernel modules, DKMS can simplify maintenance of the machine's software.

There is a cost for this automatic maintenance: since DKMS compiles the kernel modules from source code, the software development toolchain must also be installed on each machine that will use DKMS. In addition, Lustre's configure script makes decisions during execution about what optional features of Lustre to enable, based on the development packages that have been installed. If a full suite of optional features is required, then the development libraries for those features must be included in the OS payload.

The following command will install the additional packages that are required to facilitate DKMS-based installations. These packages must be installed on every machine that will use DKMS to install the Lustre software:

yum install \
asciidoc audit-libs-devel automake bc \
binutils-devel bison device-mapper-devel elfutils-devel \
elfutils-libelf-devel expect flex gcc gcc-c++ git \
glib2 glib2-devel hmaccalc keyutils-libs-devel krb5-devel ksh \
libattr-devel libblkid-devel libselinux-devel libtool \
libuuid-devel libyaml-devel lsscsi make ncurses-devel \
net-snmp-devel net-tools newt-devel numactl-devel \
parted patchutils pciutils-devel perl-ExtUtils-Embed \
pesign python-devel redhat-rpm-config rpm-build systemd-devel \
tcl tcl-devel tk tk-devel wget xmlto yum-utils zlib-devel

Note: Additional packages may be added as dependencies of those listed above. Refer to Establishing a Build Environment in the Compiling Lustre article for a comprehensive set of packages to install when compiling Lustre from source.
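
Once DKMS-packaged modules such as Lustre or ZFS have been installed, the dkms utility can be used to confirm that the modules were built and installed against the running kernel. The output below is illustrative only; module names, versions, and kernel strings will depend on what is actually installed:

dkms status
# Illustrative output:
#   lustre, 2.10.1, 3.10.0-693.2.2.el7.x86_64, x86_64: installed
#   zfs, 0.7.1, 3.10.0-693.2.2.el7.x86_64, x86_64: installed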

Lustre Server Software Installation

Select one of the procedures described in the following sections and install the Lustre software on each of the machines that will be used as Lustre servers.

Lustre Servers with Both LDISKFS and ZFS OSD Support

Note: This configuration provides the widest range of compatibility with the different storage types supported by Lustre, and is most useful for server upgrade and migration purposes, or for broad compatibility with software such as Intel Manager for Lustre.

  1. Install the Lustre e2fsprogs distribution:
    yum --nogpgcheck --disablerepo=* --enablerepo=e2fsprogs-wc \
    install e2fsprogs
    
  2. Install EPEL repository support:
    yum -y install epel-release
    
  3. Follow the instructions from the ZFS on Linux project to install the ZFS YUM repository definition. Use the DKMS package repository (the default).

    For example:

    yum -y install \
    http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
    

    Note: The RPM package name changes with each release of Red Hat Enterprise Linux (RHEL). At the time of writing, the current release of RHEL is 7.4.

  4. Install the Lustre-patched kernel packages. Ensure that the Lustre repository is picked for the kernel packages, by disabling the OS repos:
    yum --nogpgcheck --disablerepo=base,extras,updates \
    --enablerepo=lustre-server install \
    kernel \
    kernel-devel \
    kernel-headers \
    kernel-tools \
    kernel-tools-libs \
    kernel-tools-libs-devel
    
  5. Generate a persistent hostid on the machine, if one does not already exist. This is needed to help protect ZFS zpools against simultaneous imports on multiple servers. For example:
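    # Read any hostid already recorded in /etc/hostid as a hex string;
    # generate a new /etc/hostid only if it is absent or stale: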
    hid=`[ -f /etc/hostid ] && od -An -tx /etc/hostid|sed 's/ //g'`
    [ "$hid" = `hostid` ] || genhostid
    
  6. Reboot the node.
    reboot
    
  7. Install Lustre, and the LDISKFS and ZFS kmod packages:
    yum --nogpgcheck --enablerepo=lustre-server install \
    kmod-lustre-osd-ldiskfs \
    lustre-dkms \
    lustre-osd-ldiskfs-mount \
    lustre-osd-zfs-mount \
    lustre \
    lustre-resource-agents \
    zfs
    
  8. Load the Lustre and ZFS kernel modules to verify that the software has installed correctly:
    modprobe -v zfs
    modprobe -v lustre
    
  9. Before continuing to LNet configuration, unload the Lustre modules from the kernel:
    lustre_rmmod
    

Lustre Servers with ZFS OSD Support

  1. Install EPEL repository support:
    yum -y install epel-release
    
  2. Follow the instructions from the ZFS on Linux project to install the ZFS YUM repository definition. Use the DKMS package repository (the default).

    For example:

    yum -y install \
    http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
    

    Note: The RPM package name changes with each release of Red Hat Enterprise Linux (RHEL). At the time of writing, the current release of RHEL is 7.4.

  3. Install the kernel packages that match the latest supported version for the Lustre release:
    yum install \
    kernel \
    kernel-devel \
    kernel-headers \
    kernel-abi-whitelists \
    kernel-tools \
    kernel-tools-libs \
    kernel-tools-libs-devel
    

    It may be necessary to specify the kernel package version number in order to ensure that a kernel that is compatible with Lustre is installed. For example, Lustre 2.10.1 has support for RHEL kernel 3.10.0-693.2.2.el7:

    VER="3.10.0-693.2.2.el7"
    yum install \
    kernel-$VER \
    kernel-devel-$VER \
    kernel-headers-$VER \
    kernel-abi-whitelists-$VER \
    kernel-tools-$VER \
    kernel-tools-libs-$VER \
    kernel-tools-libs-devel-$VER
    

    Refer to the Lustre ChangeLog for the list of supported kernels.

  4. Generate a persistent hostid on the machine, if one does not already exist. This is needed to help protect ZFS zpools against simultaneous imports on multiple servers. For example:
    hid=`[ -f /etc/hostid ] && od -An -tx /etc/hostid|sed 's/ //g'`
    [ "$hid" = `hostid` ] || genhostid
    
  5. Reboot the node.
    reboot
    
  6. Install the packages for Lustre and ZFS:
    yum --nogpgcheck --enablerepo=lustre-server install \
    lustre-dkms \
    lustre-osd-zfs-mount \
    lustre \
    lustre-resource-agents \
    zfs
    
  7. Load the Lustre and ZFS kernel modules to verify that the software has installed correctly:
    modprobe -v zfs
    modprobe -v lustre
    
  8. Upon verification, unload the Lustre modules from the kernel:
    lustre_rmmod
    

Lustre Servers with LDISKFS OSD Support

  1. Install the Lustre e2fsprogs distribution:
    yum --nogpgcheck --disablerepo=* --enablerepo=e2fsprogs-wc \
    install e2fsprogs
    
  2. Install the Lustre-patched kernel packages. Ensure that the Lustre repository is picked for the kernel packages, by disabling the OS repos:
    yum --nogpgcheck --disablerepo=base,extras,updates \
    --enablerepo=lustre-server install \
    kernel \
    kernel-devel \
    kernel-headers \
    kernel-tools \
    kernel-tools-libs \
    kernel-tools-libs-devel
    
  3. Reboot the node:
    reboot
    
  4. Install the LDISKFS kmod and other Lustre packages:
    yum --nogpgcheck --enablerepo=lustre-server install \
    kmod-lustre \
    kmod-lustre-osd-ldiskfs \
    lustre-osd-ldiskfs-mount \
    lustre \
    lustre-resource-agents
    
  5. Load the Lustre kernel modules to verify that the software has installed correctly:
    modprobe -v lustre
    
  6. Upon verification, unload the Lustre modules from the kernel:
    lustre_rmmod
    

Lustre Client Software Installation

The Lustre client software comprises a package containing the kernel modules and separate packages for user-space tools used to manage the client software. The Lustre clients do not require a "Lustre-patched" kernel, which simplifies installation.

Execute the following steps on each machine that will run the Lustre client software:

  1. Install the kernel packages that match the latest supported version for the Lustre release:
    yum install \
    kernel \
    kernel-devel \
    kernel-headers \
    kernel-abi-whitelists \
    kernel-tools \
    kernel-tools-libs \
    kernel-tools-libs-devel
    

    It may be necessary to specify the kernel package version number in order to ensure that a kernel that is compatible with Lustre is installed. For example, Lustre 2.10.1 has support for RHEL kernel 3.10.0-693.2.2.el7:

    VER="3.10.0-693.2.2.el7"
    yum install \
    kernel-$VER \
    kernel-devel-$VER \
    kernel-headers-$VER \
    kernel-abi-whitelists-$VER \
    kernel-tools-$VER \
    kernel-tools-libs-$VER \
    kernel-tools-libs-devel-$VER
    

    Refer to the Lustre ChangeLog for the list of supported kernels.

  2. Reboot the node:
    reboot
    
  3. Install the Lustre client packages:
    • For DKMS installs:
      1. First, install the EPEL repository definition. EPEL provides the DKMS software:
        yum install epel-release
        
      2. Install the Lustre client user-space tools and DKMS kernel module package:
        yum --nogpgcheck --enablerepo=lustre-client install \
        lustre-client-dkms \
        lustre-client
        
    • For binary kernel module (kmod) installs, run the following command:
      yum --nogpgcheck --enablerepo=lustre-client install \
      kmod-lustre-client \
      lustre-client
      
  4. Load the Lustre kernel modules to verify that the software has installed correctly:
    modprobe -v lustre
    
  5. Upon verification, unload the Lustre modules from the kernel:
    lustre_rmmod