Lustre with Virtualbox install
Introduction
This page provides an up-to-date installation guide for Lustre using VirtualBox and Rocky Linux. It is useful for users who want quick access to the Lustre software and a running system for testing, code development and more.
Building Lustre with LDISKFS (as of 2024-05-20)
The following guide uses VirtualBox 7.0, Rocky Linux 8.9 and Lustre 2.15.63 on a 64-bit operating system.
Setting up VirtualBox
Previous knowledge of VirtualBox is helpful, but not required for this tutorial. As previously stated, we will be using VirtualBox 7.0 and Rocky Linux 8.9. We recommend the following minimum hardware settings for the VM:
- 4-8 GB RAM
- 4 CPU cores
- 30 GB hard drive space
After installing Rocky Linux from the ISO, we will change the network settings in VirtualBox to enable SSH forwarding. In VirtualBox go to Settings -> Network and make sure the adapter is attached to NAT. Then click on Port Forwarding and add a new rule called "SSH forwarding". The protocol should be TCP. For this tutorial we will set the host port to 1122, but it can be set to anything. Set the guest port to 22 and leave the Host IP/Guest IP fields empty.
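If you prefer to set this up from the host's command line, the same rule can be added with VBoxManage. This is a minimal sketch; it assumes the VM is named "lustre-server" and that a user named "rocky" exists inside the guest.

# Run on the host while the VM is powered off
VBoxManage modifyvm "lustre-server" --natpf1 "SSH forwarding,tcp,,1122,,22"
# Once the guest is up, connect through the forwarded port:
ssh -p 1122 rocky@localhost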
Setting up Rocky Linux
By default Rocky Linux does not have its internet connection enabled. If you have installed the GUI version of Rocky, simply go to the settings and enable the connection. Also go to the advanced settings and check "enable by default" to avoid having to repeat this step after every reboot. However, if you are using the minimal install ISO, the quickest way to get the connection up and running is to use the
sudo nmtui
command. Enable the wired connection, then edit the connection to enable the automatically connect option as well. While in the nmtui menu we will also set the hostname to "lustre-server" for clarity. After you have set the hostname, reboot the VM; we are now ready to begin installing Lustre.
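For a scriptable alternative to the interactive menu, the same settings can be applied with nmcli and hostnamectl. This is a sketch, assuming the NAT interface and its connection profile are named enp0s3 (the usual name for the first adapter on a VirtualBox Rocky guest); check yours with nmcli device.

# Bring the wired connection up and make it persistent across reboots
sudo nmcli device connect enp0s3
sudo nmcli connection modify enp0s3 connection.autoconnect yes
# Set the hostname, then reboot
sudo hostnamectl set-hostname lustre-server
sudo reboot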
Installing Lustre (Client and Server)
Installing dependencies
To build Lustre, specific packages and dependencies must be downloaded on the build server. Use the following commands to install the prerequisite software tools required for Lustre.
sudo dnf install git libtool flex bison wget
sudo dnf --enablerepo=devel install libmount-devel libyaml-devel libnl3-devel e2fsprogs-devel
Use the following command to install the kernel packages required for LDISKFS using dnf. It is important that the entire set of packages comes from the same version (currently v4.18.0).
sudo dnf install \
  https://build.whamcloud.com/job/lustre-master/arch=x86_64,build_type=server,distro=el8.9,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/RPMS/x86_64/kernel-4.18.0-513.18.1.el8_lustre.x86_64.rpm \
  https://build.whamcloud.com/job/lustre-master/arch=x86_64,build_type=server,distro=el8.9,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/RPMS/x86_64/kernel-core-4.18.0-513.18.1.el8_lustre.x86_64.rpm \
  https://build.whamcloud.com/job/lustre-master/arch=x86_64,build_type=server,distro=el8.9,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/RPMS/x86_64/kernel-debuginfo-4.18.0-513.18.1.el8_lustre.x86_64.rpm \
  https://build.whamcloud.com/job/lustre-master/arch=x86_64,build_type=server,distro=el8.9,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/RPMS/x86_64/kernel-debuginfo-common-x86_64-4.18.0-513.18.1.el8_lustre.x86_64.rpm \
  https://build.whamcloud.com/job/lustre-master/arch=x86_64,build_type=server,distro=el8.9,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/RPMS/x86_64/kernel-devel-4.18.0-513.18.1.el8_lustre.x86_64.rpm \
  https://build.whamcloud.com/job/lustre-master/arch=x86_64,build_type=server,distro=el8.9,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/RPMS/x86_64/kernel-headers-4.18.0-513.18.1.el8_lustre.x86_64.rpm \
  https://build.whamcloud.com/job/lustre-master/arch=x86_64,build_type=server,distro=el8.9,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/RPMS/x86_64/kernel-modules-4.18.0-513.18.1.el8_lustre.x86_64.rpm \
  https://build.whamcloud.com/job/lustre-master/arch=x86_64,build_type=server,distro=el8.9,ib_stack=inkernel/lastSuccessfulBuild/artifact/artifacts/RPMS/x86_64/kernel-modules-internal-4.18.0-513.18.1.el8_lustre.x86_64.rpm \
  https://dl.fedoraproject.org/pub/epel/8/Everything/x86_64/Packages/p/p7zip-16.02-20.el8.x86_64.rpm \
  https://dl.fedoraproject.org/pub/epel/8/Everything/x86_64/Packages/q/quilt-0.66-2.el8.noarch.rpm
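Since a version mismatch between these packages will break the LDISKFS build, it is worth confirming afterwards that every kernel package installed at the same Lustre-patched release. A quick check, assuming the versions above:

# All kernel* packages should report the same 4.18.0-513.18.1.el8_lustre release
rpm -qa 'kernel*' | sort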
The last change we need to make is to update e2fsprogs to the Lustre-patched version. This can be done by editing the dnf.conf file
vi /etc/dnf/dnf.conf
and adding
[Lustre-e2fsprogs]
name=Lustre-e2fsprogs
baseurl=http://downloads.whamcloud.com/public/e2fsprogs/latest/el$releasever/
gpgcheck=0
enabled=1
After saving the changes to the dnf.conf file, we can install the patched version.
sudo dnf update e2fsprogs
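To confirm that the Lustre-patched build was picked up rather than the stock Rocky package, query the installed version. This is a sketch; the exact version string will vary, but Whamcloud builds carry a "wc" suffix in the release field.

# The release field should contain "wc" for the Whamcloud build
rpm -q e2fsprogs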
Everything should now be installed, and we can move on to compiling Lustre.
Compiling and Building Lustre
To obtain the Lustre source use the following commands:
mkdir /usr/src/lustre-head
cd /usr/src/lustre-head
git clone git://git.whamcloud.com/fs/lustre-release.git /usr/src/lustre-head
This will clone the repository into the directory 'lustre-head'. Before commencing the build process, make sure that the current directory is 'lustre-head', or wherever the repository has been cloned (if the target directory is omitted from the git clone command, the repository will be cloned into a directory named 'lustre-release').
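Cloning master gives you the latest development code. If you would rather build the specific release this guide uses, you can check out its tag; this is a sketch, and the exact tag name should be confirmed by listing the tags first.

cd /usr/src/lustre-head
# List available release tags, then check out the one matching the desired version
git tag -l '*2.15*'
git checkout 2.15.63   # assumed tag name; use the exact name printed above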
Run the following commands to build Lustre:
sh autogen.sh
./configure
sudo make
sudo make install
The make process might take a while; this is normal. Reboot the VM one last time after 'make install' completes. After the reboot, Lustre has been successfully installed.
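Before moving on to the mount test, you can optionally confirm that the freshly built modules load. A minimal check, assuming the reboot brought up the Lustre-patched kernel:

# The running kernel should carry the _lustre suffix from the patched packages
uname -r
# Load the Lustre module stack and confirm it registered
sudo modprobe lustre
lsmod | grep lustre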
Testing and Build Confirmation
To ensure that the build was successful, we can go into the tests folder and run the llmount.sh script to mount the LDISKFS targets:
cd /usr/src/lustre-head/lustre/tests
sudo ./llmount.sh
After running this script, we should be able to see the Lustre targets mounted in '/mnt':
cd /mnt
ls
If you can see "lustre lustre-mds1 lustre-ost1 lustre-ost2", everything has been built properly. If you would like to run the other tests that are included with Lustre, we need to add a dummy user:
sudo groupadd -g 500 group500
sudo useradd -u 500 -g 500 runas
You can now also run all of the tests that are included in the lustre/tests folder.
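As a quick example, a single test can be run through the auster wrapper in the same folder, and the test filesystem can be torn down afterwards. This is a sketch; auster and llmountcleanup.sh ship in lustre/tests, though the exact option set may vary between Lustre versions.

cd /usr/src/lustre-head/lustre/tests
# Run only test 1 of the sanity suite, with verbose output
sudo ./auster -v sanity --only 1
# Unmount and clean up the test filesystem when finished
sudo ./llmountcleanup.sh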