Latest revision as of 16:57, 23 February 2025
Introduction
This guide was originally provided by John Suykerbuyk and Seagate Technology in support of growing the Lustre community.
Lustre combines a custom in-kernel network stack, Lustre Networking (LNET), with its own object block devices built on local file systems such as ldiskfs or ZFS. Together, these components form the storage framework used to create a clustered file system.
Like any file on any other file system, a single file is made up of several component parts: the file and directory structure that indexes it (metadata), and the data blocks (objects) that hold its contents. The file system has to keep track of where those objects are located, and who and what can access those objects with what permissions (read, write, and execute). This extra information that describes a user's file is referred to as metadata.
Most conventional file systems combine file data blocks and their metadata on a single, contiguous block storage device. The performance and scalability of Lustre come from a design that aggregates many storage servers, through dedicated metadata and object servers, into a single file system that is a superset of its component storage nodes. Clients communicate with metadata servers over the LNET protocol and establish direct connections to file data stored on one or more object storage servers. This not only parallelizes a single client's access to file system data across multiple servers, but also services many clients concurrently across a virtualized file system that goes beyond conventional boundaries of size, capacity, and throughput.
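As an illustration of how object placement works, a file striped across multiple OSTs is laid out round-robin, RAID-0 style, in fixed-size chunks. The sketch below is illustrative arithmetic only; the stripe size and count are hypothetical example values, and on a real cluster the actual layout of a file is queried with `lfs getstripe`:

```shell
# Illustrative only: find which stripe object (OST) holds a given byte offset,
# for a file striped round-robin across stripe_count OSTs in stripe_size chunks.
stripe_size=$((1024 * 1024))        # 1 MiB chunks (a common default)
stripe_count=2                      # hypothetical two-OST layout
offset=$((3 * 1024 * 1024 + 5))     # a byte just past the 3 MiB boundary
ost_index=$(( (offset / stripe_size) % stripe_count ))
echo "byte $offset lives on stripe object $ost_index"
# → byte 3145733 lives on stripe object 1
```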
Lustre scales for capacity, redundancy, and concurrency by arranging and adding storage components. The metadata servers keep track of object locations, while facilitating client-to-storage connections and arbitration of access to file system data.
For more details on the architecture and internals of Lustre, refer to the following pages:
- Lustre Manual (https://www.lustre.org/documentation/)
- Lustre Architecture (PDF) (LustreArchitecture-v4.pdf)
- Understanding Lustre Internals, Second Edition (https://info.ornl.gov/sites/publications/Files/Pub166872.pdf)
Lustre Requirements
Lustre is only supported on Linux. Because Lustre implementation involves modifications to the Linux kernel, prebuilt packages are only available for a handful of kernels, which are compatible with Red Hat and Ubuntu. Nevertheless, several Linux distributions build their own Lustre packages. Because most feature development takes place on Red Hat compatible distributions, this document focuses on working with a RHEL compatible distribution.
You can group all Lustre file system components together on a single compute node, at the cost of performance, capacity, and redundancy.
Virtualization and Operating System Selection
This guide uses KVM virtualization to create a very simple Lustre cluster. It assumes that you have a working installation of qemu-kvm-1.5, virt-install, and qemu-kvm-tools-1.5 or later on a host machine running a RHEL-compatible Linux. A distribution compatible with RHEL 7 or RHEL 8 provides the best development platform for the purposes of this document.
This document uses CentOS 7, a Red Hat Enterprise Linux compatible distribution, to install Lustre. Rocky Linux 8, a Red Hat 8 compatible distribution, can also be used.
The CentOS 7.9 installation images are available here:
https://vault.centos.org/7.9.2009/isos/x86_64/
The download page for Rocky Linux 8.10:
https://rockylinux.org/download
This document discusses and demonstrates the steps involved in installing Lustre and provides bash-compatible scripts to use as reference for a semi-automated setup.
Creating The CentOS 7 KVM Virtual Machines
Installing an operating system involves a bootstrapping process: an installer operating system is first started from installation media and runs entirely in RAM, and is then used to install the "real" operating system on the target machine. Though there are many ways to do this, this guide does everything over simple HTTP (web) protocols, with no physical media except a floppy disk image we create to hold the installation instructions.
You need root access on the machine used to host the Lustre virtual machines.
- HINT: To avoid running as root or constantly providing passwords, add yourself to the group "wheel" and modify the /etc/sudoers file to enable passwordless sudo.
sudo usermod -aG wheel YOUR_USER_NAME
sudo visudo
- uncomment (remove the prefixed hash mark)
# %wheel ALL=(ALL) NOPASSWD: ALL
- which becomes:
%wheel ALL=(ALL) NOPASSWD: ALL
Before proceeding, create a directory to host the VM files.
mkdir -p ~/lustre_vms
cd ~/lustre_vms
Installing CentOS 7 for Lustre under KVM with the helper scripts
Save the create.centos7vm.sh script and the ks-base-centos7.cfg file to your lustre_vms directory.
When create.centos7vm.sh is run alongside the ks-base-centos7.cfg kickstart file, it creates a generic virtual machine ready to be configured as either a Lustre client or server node.
Run the script once for each of the three VMs, passing the name of the VM as the first parameter:
./create.centos7vm.sh lustre_client1
./create.centos7vm.sh lustre_mgs_mds
./create.centos7vm.sh lustre_oss1
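The three invocations above can also be driven from a loop. This is a sketch only: echo stands in for the real call so the loop structure is visible, with the actual helper invocation shown as a comment:

```shell
# Sketch: run create.centos7vm.sh once per VM role.
# echo stands in for the real call here; uncomment the helper line to use it.
for vm in lustre_client1 lustre_mgs_mds lustre_oss1; do
    echo "creating $vm"
    # ./create.centos7vm.sh "$vm"
done
```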
Once each VM is created, configure them for their roles. If you've followed the guide up to this point, you should have three CentOS 7 virtual machines configured and set up via the automated kickstart file. The kickstart file should have created a number of files in /root:
- install_lustre_client_from_whamcloud
- install_lustre_server_from_whamcloud
- configure_lustre_client
- configure_lustre_mgs_mdt_mds
- configure_lustre_oss
Connect to each of the running VMs and run the appropriate install scripts:
- sudo virsh console lustre_client1 (log on as root/seagate)
- ./install_lustre_client_from_whamcloud
- reboot
- sudo virsh console lustre_mgs_mds (log on as root/seagate)
- ./install_lustre_server_from_whamcloud
- reboot
- sudo virsh console lustre_oss1 (log on as root/seagate)
- ./install_lustre_server_from_whamcloud
- reboot
Connect to each of the running VMs and configure their roles. The order is important! Configure the MGS/MDS first, the OSS next, and finally a client:
- sudo virsh console lustre_mgs_mds (log on as root/seagate)
- ./configure_lustre_mgs_mdt_mds
- sudo virsh console lustre_oss1 (log on as root/seagate)
- ./configure_lustre_oss
- sudo virsh console lustre_client1
- ./configure_lustre_client
Installing CentOS 7 for Lustre under KVM without the helper scripts
This section can be skipped entirely if the helper scripts are used; it is intended solely to provide a deeper understanding of the steps involved.
Create the floppy disk image
This procedure uses a floppy disk image connected to the virtual machine to provide the setup and configuration instructions needed for installation.
Navigate to the directory where you want your virtual machine files to be stored, and then enter the following commands:
mkdir -p floppy.dir
chmod 777 floppy.dir
qemu-img create -f raw floppy.img 1440k
mkfs.msdos -s 1 floppy.img
sudo mount -o loop floppy.img floppy.dir
If everything completes without error, you should have a file, "floppy.img" whose contents are accessible in the directory "floppy.dir".
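A raw floppy image is nothing more than a 1440 KiB zero-filled file, so its size is easy to sanity-check. The sketch below produces an equivalent blank image with dd (useful if qemu-img is unavailable) and verifies the byte count; the filename matches the one used in this guide:

```shell
# Create a blank 1440 KiB raw image, equivalent to: qemu-img create -f raw floppy.img 1440k
dd if=/dev/zero of=floppy.img bs=1024 count=1440 2>/dev/null
# Sanity-check the size: 1440 * 1024 = 1474560 bytes
size=$(stat -c %s floppy.img)
echo "floppy.img is $size bytes"
```

You would still run mkfs.msdos on the image afterwards, exactly as shown above.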
- HINT: If qemu-img is not installed, then attempting to create the floppy image generates an error. To install qemu-img, enter the following command:
yum install -y qemu-img
The Kickstart file
The Kickstart file is used by the Anaconda installer built into Red Hat Enterprise compatible Linux distributions to automate the installation and configuration of the operating system that will host Lustre. You can find the full text of the kickstart file in the Scripts and Configuration Files section at the end of this document. Copy the configuration code into a file named ks-base-centos7.cfg in the floppy.dir directory.
The only thing that you must change is the hostname field in the line:
network --hostname=HOSTNAME
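Rather than editing the file by hand for every VM, the hostname line can be rewritten with sed, which is what the create.centos7vm.sh helper script does. A minimal sketch against a scratch copy of the file (ks-demo.cfg is a throwaway name used only for demonstration):

```shell
# Rewrite the kickstart hostname line in place (scratch file for demonstration)
KS=ks-demo.cfg
printf 'network --bootproto=dhcp --device=eth0 --ipv6=auto --activate\nnetwork --hostname=HOSTNAME\n' > "$KS"
sed -i "s/network.*--hostname=.*/network --hostname=lustre_oss1/" "$KS"
grep -- '--hostname' "$KS"
# → network --hostname=lustre_oss1
```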
When executed by anaconda, the kickstart does the following:
- Sets the hostname (Set a unique hostname for each virtual machine you create.)
- Configures the virtual machine to bridge to the host eth0 network interface.
- If your primary network connection on the host is not through eth0, then you must modify this line with the correct interface.
- Configures the virtual machine’s network interface to use DHCP.
- If you do not have a DHCP server, this line should be modified to use a static IP configuration.
- Sets the time zone to Denver, Colorado.
- You can find other valid locations in /usr/share/zoneinfo.
- Bootstraps the install from the CentOS vault repository for version 7.9.
- Disables SELinux (Security-Enhanced Linux).
- Lustre on SELinux is not entirely straightforward to configure.
- Creates a user named "dev" and adds the user to the wheel group.
- Modifies the /etc/sudoers file to allow passwordless sudo to root.
- Sets the password for both the 'dev' and 'root' users to "seagate".
- Performs a minimal install of CentOS 7.9, adding the packages needed to develop and debug Lustre.
- Creates a new repository entry for YUM that takes precedence over all others, from which to install Lustre.
- Creates several scripts in the root user's home directory to install and configure Lustre server and client nodes.
Manual Install of CentOS 7 under KVM
After completing the previous step, a directory called floppy.dir exists and contains the ks-base-centos7.cfg configuration file. The floppy.dir directory is also the mount point for the floppy.img disk image.
The following procedure creates three virtual machines:
- A client
- A combination MGS, MDS, and MDT
- An OSS with a single OST
Before configuring the virtualization settings and initiating the kickstart-based installer, you must change both the name of the VM in the virt-install command and the hostname in the kickstart file for each virtual machine. The three VMs are named as follows:
- lustre_mgs_mdt
- lustre_oss1
- lustre_client
It is highly recommended that, for each named VM, you do the following:
- Edit the kickstart file (ks-base-centos7.cfg) in the floppy.dir directory to set the hostname line.
network --hostname=(name of VM)
- Export the virtual machine name to the shell using the following command:
export VM_NAME=(name of VM)
- Create the virtual machine by entering the following command:
sudo virt-install \
    --connect qemu:///system \
    --name "$VM_NAME" \
    --virt-type=kvm \
    --memory 2048 \
    --vcpus=1 \
    --disk device=floppy,path="$PWD/floppy.img" \
    --disk device=disk,path="$PWD/$VM_NAME-sys.raw",size=15,format=raw \
    --disk device=disk,path="$PWD/$VM_NAME-data.raw",size=1,format=raw \
    --os-variant rhel7.9 \
    --location https://vault.centos.org/7.9.2009/os/x86_64/ \
    --noautoconsole \
    --graphics vnc,listen=0.0.0.0 \
    --accelerate \
    --network=bridge:br0 \
    --extra-args="console=tty0 console=ttyS0,115200 ksdevice=eth0 ks=hd:fd0:/ks.cfg" \
    --hvm
- Enter the following command to observe and monitor the boot and installation:
sudo virsh console (name of VM)
- NOTE: If virt-install is not installed, an error appears stating that it is not a valid command. To install virt-install, enter the following command:
sudo yum install -y virt-install
Troubleshooting:
- If you get an error about virt-install not being a valid command:
sudo yum install -y virt-install
- Likewise, if "virsh" is not found:
sudo yum install -y libvirt libvirt-client libvirt-daemon libvirt-docs
With a bit of luck, you should now have a running virtual machine that is installing itself completely from the web. At the end of the install, it will reboot, but the reboot might fail to restart. You can manually start the virtual machine and re-attach the serial console with the following command:
sudo virsh start (name of VM) --console
Manual Install of Lustre Components
The following details how to manually install Lustre without the script or having used the kickstart file.
First, create the yum repo file in:
/etc/yum.repos.d/whamcloud.repo
Populate it with the following information:
[lustre_latest_el7_client]
name=whamcloud_lustre_client
baseurl=https://downloads.whamcloud.com/public/lustre/lustre-2.13.0/el7/client/
enabled=1
priority=5
gpgcheck=0
[lustre_latest_el7_server]
name=whamcloud_lustre_server
baseurl=https://downloads.whamcloud.com/public/lustre/lustre-2.13.0/el7/server/
enabled=1
priority=5
gpgcheck=0
[e2fsprogs_latest]
name=whamcloud_e2fsprogs
baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el7/
enabled=1
priority=5
gpgcheck=0
Make sure yum-plugin-priorities is installed:
yum install yum-plugin-priorities
Install the Lustre server components on the MGS and OSS
rpm -e --nodeps kernel-firmware
yum install -y \
    kernel-*lustre \
    kernel-firmware*lustre \
    lustre-modules \
    libss \
    libcom_err \
    e2fsprogs \
    e2fsprogs-libs \
    lustre-osd-ldiskfs \
    lustre-osd-ldiskfs-mount \
    lustre
Configure lnet and enable Lustre:
echo "options lnet networks=tcp0(eth0)">/etc/modprobe.d/lnet.conf
chkconfig lnet --add
chkconfig lnet on
chkconfig lustre --add
chkconfig lustre on
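If your NIC is not eth0, parameterize the interface name when composing the module options. This sketch writes to a scratch file so it can be tried anywhere; on a real node the target path is /etc/modprobe.d/lnet.conf and IFACE is whatever interface your node actually uses (eth0 here is just the guide's assumption):

```shell
# Compose the lnet module options line for a given interface.
# Scratch file used for demonstration; the real target is /etc/modprobe.d/lnet.conf.
IFACE=eth0                  # substitute your real interface name, e.g. ens3
CONF=lnet.conf.demo
echo "options lnet networks=tcp0(${IFACE})" > "$CONF"
cat "$CONF"
# → options lnet networks=tcp0(eth0)
```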
Reboot the machine
shutdown -r now
You should now have a virtual machine that is ready to be configured as a Lustre server component (MGS, MDT, OSS).
Install the Lustre client components
rpm -e --nodeps kernel-firmware
yum install -y lustre-client-modules lustre-client
Configure lnet and enable Lustre:
echo "options lnet networks=tcp0(eth0)">/etc/modprobe.d/lnet.conf
chkconfig lnet --add
chkconfig lnet on
Reboot the machine
shutdown -r now
You should now have a virtual machine that is ready to be configured as a Lustre client.
Manually Configuring Lustre Cluster Machines
You need a minimum of three virtual machines, which means you'll have to repeat the manual VM creation steps above once for each machine. Make sure each machine is given a unique hostname!
It is highly recommended that, after creating the initial floppy disk image and kickstart file, you run the virt-install command once for each VM you want to create, modifying the hostname in the kickstart file each time before running virt-install.
Manually Configure as MGS, MDT, and MDS
We'll be combining the first three Lustre server functions (MGS, MDT, and MDS) on the first virtual machine node. Start the virtual machine and attach to its serial console, or connect through an ssh session.
To start a VM with serial console attached:
sudo virsh start lustre_mgs_mdt --console
Alternatively, ssh into the VM (kickstart sets all passwords to 'seagate')
ssh root@lustre_mgs_mdt
Configure the storage device, create the mount point, and mount it (this mirrors what the configure_lustre_mgs_mdt_mds helper script does):
mkfs.lustre --fsname=lustre1 --mgs --mdt --index=0 /dev/vdb
mkdir /mgsmdt
echo "/dev/vdb /mgsmdt lustre defaults,_netdev 0 0">>/etc/fstab
mount -a
Manually Configure the OSS
Run the following command to create and format the OST
mkfs.lustre --mgsnode=lustre_mgs_mdt --fsname=lustre1 --ost --index=0 /dev/vdb
To force a re-format, use the following:
mkfs.lustre --reformat --mgsnode=lustre_mgs_mdt --fsname=lustre1 --ost --index=0 /dev/vdb
Create the mount point, then edit /etc/fstab and add the following line:
mkdir /ossost
/dev/vdb /ossost lustre defaults,_netdev 0 0
Then mount the OST:
mount -a
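On the client side, the essential steps (as performed by the kickstart-generated configure_lustre_client helper) are composing an fstab entry that mounts the file system from the MGS NID, then running mount -a. This dry-run sketch only composes and prints the entry, using the hostnames and file system name from this guide, so it can be run anywhere without a live cluster:

```shell
# Dry run: compose the client /etc/fstab entry used by configure_lustre_client
MGS_NID="lustre_mgs_mdt@tcp0"
FSNAME="lustre1"
FSTAB_LINE="${MGS_NID}:/${FSNAME} /lustre lustre defaults,_netdev 0 0"
echo "$FSTAB_LINE"
# On the real client (as root): mkdir -p /lustre, append the line to /etc/fstab, then mount -a
```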
Scripts and Configuration Files
These files can be used to automate installation of the base Lustre virtual machines. They should be side-by-side in the same directory where you want to create the virtual machines.
They are also available at https://github.com/suykerbuyk/lustre_kvm_quickstart.git
ks-base-centos7.cfg
#version=RHEL7
# System authorization information
auth --enableshadow --passalgo=sha512
# Firewall configuration
firewall --disabled
# Use text mode install
text
# Install from web server
url --url=https://vault.centos.org/7.9.2009/os/x86_64/
# Disable Setup Agent on first boot
firstboot --disable
ignoredisk --only-use=vda
# Keyboard layouts
keyboard us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --disabled
# Network information
network --bootproto=dhcp --device=eth0 --ipv6=auto --activate
network --hostname=lustre_oss4
# Root password
rootpw --iscrypted $6$3eZfHsDX5bymyRNZ$ypdN0oRnpXYwnpXJYb60vFTNZhmSsYuTinZthopCXS6PQW0KdpZFe0zeG.OvkJhIPWh2z.7qkLvgQU3BrNVZ.1
# Do not configure the X Window System
skipx
# System timezone
timezone America/Denver --isUtc
user --groups=wheel --name=dev --uid=1000 --password=$6$3eZfHsDX5bymyRNZ$ypdN0oRnpXYwnpXJYb60vFTNZhmSsYuTinZthopCXS6PQW0KdpZFe0zeG.OvkJhIPWh2z.7qkLvgQU3BrNVZ.1 --iscrypted --gecos="dev"
# X Window System configuration information
xconfig --startxonboot
# Clear any existing partitioning
zerombr
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --driveorder=vda
autopart
# Partition clearing information
clearpart --all --initlabel --drives=vda
#reboot at the end
reboot

%packages
@base
@console-internet
@debugging
@development
@ftp-server
@core
@fonts
@x11
ElectricFence
asciidoc
audit-libs-devel
binutils-devel
bison
cmake
coreutils
elfutils-devel
elfutils-libelf-devel
ftp
gdb-gdbserver
gdisk
git-all
glibc-utils
hmaccalc
kernel-firmware
kexec-tools
libss
libss-devel
lftp
mc
mercurial
mgetty
mtools
nasm
newt-devel
nfs-utils
nmap
openssh-clients
openssh-server
perl-ExtUtils-Embed
python-devel
python-docutils
rpmdevtools
rpmlint
ruby-irb
screen
stunnel
syslinux
tree
tuned
tuned-utils
vim-enhanced
xmlto
yum-plugin-priorities
zlib-devel
zsh
-atmel-firmware
-b43-openfwwf
-gcc-gfortran
-iwl1000-firmware
-iwl3945-firmware
-iwl4965-firmware
-iwl5000-firmware
-iwl5150-firmware
-iwl6000-firmware
-iwl6050-firmware
-libertas-usb8388-firmware
-mysql-libs
-rt61pci-firmware
-rt73usb-firmware
-zd1211-firmware
%end

%post --log=/root/post_install.log
# enable passwordless sudo
sed -i 's/# %wheel.*ALL=(ALL).*NOPASSWD:.*ALL/%wheel\tALL=(ALL)\tNOPASSWD: ALL/' /etc/sudoers
# start sshd at boot
chkconfig sshd on
# Create the Whamcloud yum repository definition
cat >/etc/yum.repos.d/whamcloud.repo<<'REPO_DEFINITION'
[lustre_latest_el7_client]
name=whamcloud_lustre_client
baseurl=https://downloads.whamcloud.com/public/lustre/lustre-2.13.0/el7/client/
enabled=1
priority=5
gpgcheck=0
[lustre_latest_el7_server]
name=whamcloud_lustre_server
baseurl=https://downloads.whamcloud.com/public/lustre/lustre-2.13.0/el7/server/
enabled=1
priority=5
gpgcheck=0
[e2fsprogs_latest]
name=whamcloud_e2fsprogs
baseurl=https://downloads.whamcloud.com/public/e2fsprogs/latest/el7/
enabled=1
priority=5
gpgcheck=0
REPO_DEFINITION
# Create a script in the root directory to install Lustre server components.
# The heredoc delimiter is quoted so $(id -u) is not expanded at kickstart time.
cat >/root/install_lustre_server_from_whamcloud<<'LUSTRE_SERVER_INSTALL_SCRIPT'
#!/bin/sh
[ $(id -u) -ne 0 ] && echo "Please run as root" && exit 0
yum install -y yum-plugin-priorities
rpm -e --nodeps kernel-firmware
yum install -y \
    kernel-*lustre \
    kernel-firmware*lustre \
    lustre-modules \
    libss \
    libcom_err \
    e2fsprogs \
    e2fsprogs-libs \
    lustre-osd-ldiskfs \
    lustre-osd-ldiskfs-mount \
    lustre
echo "options lnet networks=tcp0(eth0)">/etc/modprobe.d/lnet.conf
chkconfig lnet --add
chkconfig lnet on
chkconfig lustre --add
chkconfig lustre on
echo Rebooting in 10 seconds - ctrl-c to abort
sleep 10
reboot
LUSTRE_SERVER_INSTALL_SCRIPT
chmod a+x /root/install_lustre_server_from_whamcloud
# Create a script in the root directory to install Lustre client components.
cat >/root/install_lustre_client_from_whamcloud<<'LUSTRE_CLIENT_INSTALL_SCRIPT'
#!/bin/sh
[ $(id -u) -ne 0 ] && echo "Please run as root" && exit 0
yum install -y yum-plugin-priorities
yum install -y lustre-client-modules lustre-client
echo "options lnet networks=tcp0(eth0)">/etc/modprobe.d/lnet.conf
chkconfig lnet --add
chkconfig lnet on
echo Rebooting in 10 seconds - ctrl-c to abort
sleep 10
reboot
LUSTRE_CLIENT_INSTALL_SCRIPT
chmod a+x /root/install_lustre_client_from_whamcloud
# Create a script in the root directory to configure a server node as an mgs, mdt and mds
cat >/root/configure_lustre_mgs_mdt_mds<<'LUSTRE_MGS_MDT_MDS_CONFIG'
#!/bin/sh
mkfs.lustre --fsname=lustre1 --mgs --mdt --index=0 /dev/vdb || exit 1
mkdir /mgsmdt || exit 1
echo "/dev/vdb /mgsmdt lustre defaults,_netdev 0 0">>/etc/fstab || exit 1
mount -a
LUSTRE_MGS_MDT_MDS_CONFIG
chmod a+x /root/configure_lustre_mgs_mdt_mds
# Create a script in the root directory to configure a server node as an oss
cat >/root/configure_lustre_oss<<'LUSTRE_OSS_CONFIG'
#!/bin/sh
mkfs.lustre --fsname=lustre1 --ost --mgsnode=lustre_mgs_mdt@tcp0 --index=0 /dev/vdb || exit 1
mkdir /ossost
echo "/dev/vdb /ossost lustre defaults,_netdev 0 0">>/etc/fstab || exit 1
mount -a
LUSTRE_OSS_CONFIG
chmod a+x /root/configure_lustre_oss
# Create a script in the root directory to configure a client node
cat >/root/configure_lustre_client<<'LUSTRE_CLIENT_CONFIG'
#!/bin/sh
[ ! -d /lustre ] && mkdir /lustre >/dev/null
sed -i '/lustre_mgs_mdt/d' /etc/fstab
echo "lustre_mgs_mdt@tcp0:/lustre1 /lustre lustre defaults,_netdev 0 0">>/etc/fstab
mount -a
LUSTRE_CLIENT_CONFIG
chmod a+x /root/configure_lustre_client
# Copy SSH keys for easy access.
mkdir -p /root/.ssh
cat >/root/.ssh/authorized_keys<<'SSH_AUTHORIZATION'
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9JUPjH/NuGDOwTn3c2NGHmnmHsUdZha/xYxPFuLzN9csYaywfhPa9vg6HhuJiNp7t9DYzTZc2+B3KL02JIPIqhkvdXCPy9wxzHb5u/yXPS5/USxdCYML+xsR5EcEPa/tl05R4kaK5ErivGxZWpsTnW2WsxVys/NekSJOVtj5rpbhkNE5qM+McTK30rEwUgU3JZ+4EI+FVk5pBlT+A2kuqpGTqvj33S3Z8VrPueOy8fWXpn2jhYb7qZFFBI84apM5BtjSCjGdSYKB2uI6WAUlL5shCpKKr76bwumMag4rPGa46u4CiWO7ov3c1nl5g2WPLx7QWf9xly/rFcoEup9Th
SSH_AUTHORIZATION
chmod 700 /root/.ssh
chmod 600 /root/.ssh/*
%end
create.centos7vm.sh
Copy and save this script to the same location where you saved the ks-base-centos7.cfg file. Make sure you set it as executable:
chmod a+x create.centos7vm.sh
#!/bin/sh
KICKSTART_FILE="$PWD/ks-base-centos7.cfg"
BOOTSTRAP_URL="https://vault.centos.org/7.9.2009/os/x86_64/"
INSTALL_URL="https://vault.centos.org/7.9.2009/os/x86_64/"
VM_NAME=$1
# Some default data disk sizes based on role (in gigabytes).
DEFAULT_DATA_DISK_SIZE=1
MDT_DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
MDS_DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
OSS_DATA_DISK_SIZE=15
DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
SYS_DISK_SIZE=15
# Common exit point for the script
die_badly() {
    echo "Something went wrong..."
    exit 1
}
# Checks to see if a directory is mounted
is_mounted() {
    mount | grep "$1" >/dev/null
    [ $? -eq 0 ] && echo 1 && return 1
    echo 0 && return 0
}
# Remove any existing floppy image artifacts.
clean_up_floppy_image() {
    if [ -d "$FLOPPY_DISK_IMAGE_DIR" ]
    then
        if [ $(is_mounted "$FLOPPY_DISK_IMAGE_DIR") -eq 1 ]
        then
            echo Unmounting $FLOPPY_DISK_IMAGE_DIR
            sudo umount "$FLOPPY_DISK_IMAGE_DIR" || die_badly
        fi
        rm -rf "$FLOPPY_DISK_IMAGE_DIR" || die_badly
    fi
    if [ -f "$FLOPPY_DISK_IMAGE_FILE" ]
    then
        rm -rf "$FLOPPY_DISK_IMAGE_FILE" || die_badly
    fi
}
# Create the floppy image to hold our kickstart file
create_floppy_image() {
    mkdir -p "$FLOPPY_DISK_IMAGE_DIR" || die_badly
    qemu-img create -f raw "$FLOPPY_DISK_IMAGE_FILE" 1440k || die_badly
    mkfs.msdos -s 1 "$FLOPPY_DISK_IMAGE_FILE" || die_badly
    sudo mount -o loop,uid=$CURRENT_UID,gid=$CURRENT_GID \
        "$FLOPPY_DISK_IMAGE_FILE" \
        "$FLOPPY_DISK_IMAGE_DIR" \
        || die_badly
}
# Verify we have our kickstart file
if [ ! -f "$KICKSTART_FILE" ]
then
    echo "Missing Kickstart file: $KICKSTART_FILE"
    die_badly
fi
if [ $# -eq 0 ]
then
    echo -n "Enter a name for the new CentOS 7 VM: "
    read VM_NAME
fi
DISK_PATH="$PWD/disk"
FLOPPY_DISK_IMAGE_FILE="$DISK_PATH/$VM_NAME-floppy.img"
FLOPPY_DISK_IMAGE_DIR="$DISK_PATH/$VM_NAME-floppy.dir"
CURRENT_UID=`id -u`
CURRENT_GID=`id -g`
echo "Creating new VM with the name: $VM_NAME"
echo "Preparing kickstart floppy disk image, root permissions will be required."
# Give this VM the same network name as its VM Name.
sed -i "s/network.*--hostname=.*/network --hostname=$VM_NAME/g" ks-base-centos7.cfg
if [ ! -d "$DISK_PATH" ]
then
    mkdir -p "$DISK_PATH" || die_badly
fi
clean_up_floppy_image
create_floppy_image
cp ks-base-centos7.cfg "$FLOPPY_DISK_IMAGE_DIR/ks.cfg" || die_badly
sudo umount "$FLOPPY_DISK_IMAGE_DIR"
rm -rf "$FLOPPY_DISK_IMAGE_DIR"
case "$VM_NAME" in
    *oss*)
        echo Configuring for OSS
        DATA_DISK_SIZE=$OSS_DATA_DISK_SIZE
        ;;
    *mdt*)
        echo Configuring for MDT
        DATA_DISK_SIZE=$MDS_DATA_DISK_SIZE
        ;;
    *mgs*)
        echo Configuring for MGS
        DATA_DISK_SIZE=$MDS_DATA_DISK_SIZE
        ;;
    *ost*)
        echo Configuring for OST
        DATA_DISK_SIZE=$OSS_DATA_DISK_SIZE
        ;;
    *client*)
        echo Configuring for client
        DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
        ;;
    *)
        echo Don\'t know what to do with $1
        DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
        ;;
esac
sudo virt-install \
    --connect qemu:///system \
    --name $VM_NAME \
    --virt-type=kvm \
    --memory 2048 \
    --vcpus=1 \
    --disk device=floppy,path="$FLOPPY_DISK_IMAGE_FILE" \
    --disk device=disk,path="$DISK_PATH/$VM_NAME-sys.raw",size=$SYS_DISK_SIZE,format=raw \
    --disk device=disk,path="$DISK_PATH/$VM_NAME-data.raw",size=$DATA_DISK_SIZE,format=raw \
    --os-variant rhel7.9 \
    --location "$BOOTSTRAP_URL" \
    --noautoconsole \
    --graphics vnc,listen=0.0.0.0 \
    --accelerate \
    --network=bridge:br0 \
    --extra-args="console=tty0 console=ttyS0,115200 ksdevice=eth0 ks=hd:fd0:/ks.cfg" \
    --hvm || exit 1
sudo virsh console $VM_NAME
echo "Use:"
echo "  sudo virsh start $VM_NAME --console"
echo "To start the $VM_NAME, or the following to attach to a running instance:"
echo "  sudo virsh console $VM_NAME"
echo "To force a shutdown of $VM_NAME:"
echo "  sudo virsh destroy $VM_NAME"
echo "And to completely remove it from this host:"
echo "  sudo virsh undefine $VM_NAME --remove-all-storage"