KVM Quick Start Guide

= Introduction =

This guide was originally provided by John Suykerbuyk and Seagate Technology in support of growing the Lustre community.

Lustre combines a custom in-kernel network stack called Lustre Networking (LNET) with its own object block devices, which are built on local file systems such as ldiskfs or ZFS. Together, these components make up the storage framework used to create a clustered file system.

Like a file on any other file system, a Lustre file is made up of several component parts: the file and directory structure that indexes the file, and the data blocks (objects) that hold its contents. The file system has to keep track of where those objects are located, and of who and what can access them and with what permissions (read, write, and execute). This extra information that describes a user's file is referred to as metadata.

Most conventional file systems combine file data blocks and their metadata on a single, contiguous block storage device. The performance and scalability of Lustre comes from a design that aggregates many storage servers, through dedicated metadata and object servers, to form a single file system that is a superset of its component storage nodes. Clients communicate via the LNET protocol with metadata servers and establish direct connections to file data stored on one or more object storage servers. This not only allows a single client's access to file system data to be parallelized across multiple servers, but also allows many clients to be serviced concurrently across a virtualized file system that goes beyond conventional boundaries of size, capacity, and throughput.

Lustre scales for capacity, redundancy, and concurrency by arranging and adding storage components. The metadata servers keep track of object locations, while facilitating client-to-storage connections and arbitration of access to file system data.

= What are the parts of a Lustre Cluster? =

In the descriptions that follow, there is often a symmetry in which each particular component of a Lustre cluster is broken up into a server and one or more targets. Server components communicate with other Lustre components via LNET, while the associated targets function as storage for the service or server. Targets can present themselves as a wide range of block storage devices, from directly attached disk drives and RAID arrays to other network technologies.

It is important to remember that the server provides access to, and abstracts, the underlying target that contains the data.

Management Server (MGS)
The MGS is the top level entry and access point through which client-visible Lustre operations take place. Because Lustre delegates specific file system storage and management to task-specific networked nodes, the MGS serves as the primary configuration point for all nodes on the file system.

The MGS also serves as a facilitator for the imperative recovery (IR) feature.

Metadata Server (MDS)
An MDS communicates the state and location of all stored entities in a Lustre file system. It answers client requests and tracks locations and state of data stored throughout the cluster.

An MDS responds to client requests for metadata information, and tracks and records that information. A Lustre file system can have multiple metadata servers. The number of clients, service requirements of the internal cluster, and performance requirements of the deployment determine the number of MDSs you need to install.

Metadata Target (MDT)
The MDT is the primary block storage device used to store all file system metadata. One or more metadata targets can exist on a Lustre file system. When multiple MDTs are present, they are aggregated by the MDS to provide the storage needed to track the metadata for the entire cluster. The bigger the Lustre file system, the larger the total storage space of the combined MDTs will need to be.

Object Storage Server (OSS)
The OSS handles the actual file data I/O requests. It is the primary interface into the actual backing storage on each of the storage device nodes that make up a Lustre cluster.

Object Storage Target (OST)
One or more object storage targets make up the physical data storage served up by the OSSs. An OSS may use one or more OSTs to store the data objects that make up a single Lustre file, either striped across multiple OSTs or stored whole on a single OST.
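For illustration, striping can be inspected and controlled from a client with the lfs utility. This is a hedged sketch, not part of the setup below: the mount point /lustre, the directory name, and the stripe parameters are placeholders, and the commands must be run on a node with the Lustre client installed.

```shell
# Stripe new files created in this directory across 2 OSTs,
# 1 MiB per stripe (placeholder path and values).
lfs setstripe -c 2 -S 1M /lustre/striped_dir

# Show which OSTs hold the objects of an existing file.
lfs getstripe /lustre/striped_dir/somefile
```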

Lustre Client
A Lustre client is the end-user node of a Lustre file system; a Lustre cluster can support thousands of clients at the same time.

The client uses its LNET connections to the MGS, MDS, and OSSs to enable mounting the composite storage of the cluster as a single POSIX compatible file system on a local mount point.
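As a sketch of what that mount looks like in practice (the node name lustre_mgs_mdt, file system name lustre1, and mount point /lustre are the examples used later in this guide, not fixed values):

```shell
# On a client node: mount the whole cluster at a single mount point.
mkdir -p /lustre
mount -t lustre lustre_mgs_mdt@tcp0:/lustre1 /lustre

# The aggregate capacity of all OSTs appears as one file system.
df -h /lustre
```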

Internally, the client uses the MGS as the authoritative source of global configuration data, including connection information for the MDS and OSSs. The MDS provides clients with file system metadata and the locations of objects within the OSSs. Each OSS hosts one or more OSTs, which in turn store the objects that make up the file data of interest to the end user.

= Lustre Requirements =

Lustre is only supported on Linux. Because the Lustre implementation involves modifications to the Linux kernel, prebuilt packages are only available for a handful of kernels, compatible with Red Hat and Ubuntu distributions. Nevertheless, several Linux distributions build their own Lustre packages. Because most feature development takes place on Red Hat compatible distributions, this document focuses on working with a RHEL compatible distribution.

You can group a number of Lustre file system components together on a single compute node, depending on performance, capacity, and redundancy requirements. You can consolidate nodes by combining the MGS and MDS onto a single node. You can also configure an OSS to host one or more of its own OSTs. It is important, however, that you never combine OSS and MDS on a single node because of concurrency issues.

= Virtualization and Operating System Selection =

This guide uses KVM virtualization to create a very simple Lustre cluster. It assumes that you have a working installation of qemu-kvm-1.5, virt-install, and qemu-kvm-tools-1.5 or later on a host machine running a RHEL-compatible Linux. A version of Linux compatible with RHEL 6 or RHEL 7 provides the best development platform for the purposes of this document.

For the purposes of this document, we will be using CentOS 6, a Red Hat Enterprise Linux compatible distribution, to install Lustre.

The installation ISO images are available here:
 * http://isoredirect.centos.org/centos/6/isos/x86_64/

The package repository used for installation is available here:
 * http://mirror.centos.org/centos/6/os/x86_64/

This document discusses and demonstrates the steps involved in installing Lustre, and provides bash-compatible scripts to use as reference for a semi-automated setup.

= Creating The CentOS 6 KVM Virtual Machines =

Installing an operating system involves a bootstrapping process: first, an installer operating system is started from installation media and runs entirely in RAM; it is then used to install the "real" operating system on the target machine. Though there are many ways to do this, this guide does everything over simple HTTP with no physical media; the only local artifact is a floppy disk image we create to hold the installation instructions.

You need root access on the machine used to host the Lustre virtual machines.


 * HINT: To avoid running as root or constantly providing passwords, add yourself to the group "wheel" and modify the /etc/sudoers file to enable passwordless sudo.

sudo usermod -aG wheel YOUR_USER_NAME
sudo visudo
 * uncomment (remove prefixed hash mark)

# %wheel ALL=(ALL)       NOPASSWD: ALL
 * which becomes:

%wheel ALL=(ALL)       NOPASSWD: ALL
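The same uncomment step can be scripted with sed; a sketch against a scratch copy, since the real /etc/sudoers should only ever be edited through visudo:

```shell
# Work on a scratch copy for illustration; on the real host the file is
# /etc/sudoers and visudo is the safe way to edit it.
printf '# %%wheel\tALL=(ALL)\tNOPASSWD: ALL\n' > sudoers.demo

# Strip the leading hash mark and space from the %wheel line.
sed -i 's/^# *\(%wheel.*NOPASSWD: ALL\)/\1/' sudoers.demo
cat sudoers.demo
```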

Before proceeding, create a directory to host the VM files.

mkdir -p ~/lustre_vms
cd ~/lustre_vms

= Installing CentOS 6 for Lustre under KVM with the helper scripts =

Save the create.centos6vm.sh script and the ks-base-centos6.cfg file to your lustre_vms directory.

When create.centos6vm.sh is run alongside the ks-base-centos6.cfg kickstart file, it creates a generic virtual machine ready to be configured as either a Lustre client or server node.

For each of the three VMs, run the script once with the name of the VM as the first parameter:

./create.centos6vm.sh lustre_client1
./create.centos6vm.sh lustre_mgs_mdt
./create.centos6vm.sh lustre_oss1

Once each VM is created, configure it for its role. If you've followed the guide up to this point, you should have three virtual machines based on CentOS 6, configured and set up via the automated kickstart file. The kickstart file should have created a number of scripts in /root:
 * install_lustre_client_from_wham_cloud
 * install_lustre_server_from_whamcloud
 * configure_lustre_client
 * configure_lustre_mgs_mdt_mds
 * configure_lustre_oss

Connect to each of the running VMs and run the appropriate install script, then configure the roles. The order is important! Configure the MGS/MDS first, the OSS next, and finally a client:
 * sudo virsh console lustre_client1 (log on as root/seagate)
 * ./install_lustre_client_from_wham_cloud
 * reboot
 * sudo virsh console lustre_mgs_mdt (log on as root/seagate)
 * ./install_lustre_server_from_whamcloud
 * reboot
 * sudo virsh console lustre_oss1 (log on as root/seagate)
 * ./install_lustre_server_from_whamcloud
 * reboot
 * sudo virsh console lustre_mgs_mdt (log on as root/seagate)
 * ./configure_lustre_mgs_mdt_mds
 * sudo virsh console lustre_oss1 (log on as root/seagate)
 * ./configure_lustre_oss
 * sudo virsh console lustre_client1
 * ./configure_lustre_client

= Installing CentOS 6 for Lustre under KVM without the helper scripts =
This section can be skipped entirely if the helper scripts are used. It is intended solely to provide a deeper understanding of the steps involved.

Create the floppy disk image
This procedure uses a floppy disk image connected to the virtual machine to provide the setup and configuration instructions needed for installation.

Navigate to the directory where you want your virtual machine files to be stored, and then enter the following commands:

mkdir -p floppy.dir
chmod 777 floppy.dir
qemu-img create -f raw floppy.img 1440k
mkfs.msdos -s 1 floppy.img
sudo mount -o loop floppy.img floppy.dir

If everything completes without error, you should have a file, "floppy.img" whose contents are accessible in the directory "floppy.dir".


 * HINT: If qemu-img is not installed, then attempting to create the floppy image generates an error. To install qemu-img, enter the following command:

yum install -y qemu-img

The Kickstart file
The Kickstart file is used by the Anaconda installer built into Red Hat Enterprise Linux compatible distributions to automate the installation and configuration of the operating system that will host Lustre. You can find the full text of the kickstart file in the Scripts and Configuration Files section at the end of this document. Copy the configuration code into a file named ks-base-centos6.cfg in the floppy.dir directory.

The only thing that you must change is the hostname field in the line: network --hostname=HOSTNAME
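Updating the hostname line can be scripted with sed, which is essentially what the create.centos6vm.sh helper does; demonstrated here against a scratch file rather than the real kickstart:

```shell
# Scratch stand-in for ks-base-centos6.cfg, for illustration only.
printf 'network --bootproto=dhcp --device=eth0\nnetwork --hostname=HOSTNAME\n' > ks.demo

# Rewrite the hostname line in place for the VM being built.
sed -i 's/network --hostname=.*/network --hostname=lustre_oss1/' ks.demo
grep hostname ks.demo
```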

When executed by anaconda, the kickstart file does the following:
 * Sets the hostname. (Set a unique hostname for each virtual machine you create.)
 * Configures the virtual machine to bridge to the host eth0 network interface.
 * If your primary network connection on the host is not through eth0, then you must modify this line with the correct interface.
 * Configures the virtual machine's network interface to use DHCP.
 * If you do not have a DHCP server, modify this line to use a static IP configuration.
 * Sets the time zone to America/Denver.
 * You can find other valid locations in /usr/share/zoneinfo.
 * Bootstraps the install from the CentOS mirror repository for version 6.6.
 * Disables SELinux (Security-Enhanced Linux).
 * Lustre on SELinux is not entirely straightforward to configure.
 * Creates a user named "dev" and makes the user part of the wheel group.
 * Modifies the /etc/sudoers file to allow passwordless sudo to root.
 * Sets the password for both the 'dev' and 'root' users to "seagate".
 * Performs a minimal install of CentOS 6.6 while adding the packages needed to develop and debug Lustre.
 * Creates a new repository entry for YUM that takes precedence over all others, from which to install Lustre.
 * Creates several scripts in the root user's home directory to install and configure Lustre server and client nodes.

Manual Install of CentOS 6 under KVM
After completing the previous step, a directory called floppy.dir exists and contains the ks-base-centos6.cfg configuration file. The floppy.dir directory is also the mount point for the floppy.img disk image.

The following procedure creates three virtual machines:
 * A client
 * A combination MGS, MDS, and MDT
 * An OSS with a single OST

Before configuring the virtualization settings and initiating the kickstart-based installer, you must change both the name of the VM in the virt-install command and the hostname in the kickstart file for each virtual machine. The three VMs are named as follows:
 * lustre_mgs_mdt
 * lustre_oss1
 * lustre_client

It is highly recommended that, for each named VM, you do the following:
 * 1) Edit the kickstart file (ks-base-centos6.cfg) in the floppy.dir directory to set the hostname line.
 * 2) Create the virtual machine by entering the following command:

sudo virt-install \
    --connect qemu:///system \
    --name "$VM_NAME" \
    --virt-type=kvm \
    --memory 2048 \
    --vcpus=1 \
    --disk device=floppy,path="$PWD/floppy.img" \
    --disk device=disk,path="$PWD/$VM_NAME-sys.raw",size=15,format=raw \
    --disk device=disk,path="$PWD/$VM_NAME-data.raw",size=1,format=raw \
    --os-variant rhel6.6 \
    --location http://mirror.centos.org/centos/6.6/os/x86_64/ \
    --noautoconsole \
    --graphics vnc,listen=0.0.0.0 \
    --accelerate \
    --network=bridge:br0 \
    --extra-args="console=tty0 console=ttyS0,115200 ksdevice=eth0 ks=hd:fd0:/ks.cfg" \
    --hvm
 * 3) Immediately after the previous (long) command, enter the following to observe and monitor the boot and installation:

sudo virsh console (name of VM)

 * 'NOTE:' If virt-install is not installed, then an error appears notifying you that it is not a valid command. To install virt-install, enter the following command:

sudo yum install -y virt-install

Troubleshooting:
 * If you get an error about virt-install not being a valid command:

sudo yum install -y virt-install

 * Likewise, if "virsh" is not found:

sudo yum install -y libvirt libvirt-client libvirt-daemon libvirt-docs

With a bit of luck, you should now have a running virtual machine that is installing itself completely from the web. At the end of the install it will reboot, but the reboot might fail to restart the VM. You can manually start the virtual machine and re-attach the serial console with the following command:

sudo virsh start (name of VM) --console

Manual Install of Lustre Components
The following details how to manually install Lustre without the helper script or the kickstart file.

First, create the yum repo file in: /etc/yum.repos.d/whamcloud.repo

Populate it with the following information:

[lustre_latest_el6_client]
name=whamcloud_lustre_client
baseurl=https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/el6/client/
enabled=1
priority=5
gpgcheck=0

[lustre_latest_el6_server]
name=whamcloud_lustre_server
baseurl=https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/el6/server/
enabled=1
priority=5
gpgcheck=0

[e2fsprogs_latest]
name=whamcloud_e2fsprogs
baseurl=https://downloads.hpdd.intel.com/public/e2fsprogs/latest/el6/RPMS/
enabled=1
priority=5
gpgcheck=0

Make sure yum-plugin-priorities is installed:

yum install yum-plugin-priorities
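In a script this file can be written with a heredoc, which is how the kickstart's %post section creates it on the VMs; sketched here against the current directory rather than the real /etc/yum.repos.d target:

```shell
# Write one repo stanza to a local file for illustration;
# on the VM the destination is /etc/yum.repos.d/whamcloud.repo.
cat > whamcloud.repo <<'REPO_DEFINITION'
[lustre_latest_el6_server]
name=whamcloud_lustre_server
baseurl=https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/el6/server/
enabled=1
priority=5
gpgcheck=0
REPO_DEFINITION

grep baseurl whamcloud.repo
```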

Install the Lustre server components on the MGS and OSS
rpm -e --nodeps kernel-firmware
yum install -y \
    kernel-*lustre \
    kernel-firmware*lustre \
    lustre-modules \
    libss \
    libcom_err \
    e2fsprogs \
    e2fsprogs-libs \
    lustre-osd-ldiskfs \
    lustre-osd-ldiskfs-mount \
    lustre

Configure lnet and enable Lustre:

echo "options lnet networks=tcp0(eth0)">/etc/modprobe.d/lnet.conf
chkconfig lnet --add
chkconfig lnet on
chkconfig lustre --add
chkconfig lustre on

Reboot the machine:

shutdown -r now

You should now have a virtual machine that is ready to be configured as a Lustre server component (MGS, MDT, OSS).
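Before formatting any targets, it can help to confirm that the Lustre kernel and LNET actually came up on the rebooted server. A hedged check to run on the server VM, assuming the tcp0(eth0) LNET configuration above:

```shell
# Run on the rebooted server VM.
uname -r            # should show a kernel with "lustre" in its name

# Load LNET and bring the network up if it is not already.
modprobe lnet
lctl network up

# Print this node's network identifiers, e.g. 192.168.1.10@tcp
lctl list_nids
```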

Install the Lustre client components
rpm -e --nodeps kernel-firmware
yum install -y lustre-client-modules lustre-client

Configure lnet and enable Lustre:

echo "options lnet networks=tcp0(eth0)">/etc/modprobe.d/lnet.conf
chkconfig lnet --add
chkconfig lnet on

Reboot the machine:

shutdown -r now

You should now have a virtual machine that is ready to be configured as a Lustre client.

Manually Configuring Lustre Cluster Machines
You need a minimum of three virtual machines. This means you'll have to repeat the VM creation steps above once for each machine. Make sure each machine is given a unique hostname!

It is highly recommended that, after creating the initial floppy disk image and kickstart file, you run the virt-install command once for each VM to create, modifying the hostname in the kickstart file each time before running virt-install.

Manually Configure as MGS, MDT, and MDS
We'll be combining the first three Lustre server functions into the first virtual machine node. Start the virtual machine and attach to its serial console, or connect through an ssh session.

To start a VM with serial console attached:

sudo virsh start lustre_mgs_mdt --console

Alternatively, ssh into the VM (kickstart sets all passwords to 'seagate'):

ssh root@lustre_mgs_mdt

Configure the storage device:

mkfs.lustre --fsname=lustre1 --mgs --mdt --index=0 /dev/vdb
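After formatting, the target is brought online simply by mounting the device with file system type lustre; the mount point below is a placeholder:

```shell
# Mounting the formatted device starts the MGS/MDS services on this node.
mkdir -p /mgsmdt        # placeholder mount point
mount -t lustre /dev/vdb /mgsmdt

# Confirm the target is mounted.
df -t lustre
```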

Manually Configure the OSS
Run the following command to create and format the OST:

mkfs.lustre --mgsnode=lustre_mgs_mdt --fsname=lustre1 --ost --index=0 /dev/vdb

To force a reformat, use the following:

mkfs.lustre --reformat --mgsnode=lustre_mgs_mdt --fsname=lustre1 --ost --index=0 /dev/vdb

Edit /etc/fstab and add the following line:

/dev/vdb       /ossost         lustre defaults,_netdev 0 0
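The _netdev option matters here: it defers the mount until networking (and therefore LNET) is available at boot. A small sketch against a scratch file showing the entry's fields; on the OSS the real target is /etc/fstab:

```shell
# Scratch file for illustration only.
echo "/dev/vdb       /ossost         lustre defaults,_netdev 0 0" > fstab.demo

# Field 3 is the file system type, field 2 the mount point.
awk '$3 == "lustre" {print $2}' fstab.demo
```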

= Scripts and Configuration Files =

These files can be used to automate installation of the base Lustre virtual machines. They should be side-by-side in the same directory where you want to create the virtual machines.

They are also available at https://github.com/suykerbuyk/lustre_kvm_quickstart.git

ks-base-centos6.cfg
#version=RHEL6
# System authorization information
auth --enableshadow --passalgo=sha512
# Firewall configuration
firewall --disabled
# Use text mode install
text
# Install from web server
url --url=http://mirror.centos.org/centos/6.6/os/x86_64/
# Disable Setup Agent on first boot
firstboot --disable
ignoredisk --only-use=vda
# Keyboard layouts
keyboard us
# System language
lang en_US.UTF-8
# SELinux configuration
selinux --disabled
# Network information
network --bootproto=dhcp --device=eth0 --ipv6=auto --activate
network --hostname=lustre_oss4
# Root password
rootpw --iscrypted $6$3eZfHsDX5bymyRNZ$ypdN0oRnpXYwnpXJYb60vFTNZhmSsYuTinZthopCXS6PQW0KdpZFe0zeG.OvkJhIPWh2z.7qkLvgQU3BrNVZ.1
# Do not configure the X Window System
skipx
# System timezone
timezone America/Denver --isUtc
user --groups=wheel --name=dev --uid=1000 --password=$6$3eZfHsDX5bymyRNZ$ypdN0oRnpXYwnpXJYb60vFTNZhmSsYuTinZthopCXS6PQW0KdpZFe0zeG.OvkJhIPWh2z.7qkLvgQU3BrNVZ.1 --iscrypted --gecos="dev"
# X Window System configuration information
xconfig --startxonboot
# Clear any existing partitioning
zerombr
# System bootloader configuration
bootloader --append=" crashkernel=auto" --location=mbr --driveorder=vda
autopart
# Partition clearing information
clearpart --all --initlabel --drives=vda
# reboot at the end
reboot

%packages
@base
@console-internet
@debugging
@development
@ftp-server
@core
@fonts
@x11
ElectricFence
asciidoc
audit-libs-devel
binutils-devel
bison
cmake
coreutils
elfutils-devel
elfutils-libelf-devel
ftp
gdb-gdbserver
gdisk
git-all
glibc-utils
hmaccalc
kernel-firmware
kexec-tools
libss
libss-devel
lftp
mc
mercurial
mgetty
mtools
nasm
newt-devel
nfs-utils
nmap
openssh-clients
openssh-server
perl-ExtUtils-Embed
python-devel
python-docutils
rpmdevtools
rpmlint
ruby-irb
screen
stunnel
syslinux
tree
tuned
tuned-utils
vim-enhanced
xmlto
yum-plugin-priorities
zlib-devel
zsh
-atmel-firmware
-b43-openfwwf
-gcc-gfortran
-iwl1000-firmware
-iwl3945-firmware
-iwl4965-firmware
-iwl5000-firmware
-iwl5150-firmware
-iwl6000-firmware
-iwl6050-firmware
-libertas-usb8388-firmware
-mysql-libs
-rt61pci-firmware
-rt73usb-firmware
-zd1211-firmware
%end

%post --log /root/post_install.log
# enable passwordless sudo
sed -i 's/# %wheel.*ALL=(ALL).*NOPASSWD:.*ALL/%wheel\tALL=(ALL)\tNOPASSWD: ALL/' /etc/sudoers
# start sshd at boot
chkconfig sshd on
# Create install script for Lustre from wham cloud
cat >/etc/yum.repos.d/whamcloud.repo<<'REPO_DEFINITION'
[lustre_latest_el6_client]
name=whamcloud_lustre_client
baseurl=https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/el6/client/
enabled=1
priority=5
gpgcheck=0
[lustre_latest_el6_server]
name=whamcloud_lustre_server
baseurl=https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/el6/server/
enabled=1
priority=5
gpgcheck=0
[e2fsprogs_latest]
name=whamcloud_e2fsprogs
baseurl=https://downloads.hpdd.intel.com/public/e2fsprogs/latest/el6/RPMS/
enabled=1
priority=5
gpgcheck=0
REPO_DEFINITION
# Create a script in the root directory to install Lustre server components.
cat >/root/install_lustre_server_from_whamcloud<<'LUSTRE_SERVER_INSTALL_SCRIPT'
#!/bin/sh
rpm -e --nodeps kernel-firmware
yum install -y kernel-*lustre kernel-firmware*lustre lustre-modules \
    libss libcom_err e2fsprogs e2fsprogs-libs \
    lustre-osd-ldiskfs lustre-osd-ldiskfs-mount lustre
echo "options lnet networks=tcp0(eth0)">/etc/modprobe.d/lnet.conf
chkconfig lnet --add
chkconfig lnet on
chkconfig lustre --add
chkconfig lustre on
echo Rebooting in 10 seconds - ctrl-c to abort
sleep 10
reboot
LUSTRE_SERVER_INSTALL_SCRIPT
chmod a+x /root/install_lustre_server_from_whamcloud
# Create a script in the root directory to install Lustre client components.
cat >/root/install_lustre_client_from_wham_cloud<<'LUSTRE_CLIENT_INSTALL_SCRIPT'
#!/bin/sh
rpm -e --nodeps kernel-firmware
yum install -y lustre-client-modules lustre-client
echo "options lnet networks=tcp0(eth0)">/etc/modprobe.d/lnet.conf
chkconfig lnet --add
chkconfig lnet on
echo Rebooting in 10 seconds - ctrl-c to abort
sleep 10
reboot
LUSTRE_CLIENT_INSTALL_SCRIPT
chmod a+x /root/install_lustre_client_from_wham_cloud
# Create a script in the root directory to configure a server node as an mgs, mdt and mds
cat >/root/configure_lustre_mgs_mdt_mds<<'LUSTRE_MGS_MDT_MDS_CONFIG'
#!/bin/sh
mkfs.lustre --fsname=lustre1 --mgs --mdt --index=0 /dev/vdb || exit 1
mkdir -p /mgsmdt
sed -i '/mgsmdt/d' /etc/fstab
echo "/dev/vdb       /mgsmdt         lustre defaults,_netdev 0 0">>/etc/fstab || exit 1
mount -a
LUSTRE_MGS_MDT_MDS_CONFIG
chmod a+x /root/configure_lustre_mgs_mdt_mds
# Create a script in the root directory to configure a server node as an oss
cat >/root/configure_lustre_oss<<'LUSTRE_OSS_CONFIG'
#!/bin/sh
mkfs.lustre --mgsnode=lustre_mgs_mdt --fsname=lustre1 --ost --index=0 /dev/vdb || exit 1
mkdir -p /ossost
sed -i '/ossost/d' /etc/fstab
echo "/dev/vdb       /ossost         lustre defaults,_netdev 0 0">>/etc/fstab || exit 1
mount -a
LUSTRE_OSS_CONFIG
chmod a+x /root/configure_lustre_oss
# Create a script in the root directory to configure a client node
cat >/root/configure_lustre_client<<'LUSTRE_CLIENT_CONFIG'
#!/bin/sh
mkdir -p /lustre
umount /lustre 2>/dev/null
sed -i '/lustre_mgs_mdt/d' /etc/fstab
echo "lustre_mgs_mdt@tcp0:/lustre1 /lustre	lustre defaults,_netdev 0 0">>/etc/fstab
mount -a
LUSTRE_CLIENT_CONFIG
chmod a+x /root/configure_lustre_client
# Copy SSH keys for easy access.
mkdir -p /root/.ssh
cat >/root/.ssh/authorized_keys<<SSH_AUTHORIZATION
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9JUPjH/NuGDOwTn3c2NGHmnmHsUdZha/xYxPFuLzN9csYaywfhPa9vg6HhuJiNp7t9DYzTZc2+B3KL02JIPIqhkvdXCPy9wxzHb5u/yXPS5/USxdCYML+xsR5EcEPa/tl05R4kaK5ErivGxZWpsTnW2WsxVys/NekSJOVtj5rpbhkNE5qM+McTK30rEwUgU3JZ+4EI+FVk5pBlT+A2kuqpGTqvj33S3Z8VrPueOy8fWXpn2jhYb7qZFFBI84apM5BtjSCjGdSYKB2uI6WAUlL5shCpKKr76bwumMag4rPGa46u4CiWO7ov3c1nl5g2WPLx7QWf9xly/rFcoEup9Th
SSH_AUTHORIZATION
chmod 700 /root/.ssh
chmod 600 /root/.ssh/*
%end

create.centos6vm.sh
Copy and save this script to the same location where you saved the ks-base-centos6.cfg file. Make sure you set it as executable:

chmod a+x create.centos6vm.sh

#!/bin/sh
KICKSTART_FILE="$PWD/ks-base-centos6.cfg"
BOOTSTRAP_URL="http://mirror.centos.org/centos/6.6/os/x86_64/"
INSTALL_URL="http://mirror.centos.org/centos/6.6/os/x86_64/"
VM_NAME=$1
# Some default data disk sizes based on role (in gigabytes).
DEFAULT_DATA_DISK_SIZE=1
MDT_DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
MDS_DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
OSS_DATA_DISK_SIZE=15
DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
SYS_DISK_SIZE=15
# Common exit point for the script
die_badly() {
	echo "Something went wrong..."
	exit 1
}
# Checks to see if a directory is mounted
is_mounted() {
	mount | grep "$1" >/dev/null
	[ $? -eq 0 ] && echo 1 && return 1
	echo 0 && return 0
}
# Remove any existing floppy image artifacts.
clean_up_floppy_image() {
	if [ -d "$FLOPPY_DISK_IMAGE_DIR" ]
	then
		if [ $(is_mounted "$FLOPPY_DISK_IMAGE_DIR") -eq 1 ]
		then
			echo Unmounting $FLOPPY_DISK_IMAGE_DIR
			sudo umount "$FLOPPY_DISK_IMAGE_DIR" || die_badly
		fi
		rm -rf "$FLOPPY_DISK_IMAGE_DIR" || die_badly
	fi
	if [ -f "$FLOPPY_DISK_IMAGE_FILE" ]
	then
		rm -rf "$FLOPPY_DISK_IMAGE_FILE" || die_badly
	fi
}
# Create the floppy image to hold our kickstart file
create_floppy_image() {
	mkdir -p "$FLOPPY_DISK_IMAGE_DIR" || die_badly
	qemu-img create -f raw "$FLOPPY_DISK_IMAGE_FILE" 1440k || die_badly
	mkfs.msdos -s 1 "$FLOPPY_DISK_IMAGE_FILE" || die_badly
	sudo mount -o loop,uid=$CURRENT_UID,gid=$CURRENT_GID \
		"$FLOPPY_DISK_IMAGE_FILE" \
		"$FLOPPY_DISK_IMAGE_DIR" \
		|| die_badly
}
# Verify we have our kickstart file
if [ ! -f "$KICKSTART_FILE" ]
then
	echo "Missing Kickstart file: $KICKSTART_FILE"
	die_badly
fi
if [ $# -eq 0 ]
then
	echo -n "Enter a name for the new Centos 6 VM: "
	read VM_NAME
fi
DISK_PATH="$PWD/disk"
FLOPPY_DISK_IMAGE_FILE="$DISK_PATH/$VM_NAME-floppy.img"
FLOPPY_DISK_IMAGE_DIR="$DISK_PATH/$VM_NAME-floppy.dir"
CURRENT_UID=`id -u`
CURRENT_GID=`id -g`
echo "Creating new VM with the name: $VM_NAME"
echo "Preparing kickstart floppy disk image, root permissions will be required."
# Give this VM the same network name as its VM Name.
sed -i "s/network.*--hostname=.*/network --hostname=$VM_NAME/g" ks-base-centos6.cfg
if [ ! -d "$DISK_PATH" ]
then
	mkdir -p "$DISK_PATH" || die_badly
fi
clean_up_floppy_image
create_floppy_image
cp ks-base-centos6.cfg "$FLOPPY_DISK_IMAGE_DIR/ks.cfg" || die_badly
sudo umount "$FLOPPY_DISK_IMAGE_DIR"
rm -rf "$FLOPPY_DISK_IMAGE_DIR"
case "$VM_NAME" in
	*oss*)
		echo Configuring for OSS
		DATA_DISK_SIZE=$OSS_DATA_DISK_SIZE
		;;
	*mdt*)
		echo Configuring for MDT
		DATA_DISK_SIZE=$MDS_DATA_DISK_SIZE
		;;
	*mgs*)
		echo Configuring for MGS
		DATA_DISK_SIZE=$MDS_DATA_DISK_SIZE
		;;
	*ost*)
		echo Configuring for OST
		DATA_DISK_SIZE=$OSS_DATA_DISK_SIZE
		;;
	*client*)
		echo Configuring for client
		DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
		;;
	*)
		echo Don\'t know what to do with $1
		DATA_DISK_SIZE=$DEFAULT_DATA_DISK_SIZE
		;;
esac
sudo virt-install \
	--connect qemu:///system \
	--name $VM_NAME \
	--virt-type=kvm \
	--memory 2048 \
	--vcpus=1 \
	--disk device=floppy,path="$FLOPPY_DISK_IMAGE_FILE" \
	--disk device=disk,path="$DISK_PATH/$VM_NAME-sys.raw",size=$SYS_DISK_SIZE,format=raw \
	--disk device=disk,path="$DISK_PATH/$VM_NAME-data.raw",size=$DATA_DISK_SIZE,format=raw \
	--os-variant rhel6.6 \
	--location "$BOOTSTRAP_URL" \
	--noautoconsole \
	--graphics vnc,listen=0.0.0.0 \
	--accelerate \
	--network=bridge:br0 \
	--extra-args="console=tty0 console=ttyS0,115200 ksdevice=eth0 ks=hd:fd0:/ks.cfg" \
	--hvm || exit 1
sudo virsh console $VM_NAME
echo "Use:"
echo " sudo virsh start $VM_NAME --console"
echo "To start the $VM_NAME, or the following to attach to a running instance:"
echo " sudo virsh console $VM_NAME"
echo "To force a shutdown of $VM_NAME:"
echo " sudo virsh destroy $VM_NAME"
echo "And to completely remove it from this host:"
echo " sudo virsh undefine $VM_NAME --remove-all-storage"