Lustre with ZFS Install
Introduction
This page provides information on installing Lustre with a ZFS backend. You are encouraged to add your own version, either as a separate section or by editing this page into a general guide.
Helpful links
- http://zfsonlinux.org/lustre-configure-single.html
- http://www.ufb.rug.nl/ger/docs/lustre-zfs.txt
- https://github.com/chaos/lustre/commit/04a38ba7 - ZFS and HA
SSEC Example
This version applies to systems with JBODs where ZFS manages the disks directly, without a Dell RAID controller in between. This guide is specific to a single installation at UW SSEC: versions have changed since it was written, and we use Puppet to distribute various software packages and configurations. It is included here because some of the information may be useful to others.
- Lustre Server Prep Work
  - OS Installation (RHEL6)
    - You must use the RHEL/CentOS 6.4 kernel, 2.6.32-358.
    - Use the "lustre" kickstart option, which installs a 6.4 kernel.
    - Define the host in Puppet so that it is not a default host. NOTE: we use Puppet at SSEC to distribute various required packages; other environments will vary!
  - Lustre 2.4 installation
    - Puppet modules needed:
      - zfs-repo
      - lustre-healthcheck
      - ib-mellanox
      - check_mk_agent-ssec
      - puppetConfigFile
      - lustre-shutdown
      - nagios_plugins
      - lustre24-server-zfs
      - selinux-disable
- Configure Metadata Controller
  - Map metadata drives to enclosures (with scripts to help).
    - For our example MDS system we made aliases for ssd0, ssd1, ssd2, and ssd3.
    - Put these in /etc/zfs/vdev_id.conf (see the fuller sketch below), for example:
      alias arch03e07s6 /dev/disk/by-path/pci-0000:04:00.0-sas-0x5000c50056b69199-lun-0
    - Run udevadm trigger to load the drive aliases.
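A fuller sketch of /etc/zfs/vdev_id.conf for the four ssd aliases might look like the following; every 0x... SAS address here is a placeholder and must be replaced with the /dev/disk/by-path entries for your own disks.

    # /etc/zfs/vdev_id.conf -- stable aliases for the metadata SSDs
    # (all SAS addresses below are placeholders)
    alias ssd0 /dev/disk/by-path/pci-0000:04:00.0-sas-0xXXXXXXXXXXXXXXXX-lun-0
    alias ssd1 /dev/disk/by-path/pci-0000:04:00.0-sas-0xXXXXXXXXXXXXXXXX-lun-0
    alias ssd2 /dev/disk/by-path/pci-0000:04:00.0-sas-0xXXXXXXXXXXXXXXXX-lun-0
    alias ssd3 /dev/disk/by-path/pci-0000:04:00.0-sas-0xXXXXXXXXXXXXXXXX-lun-0

After udevadm trigger runs, these aliases appear under /dev/disk/by-vdev/ and can be used as vdev names in zpool and mkfs.lustre commands.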
  - On the metadata controller, run mkfs.lustre to create the metadata partitions. On our example system:
    - Use a separate MGS when hosting multiple filesystems on the same metadata server.
    - Separate MGS: mkfs.lustre --mgs --backfstype=zfs lustre-meta/mgs mirror d2 d3 mirror d4 d5
    - Separate MDT: mkfs.lustre --fsname=arcdata1 --mdt --mgsnode=172.16.23.14@o2ib --backfstype=zfs lustre-meta/arcdata1-meta
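Once both commands complete, the new pool and its Lustre datasets can be sanity-checked with standard ZFS tools:

    # confirm the pool is healthy and both Lustre datasets exist
    zpool status lustre-meta
    zfs list -r lustre-meta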
  - Create /etc/ldev.conf and add the metadata targets. On the example system, we added:
      geoarc-2-15 - MGS zfs:lustre-meta/mgs
      geoarc-2-15 - arcdata-MDT0000 zfs:lustre-meta/arcdata-meta
  - Create /etc/modprobe.d/lustre.conf (a sketch of the full file follows below).
    - options lnet networks="o2ib" routes="tcp metadataip@o2ib0 172.16.24.[220-229]@o2ib0"
    - NOTE: if you do not want routing, or if you are having trouble with setup, the simpler options lnet networks="o2ib" is fine.
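A sketch of the complete file; "metadataip" is a placeholder for your metadata server's address, and the commented-out line is the non-routed alternative:

    # /etc/modprobe.d/lustre.conf -- LNET configuration
    # routed setup ("metadataip" is a placeholder):
    options lnet networks="o2ib" routes="tcp metadataip@o2ib0 172.16.24.[220-229]@o2ib0"
    # simpler non-routed alternative:
    # options lnet networks="o2ib"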
  - Start Lustre. If you have multiple metadata mounts, you can just run service lustre start.
  - Add the lnet service to chkconfig so that it starts on boot; we may want to leave lustre off at startup for metadata controllers (see the sketch below).
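A minimal sketch of that runlevel setup on RHEL6, assuming the lnet and lustre init scripts installed by the Lustre server packages:

    # bring LNET up at boot; leave the Lustre target mounts manual on the MDS
    chkconfig lnet on
    chkconfig lustre off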
- Configure OSTs
  - Map drives to enclosures (with scripts to help!).
  - Run udevadm trigger to load the drive aliases.
  - Run mkfs.lustre on the MD1200s.
    - Example RAIDZ2 on one MD1200: mkfs.lustre --fsname=cove --ost --backfstype=zfs --index=0 --mgsnode=172.16.24.12@o2ib lustre-ost0/ost0 raidz2 e17s0 e17s1 e17s2 e17s3 e17s4 e17s5 e17s6 e17s7 e17s8 e17s9 e17s10 e17s11
    - Example RAIDZ2 with 2 disks from each of 5 enclosures (our cove test example): mkfs.lustre --fsname=cove --ost --backfstype=zfs --index=0 --mgsnode=172.16.24.12@o2ib lustre-ost0/ost0 raidz2 e13s0 e13s1 e15s0 e15s1 e17s0 e17s1 e19s0 e19s1 e21s0 e21s1
    - Repeat as necessary for additional enclosures, incrementing --index and the pool/dataset names for each additional OST.
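Each newly created pool can be checked before moving on; the pool names follow the lustre-ostN convention used above:

    # verify the raidz2 layout and health of each OST pool
    zpool status lustre-ost0
    zpool list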
  - Create /etc/ldev.conf.
    - Example on lustre2-8-11:
      lustre2-8-11 - cove-OST0000 zfs:lustre-ost0/ost0
      lustre2-8-11 - cove-OST0001 zfs:lustre-ost1/ost1
      lustre2-8-11 - cove-OST0002 zfs:lustre-ost2/ost2
  - Start the OSTs. Example: service lustre start. Repeat as necessary for additional enclosures.
  - Add the services to chkconfig so they start on boot (see the sketch below).
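On an OSS, both services can come up at boot, again assuming the stock lnet and lustre init scripts:

    # on an OSS, both LNET and the OST mounts can start at boot
    chkconfig lnet on
    chkconfig lustre on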
- Configure backup metadata controller (future)
- Mount the Lustre file system on clients
  - Add an entry to /etc/fstab. On our example system, the fstab entry is:
      172.16.24.12@o2ib:/cove /cove lustre defaults,_netdev,user_xattr 0 0
  - Create an empty folder for the mountpoint, then mount the file system (e.g., mkdir /cove; mount /cove); a full session is sketched below.
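Putting the client steps together, a minimal session might look like this, assuming the fstab entry above is in place:

    # create the mountpoint, mount via the fstab entry, and verify
    mkdir /cove
    mount /cove
    lfs df -h      # per-OST usage for the mounted Lustre filesystem
    df -h /cove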