Configuring the Lustre File System

From Lustre Wiki
Note: This page originated on the old Lustre wiki. It was identified as likely having value and was migrated to the new wiki. It is in the process of being reviewed/updated and may currently have content that is out of date.

(Updated: Sep 2009)

This page describes how to configure a simple Lustre™ file system comprising a combined MGS/MDT, an OST and a client. The administrative utilities provided with Lustre can, however, be used to set up systems with many different configurations.

Note: We recommend that you use dotted-quad (dot-decimal) notation for IP addresses (IPv4) rather than host names. This aids in reading debug logs and helps when debugging configurations with multiple interfaces.

Configuring the Lustre File System

This section contains a procedure for configuring the Lustre File System. For an example showing the configuration of a Lustre installation comprising a combined MGS/MDT, an OST and a client, see the Lustre Configuration Example.

To configure Lustre Networking (LNET) and the Lustre file system, complete these steps:

1. Define the module options for Lustre networking (LNET) by adding this line to the /etc/modprobe.conf file. The modprobe.conf file is a Linux configuration file that specifies options applied to kernel modules when they are loaded.

options lnet networks=<network interfaces that LNET can use>
This step restricts LNET to the specified network interfaces; without it, LNET attempts to use all available network interfaces.
As an alternative to modifying the modprobe.conf file, you can modify the modprobe.local file or the configuration files in the modprobe.d directory.
Note: For details on configuring networking and LNET, see Chapter 2: Understanding Lustre Networking (LNET) in the Lustre Operations Manual.
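For example, to restrict LNET to a single Ethernet interface, or to an Ethernet interface plus an InfiniBand interface, the line might look like one of the following (the interface names eth0 and ib0 are placeholders for your own hardware):

options lnet networks=tcp0(eth0)
options lnet networks=tcp0(eth0),o2ib0(ib0)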

2. (Optional) Prepare the block devices to be used as OSTs or MDTs. Depending on the hardware used in the MDS and OSS nodes, you may want to set up a hardware or software RAID to increase the reliability of the Lustre system. For more details on how to set up a hardware or software RAID, see the documentation for your RAID controller or see Chapter 6: Configuring Storage on a Lustre File System in the Lustre Operations Manual.
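For example, if the OSS has several plain disks and no hardware RAID controller, a software RAID array could be assembled with mdadm before formatting the OST. This is only a sketch; the device names, RAID level and number of disks are assumptions that depend on your hardware:

mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf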

3. Create a combined MGS/MDT file system on the block device. On the MDS node, run:

mkfs.lustre --fsname=<fsname> --mgs --mdt <block device name>
The default file system name (fsname) is lustre.
Note: If you plan to generate multiple file systems, the MGS should be on its own dedicated block device.
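For example, a dedicated MGS and a separate MDT could be formatted as follows (the device names are placeholders; <MGS NID> is the NID of the node running the MGS):

mkfs.lustre --mgs /dev/sda
mkfs.lustre --fsname=<fsname> --mdt --mgsnode=<MGS NID> /dev/sdb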

4. Mount the combined MGS/MDT file system on the block device. On the MDS node, run:

mount -t lustre <block device name> <mount point>

5. Create the OST. On the OSS node, run:

mkfs.lustre --ost --fsname=<fsname> --mgsnode=<NID> <block device name>
You can have as many OSTs per OSS as the hardware or drivers allow.
You should use only one OST per block device. Optionally, you can create an OST which uses the raw block device and does not require partitioning.
Note: If the block device has more than 8 TB of storage, it must be partitioned due to the ext3 file system limitation. Lustre can support block devices with multiple partitions, but they are not recommended because bottlenecks may result.

6. Mount the OST. On the OSS node where the OST was created, run:

mount -t lustre <block device name> <mount point>
Note: To create additional OSTs, repeat Steps 5 and 6.
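For example, a second OST on another block device on the same OSS could be created and mounted as follows (the device name and mount point are placeholders):

mkfs.lustre --ost --fsname=<fsname> --mgsnode=<NID> /dev/sdc
mount -t lustre /dev/sdc /mnt/ost2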

7. Create the client (mount the file system on the client). On the client node, run:

mount -t lustre <MGS node>:/<fsname> <mount point>
Note: To create additional clients, repeat Step 7.
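If you want the client to mount the file system automatically at boot, an entry can be added to /etc/fstab. This is a sketch using the placeholder names from Step 7; the _netdev option delays the mount until networking is up:

<MGS node>:/<fsname>  <mount point>  lustre  defaults,_netdev  0 0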

8. Verify that the file system started and is working by running the UNIX commands df, dd and ls on the client node.

a. Run the df command.
[root@client1 /] df -h
b. Run the dd command.
[root@client1 /] cd /lustre
[root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2

c. Run the ls command.

[root@client1 /lustre] ls -lsah

If you have a problem mounting the file system, check the system logs for errors.
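For example, the following commands can help locate relevant error messages on the node that failed to mount (the log file path varies by distribution, and lctl dl simply lists the Lustre devices currently configured on the node):

dmesg | grep -i lustre
grep -i lustre /var/log/messages
lctl dl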

Lustre Configuration Example

This Lustre™ configuration example illustrates the configuration steps for a simple Lustre installation comprising a combined MGS/MDT, an OST and a client, where:

Variable               Setting
network type           TCP/IP
MGS node               10.2.0.1@tcp0
block device           /dev/sdb
OSS 1 node             oss1
file system            temp
client node            client1
OST 1                  ost1
MGS/MDT mount point    /mnt/mdt
client mount point     /lustre

1. Define the module options for Lustre networking (LNET) by adding this line to the /etc/modprobe.conf file.

options lnet networks=tcp

2. Create a combined MGS/MDT file system on the block device. On the MDS node, run:

[root@mds /]# mkfs.lustre --fsname=temp --mgs --mdt /dev/sdb

This command generates this output:

     Permanent disk data:
Target:       temp-MDTffff
Index:        unassigned
Lustre FS:    temp
Mount type:   ldiskfs
Flags:        0x75
     (MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters: mdt.group_upcall=/usr/sbin/l_getgroups

checking for existing Lustre data: not found
device size = 16MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sdb
     target name   temp-MDTffff
     4k blocks     0
     options       -i 4096 -I 512 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-MDTffff -i 4096 -I 512 -q -O
dir_index,uninit_groups -F /dev/sdb
Writing CONFIGS/mountdata

3. Mount the combined MGS/MDT file system on the block device. On the MDS node, run:

[root@mds /]# mount -t lustre /dev/sdb /mnt/mdt

This command generates this output:

Lustre: temp-MDT0000: new disk, initializing
Lustre: 3009:0:(lproc_mds.c:262:lprocfs_wr_group_upcall()) \
temp-MDT0000: group upcall set to /usr/sbin/l_getgroups
Lustre: temp-MDT0000.mdt: set parameter \
group_upcall=/usr/sbin/l_getgroups
Lustre: Server temp-MDT0000 on device /dev/sdb has started

4. Create the OST. On the OSS node, run:

[root@oss1 /]# mkfs.lustre --ost --fsname=temp --mgsnode=10.2.0.1@tcp0 /dev/sdb

The command generates this output:

     Permanent disk data:
Target:      temp-OSTffff
Index:       unassigned
Lustre FS:   temp
Mount type:  ldiskfs
Flags:       0x72
(OST needs_index first_time update)
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=10.2.0.1@tcp

checking for existing Lustre data: not found
device size = 16MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sdb
     target name    temp-OSTffff
     4k blocks      0
     options        -I 256 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-OSTffff -I 256 -q -O
dir_index,uninit_groups -F /dev/sdb
Writing CONFIGS/mountdata

5. Mount the OST. On the OSS node, run:

[root@oss1 /]# mount -t lustre /dev/sdb /mnt/ost1

The command generates this output:

LDISKFS-fs: file extents enabled
LDISKFS-fs: mballoc enabled
Lustre: temp-OST0000: new disk, initializing
Lustre: Server temp-OST0000 on device /dev/sdb has started

Shortly afterwards, this output appears:

Lustre: temp-OST0000: received MDS connection from 10.2.0.1@tcp0
Lustre: MDS temp-MDT0000: temp-OST0000_UUID now active, resetting
orphans

6. Mount the file system on the client. On the client node, run:

[root@client1 /] mount -t lustre 10.2.0.1@tcp0:/temp /lustre

This command generates this output:

Lustre: Client temp-client has started

7. Verify that the file system started and is working by running the UNIX commands df, dd and ls on the client node.

a. Run the df command:

[root@client1 /] df -h
This command generates output similar to:
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                                7.2G  2.4G  4.5G  35% /
/dev/sda1                        99M   29M   65M  31% /boot
tmpfs                            62M     0   62M   0% /dev/shm
10.2.0.1@tcp0:/temp              30M  8.5M   20M  30% /lustre

b. Run the dd command:

[root@client1 /] cd /lustre
[root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2
This command generates output similar to:
2+0 records in
2+0 records out
8388608 bytes (8.4 MB) copied, 0.159628 seconds, 52.6 MB/s

c. Run the ls command:

[root@client1 /lustre] ls -lsah
This command generates output similar to:
total 8.0M
4.0K drwxr-xr-x 2 root root 4.0K Oct 16 15:27 .
8.0K drwxr-xr-x 25 root root 4.0K Oct 16 15:27 ..
8.0M -rw-r--r-- 1 root root 8.0M Oct 16 15:27 zero.dat

Lustre Configuration Utilities

Once the Lustre file system is configured, it is ready for use. If additional configuration is necessary, see Lustre System Configuration Utilities.
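For example, the on-disk configuration of an already formatted target can be inspected non-destructively with tunefs.lustre; the device name below is a placeholder:

tunefs.lustre --print /dev/sdb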