Understanding Lustre Internals

What is Lustre?
Lustre is an open-source distributed parallel file system, licensed under the GNU General Public License (GPL), developed and maintained by DataDirect Networks (DDN). Due to its extremely scalable architecture, Lustre deployments are popular in scientific supercomputing as well as in the oil and gas, manufacturing, rich media, and finance sectors. Lustre presents a POSIX interface to its clients with parallel access capabilities to the shared file objects. As of this writing, Lustre is the most widely used file system on the top 500 fastest computers in the world: it is the file system of choice on 7 of the top 10 fastest computers, over 70% of the top 100, and over 60% of the top 500.

Lustre Features
Lustre is designed for scalability and performance. The aggregate storage capacity and file system bandwidth can be scaled up by adding more servers to the file system, and performance for parallel applications can often be increased by utilizing more Lustre clients. Some practical limits are shown in Table 1 along with values from known production file systems.

Lustre has several features that enhance performance, usability, and stability. Some of these features include:


 * POSIX Compliance: With few exceptions, Lustre passes the full POSIX test suite. Most operations are atomic to ensure that clients do not see stale data or metadata. Lustre also supports mmap() file IO.


 * Online file system checking: Lustre provides a file system checker (LFSCK) to detect and correct file system inconsistencies. LFSCK can be run while the file system is online and in production, minimizing potential downtime.


 * Controlled file layouts: The file layouts that determine how data is placed across the Lustre servers can be customized on a per-file basis. This allows users to optimize the layout to best fit their specific use case.


 * Support for multiple backend file systems: When formatting a Lustre file system, the underlying storage can be formatted as either ldiskfs (a performance-enhanced version of ext4) or ZFS.


 * Support for high-performance and heterogeneous networks: Lustre can utilize RDMA over low latency networks such as Infiniband or Intel OmniPath in addition to supporting TCP over commodity networks. The Lustre networking layer provides the ability to route traffic between multiple networks making it feasible to run a single site-wide Lustre file system.


 * High-availability: Lustre supports active/active failover of storage resources and multiple mount protection (MMP) to guard against errors that may result from mounting the storage simultaneously on multiple servers. High availability software such as Pacemaker/Corosync can be used to provide automatic failover capabilities.


 * Security features: Lustre follows the normal UNIX file system security model enhanced with POSIX ACLs. The root squash feature limits the ability of Lustre clients to perform privileged operations. Lustre also supports the configuration of Shared-Secret Key (SSK) security.


 * Capacity growth: File system capacity can be increased by adding additional storage for data and metadata while the file system is online.

Lustre Components
Lustre is an object-based file system that consists of several components:




 * Management Server (MGS) - Provides configuration information for the file system. When mounting the file system, the Lustre clients will contact the MGS to retrieve details on how the file system is configured (what servers are part of the file system, failover information, etc.). The MGS can also proactively notify clients about changes in the file system configuration and plays a role in the Lustre recovery process.


 * Management Target (MGT) - Block device used by the MGS to persistently store Lustre file system configuration information. It typically only requires a relatively small amount of space (on the order of 100 MB).


 * Metadata Server (MDS) - Manages the file system namespace and provides metadata services to clients such as filename lookup, directory information, file layouts, and access permissions. The file system will contain at least one MDS but may contain more.


 * Metadata Target (MDT) - Block device used by an MDS to store metadata information. A Lustre file system will contain at least one MDT which holds the root of the file system, but it may contain multiple MDTs. Common configurations will use one MDT per MDS server, but it is possible for an MDS to host multiple MDTs. MDTs can be shared among multiple MDSs to support failover, but each MDT can only be mounted by one MDS at any given time.


 * Object Storage Server (OSS) - Stores file data objects and makes the file contents available to Lustre clients. A file system will typically have many OSS nodes to provide a higher aggregate capacity and network bandwidth.


 * Object Storage Target (OST) - Block device used by an OSS node to store the contents of user files. An OSS node will often host several OSTs. These OSTs may be shared among multiple hosts, but just like MDTs, each OST can only be mounted on a single OSS at any given time. The total capacity of the file system is the sum of all the individual OST capacities.


 * Lustre Client - Mounts the Lustre file system and makes the contents of the namespace visible to the users. There may be hundreds or even thousands of clients accessing a single Lustre file system. Each client can also mount more than one Lustre file system at a time.


 * Lustre Networking (LNet) - Network protocol used for communication between Lustre clients and servers. Supports RDMA on low-latency networks and routing between heterogeneous networks.

The collection of MGS, MDS, and OSS nodes are sometimes referred to as the “frontend”. The individual OSTs and MDTs must be formatted with a local file system in order for Lustre to store data and metadata on those block devices. Currently, only ldiskfs (a modified version of ext4) and ZFS are supported for this purpose. The choice of ldiskfs or ZFS is often referred to as the “backend file system”. Lustre provides an abstraction layer for these backend file systems to allow for the possibility of including other types of backend file systems in the future.

Figure 1 shows a simplified version of the Lustre file system components in a basic cluster. In this figure, the MGS server is distinct from the MDS servers, but for small file systems, the MGS and MDS may be combined into a single server and the MGT may coexist on the same block device as the primary MDT.

Lustre File Layouts


Lustre stores file data by splitting the file contents into chunks and then storing those chunks across the storage targets. By spreading the file across multiple targets, the file size can exceed the capacity of any one storage target. It also allows clients to access parts of the file from multiple Lustre servers simultaneously, effectively scaling up the bandwidth of the file system. Users have the ability to control many aspects of the file’s layout by means of the lfs setstripe command, and they can query the layout of an existing file using the lfs getstripe command.

File layouts fall into one of two categories:


 * Normal / RAID0 - File data is striped across multiple OSTs in a round-robin manner.
 * Composite - Complex layouts that involve several components with potentially different striping patterns.

Normal (RAID0) Layouts
A normal layout is characterized by a stripe count and a stripe size. The stripe count determines how many OSTs will be used to store the file data, while the stripe size determines how much data will be written to an OST before moving to the next OST in the layout. As an example, consider the file layouts shown in Figure 2 for a simple file system with 3 OSTs residing on 3 different OSS nodes. Note that Lustre indexes the OSTs starting at zero.

File A has a stripe count of three, so it will utilize all OSTs in the file system. We will assume that it uses the default Lustre stripe size of 1MB. When File A is written, the first 1MB chunk gets written to OST0. Lustre then writes the second 1MB chunk of the file to OST1 and the third chunk to OST2. When the file exceeds 3MB in size, Lustre will round-robin back to the first allocated OST and write the fourth 1MB chunk to OST0, followed by OST1, etc. This illustrates how Lustre writes data in a RAID0 manner for a file. It should be noted that although File A has three chunks of data on OST0 (chunks #1, #4, and #7), all these chunks reside in a single object on the backend file system. From Lustre’s point of view, File A consists of three objects, one per OST.

Files B and C show layouts with the default Lustre stripe count of one, but only File B uses the default stripe size of 1MB. The layout for File C has been modified to use a larger stripe size of 2MB. If both File B and File C are 2MB in size, File B will be treated as two consecutive chunks written to the same OST whereas File C will be treated as a single chunk. However, this difference is mostly irrelevant since both files will still consist of a single 2MB object on their respective OSTs.
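The round-robin placement just described reduces to simple arithmetic: the stripe size and stripe count determine which OST object holds a given file offset and where within that object the byte lands. The following sketch models this mapping (it illustrates the layout math only and is not Lustre code; the function name is ours):

```python
def raid0_map(offset, stripe_size, stripe_count):
    """Map a file offset to (stripe index, offset within the OST object).

    Illustrative model of a normal (RAID0) Lustre layout: the stripe
    index selects one of the file's OST objects in round-robin order.
    """
    chunk = offset // stripe_size          # which stripe-sized chunk of the file
    stripe_index = chunk % stripe_count    # which OST object (0 .. stripe_count-1)
    round_num = chunk // stripe_count      # completed round-robin passes so far
    obj_offset = round_num * stripe_size + (offset % stripe_size)
    return stripe_index, obj_offset

MB = 1 << 20
# File A: stripe count 3, stripe size 1MB. The chunk starting at offset 3MB
# (chunk #4) wraps back to OST0, as the second chunk inside OST0's object.
print(raid0_map(3 * MB, MB, 3))   # -> (0, 1048576)
```

This matches the example above: chunks #1, #4, and #7 of File A all land at stripe index 0 and pack consecutively into a single object on OST0.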

Composite Layouts
A composite layout consists of one or more components each with their own specific layout. The most basic composite layout is a Progressive File Layout (PFL). Using PFL, a user can specify the same parameters used for a normal RAID0 layout but additionally specify a start and end point for that RAID0 layout. A PFL can be viewed as an array of normal layouts each of which covers a consecutive non-overlapping region of the file. PFL allows the data placement to change as the file increases in size, and because Lustre uses delayed instantiation, storage for subsequent components is allocated only when needed. This is particularly useful for increasing the stripe count of a file as the file grows in size.
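A PFL’s “array of normal layouts over consecutive, non-overlapping regions” can be modeled directly. The sketch below is illustrative only; the component tuples and the example extent boundaries are hypothetical, not Lustre defaults:

```python
def pfl_component(offset, components):
    """Return the layout component covering a file offset.

    components: list of (start, end, stripe_count, stripe_size) tuples,
    consecutive and non-overlapping; end=None means 'to end of file'.
    Illustrative model of a Progressive File Layout, not Lustre code.
    """
    for start, end, stripe_count, stripe_size in components:
        if offset >= start and (end is None or offset < end):
            return (start, end, stripe_count, stripe_size)
    raise ValueError("offset not covered by any layout component")

MB = 1 << 20
# Hypothetical PFL: 1 stripe up to 16MB, 4 stripes up to 256MB, 16 beyond.
layout = [
    (0, 16 * MB, 1, 1 * MB),
    (16 * MB, 256 * MB, 4, 1 * MB),
    (256 * MB, None, 16, 4 * MB),
]
print(pfl_component(20 * MB, layout))   # mid-size region: 4 stripes
```

Because Lustre instantiates components lazily, a small file written under this layout would only ever allocate objects for the first component.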

The concept of a PFL has been extended to include two other layouts: Data on MDT (DoM) and Self Extending Layout (SEL). A DoM layout is specified just like a PFL except that the first component of the file resides on the same MDT as the file’s metadata. This is typically used to store small amounts of data for quick access. A SEL is just like a PFL with the addition that an extent size can be supplied for one or more of the components. When a component is instantiated, Lustre only instantiates part of the component to cover the extent size. When this limit is exceeded, Lustre examines the OSTs assigned to the component to determine if any of them are running low on space. If not, the component is extended by the extent size. However, if an OST does run low on space, Lustre can dynamically shorten the current component and choose a different set of OSTs to use for the next component of the layout. This can safeguard against full OSTs that might generate an ENOSPC error when a user attempts to append data to a file.
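The SEL decision described above, extend the current component unless an assigned OST is running low on space, can be sketched as follows. The threshold and the return convention are our own illustration, not Lustre’s internal interface:

```python
def extend_or_switch(component_end, extent_size, ost_free_bytes, low_space_threshold):
    """Decide whether a SEL component is extended in place or cut short.

    Illustrative model: if any OST assigned to the component is low on
    free space, the component ends here and the next component is placed
    on a different set of OSTs; otherwise it grows by extent_size.
    """
    if any(free < low_space_threshold for free in ost_free_bytes):
        return ("switch", component_end)            # shorten; pick new OSTs next
    return ("extend", component_end + extent_size)  # keep growing in place

GB = 1 << 30
# Plenty of space on both OSTs: the component simply grows by one extent.
print(extend_or_switch(8 * GB, 2 * GB, [50 * GB, 40 * GB], 10 * GB))
```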

Lustre has a feature called File Level Redundancy (FLR) that allows a user to create one or more mirrors of a file, each with its own specific layout (either normal or composite). When the file layout is inspected using lfs getstripe, it appears like any other composite layout; however, a mirror ID field in the layout identifies which mirror each component belongs to.

Distributed Namespace


The metadata for the root of the Lustre file system resides on the primary MDT. By default, the metadata for newly created files and directories will reside on the same MDT as that of the parent directory, so without any configuration changes, the metadata for the entire file system would reside on a single MDT. In recent versions, a feature called Distributed Namespace (DNE) was added to allow Lustre to utilize multiple MDTs and thus scale up metadata operations. DNE was implemented in multiple phases, and DNE Phase 1 is referred to as Remote Directories. Remote Directories allow a Lustre administrator to assign a new subdirectory to a different MDT if its parent directory resides on MDT0. Any files or directories created in the remote directory also reside on the same MDT as the remote directory. This creates a static fan-out of directories from the primary MDT to other MDTs in the file system. While this does allow Lustre to spread overall metadata operations across multiple servers, operations within any single directory are still constrained by the performance of a single MDS node. The static nature also prevents any sort of dynamic load balancing across MDTs.

DNE Phase 2, also known as Striped Directories, removed some of these limitations. For a striped directory, the metadata for all files and subdirectories contained in that directory are spread across multiple MDTs. Similar to how a file layout contains a stripe count, a striped directory also has a stripe count. This determines how many MDTs will be used to spread out the metadata. However, unlike file layouts which spread data across OSTs in a round-robin manner, a striped directory uses a hash function to calculate the MDT where the metadata should be placed. The upcoming DNE Phase 3 expands upon the ideas in DNE Phase 2 to support the creation of auto-striped directories. An auto-striped directory will start with a stripe count of 1 and then dynamically increase the stripe count as the number of files/subdirectories in that directory grows. Users can then utilize striped directories without knowing a priori how big the directory might become or having to worry about choosing a directory stripe count that is too low or too high.
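The key difference from round-robin file striping is that a striped directory places each entry by hashing its filename. The sketch below substitutes CRC32 for Lustre’s actual directory hash functions, purely to illustrate that every client can compute the target MDT deterministically without a central lookup:

```python
import zlib

def mdt_for_name(filename, stripe_count, mdt_indices):
    """Pick the MDT holding metadata for `filename` in a striped directory.

    Illustrative stand-in: Lustre hashes the filename to choose among the
    directory's stripe_count MDTs; CRC32 here is NOT Lustre's real hash.
    """
    h = zlib.crc32(filename.encode())
    return mdt_indices[h % stripe_count]

# A directory striped across MDT0 and MDT1: all clients agree on placement
# because they compute the same hash of the same name.
print(mdt_for_name("results.dat", 2, [0, 1]))
```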

File Identifiers and Layout Attributes


Lustre identifies all objects in the file system through the use of File Identifiers (FIDs). A FID is a 128-bit opaque identifier used to uniquely reference an object in the file system in much the same way that ext4 uses inodes or ZFS uses dnodes. When a user accesses a file, the filename is used to lookup the correct directory entry which in turn provides the FID for the MDT object corresponding to that file. The MDT object contains a set of extended attributes, one of which is called the Layout Extended Attribute (or Layout EA). This Layout EA acts as a map for the client to determine where the file data is actually stored, and it contains a list of the OSTs as well as the FIDs for the objects on those OSTs that hold the actual file data. Figure 3 shows an example of accessing a file with a normal layout of stripe count 3.
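The lookup chain, filename → directory entry → MDT object FID → Layout EA → OST object FIDs, can be modeled with ordinary dictionaries. All FID values and names below are made up for illustration; they are not Lustre’s on-disk structures:

```python
# Toy model of FID-based lookup. The directory maps names to MDT object
# FIDs; each MDT object carries a Layout EA listing (OST index, OST FID)
# pairs for the file's data objects. All identifiers are hypothetical.
directory = {"data.bin": "mdt_fid_0x1"}
mdt_objects = {
    "mdt_fid_0x1": [(0, "ost_fid_0xa"), (1, "ost_fid_0xb"), (2, "ost_fid_0xc")],
}

def lookup(name):
    mdt_fid = directory[name]          # directory entry yields the MDT FID
    layout_ea = mdt_objects[mdt_fid]   # Layout EA maps the file to OST objects
    return [f"OST{idx}:{fid}" for idx, fid in layout_ea]

# The client now knows which OSS nodes to contact for the file data.
print(lookup("data.bin"))
```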

Lustre Software Stack


The Lustre software stack is composed of several different layered components. To provide context for more detailed discussions later, a basic diagram of these components is illustrated in Figure 4. The arrows in this diagram represent the flow of a request from a client to the Lustre servers. System calls for operations like read and write go through the Linux Virtual File System (VFS) layer to the Lustre LLITE layer which implements the necessary VFS operations. If the request requires metadata access, it is routed to the Logical Metadata Volume (LMV) that acts as an abstraction layer for the Metadata Client (MDC) components. There is an MDC component for each MDT target in the file system. Similarly, requests for data are routed to the Logical Object Volume (LOV) which acts as an abstraction layer for all of the Object Storage Client (OSC) components. There is an OSC component for each OST target in the file system. Finally, the requests are sent to the Lustre servers by first going through the Portal RPC (PTL-RPC) subsystem and then over the wire via the Lustre Networking (LNet) subsystem.

Requests arriving at the Lustre servers follow the reverse path from the LNet subsystem up through the PTL-RPC layer, finally arriving at either the OSS component (for data requests) or the MDS component (for metadata requests). Both the OSS and MDS components are multi-threaded and can handle requests for multiple storage targets (OSTs or MDTs) on the same server. Any locking requests are passed to the Lustre Distributed Lock Manager (LDLM). Data requests are passed to the OBD Filter Device (OFD) and then to the Object Storage Device (OSD). Metadata requests go from the MDS straight to the OSD. In both cases, the OSD is responsible for interfacing with the backend file system (either ldiskfs or ZFS) through the Linux VFS layer.

Figure 5 provides a simple illustration of the interactions in the Lustre software stack for a client requesting file data. The Portal RPC and LNet layers are represented by the arrows showing communications between the client and the servers. The client begins by sending a request through the MDC to the MDS to open the file. The MDS server responds with the Layout EA for the file. Using this information, the client can determine which OST objects hold the file data and send requests through the LOV/OSC layer to the OSS servers to access the data.

Detailed Discussion of Lustre Components
The descriptions of key Lustre concepts provided in this overview are intended to provide a basis for the more detailed discussion in subsequent Sections. The remaining Sections dive deeper into the following topics:


 * Section 2 (Tests): Describes the testing framework used to test Lustre functionality and detect regressions.
 * Section 3 (Utils): Covers command line utilities used to format and configure Lustre file systems as well as user tools for setting file striping parameters.
 * Section 4 (MGC): Discusses the MGC subsystem responsible for communications between Lustre nodes and the Lustre management server.
 * Section 5 (Obdclass): Discusses the obdclass subsystem that provides an abstraction layer for other Lustre components including MGC, MDC, OSC, LOV, and LMV.
 * Section 6 (Libcfs): Covers APIs used for process management and debugging support.
 * Section 7 (File Identifiers, FID Location Database, and Object Index): Explains how object identifiers are generated and mapped to data on the backend storage.

This document extensively references parts of the Lustre source code maintained by the open source community.

TESTS
This Section describes various tests and testing frameworks used to test Lustre functionality and performance.

Lustre Test Suites
Lustre Test Suites (LTS) is the largest collection of tests used to test the Lustre file system. LTS consists of over 1600 tests, organized by their purpose and function. It is mainly composed of bash scripts, C programs, and external applications. LTS provides various utilities to create, start, stop, and execute tests, and the test process can be run automatically or in discrete steps, either as a group of tests or as individual tests. LTS also allows users to experiment with configurations and features such as ldiskfs, ZFS, DNE, and HSM (Hierarchical Storage Manager). Tests in LTS are located in the lustre/tests directory in the Lustre source tree, and the major components of the test suite are given in Table 2.

Terminology
In this Section, we describe relevant terminology related to Lustre Test Suites. All scripts and applications packaged with the Lustre tests are collectively termed the Lustre Test Suites. Each individual suite of tests contained in the lustre/tests directory is termed a test suite; sanity is an example of a test suite. A test suite is composed of individual tests; an example of an individual test is test 36g from the sanity test suite. Test suites can be bundled into a group for back-to-back execution (e.g., acceptance-small). LTS tests cover areas such as regression, feature-specific, configuration, and recovery/failure testing. Some of the active Lustre unit, feature, and regression tests and their short descriptions are given in Table 3.

Testing Lustre Code
When programming with Lustre, the best practice is to test often and early in the development cycle. Before submitting code changes to the Lustre source tree, a developer must ensure that the code passes the acceptance-small test suite. To create a new test case, first reproduce the issue, fix the bug, and then verify that the fixed code passes the existing tests. A newly found bug implies that the defect is not covered by the existing test cases; after confirming that none of the existing test cases cover the new defect, a new test case can be introduced to exclusively test the bug.

Bypassing Failures
While testing Lustre, if one or more test cases fail due to an issue not related to the bug currently being fixed, a bypass option is available for the failing tests. For example, to skip sanity sub-tests 36g and 65 and all of insanity, set the environment as follows.

export SANITY_EXCEPT="36g 65"
export INSANITY=no

A single-line command can also be used to skip these tests when running the acceptance-small test, as shown below.

SANITY_EXCEPT="36g 65" INSANITY=no ./acceptance-small.sh

Test Framework Options
The examples below show how to run a full test suite or individual sub-tests.

 * Run all tests in a test suite with the default setup:
   cd lustre/tests
   sh ./acceptance-small.sh
 * Run only the recovery-small and conf-sanity test suites from acceptance-small:
   ACC_SM_ONLY="recovery-small conf-sanity" sh ./acceptance-small.sh
 * Run only tests 1, 3, and 4 from sanity:
   ONLY="1 3 4" sh ./sanity.sh
 * Skip tests 1 to 30 and run the remaining tests in sanity:
   EXCEPT="`seq 1 30`" sh sanity.sh

Lustre provides flexibility to easily add new tests to any of its test scripts.

Acceptance Small (acc-sm) Testing on Lustre
Acceptance small (acc-sm) testing for Lustre is used to catch bugs early in the development cycle. The acc-sm test scripts are located in the lustre/tests directory. The acc-sm test suite contains three branches (with 18, 28, and 30 tests respectively). The functionality of some of the commonly used tests in the acc-sm suite is listed in Table 4. The order in which tests are executed is defined in the acceptance-small.sh script and in each test script.

Lustre Tests Environment Variables
This Section describes environment variables used to drive the Lustre tests. The environment variables are typically stored in a configuration script in lustre/tests/cfg, selected by the NAME environment variable within the test scripts. The default configuration for a single-node test is NAME=local, which accesses the cfg/local.sh configuration file. Some of the important environment variables and their purpose for Lustre cluster configuration are listed in Table 5.

Introduction
The administrative utilities provided with the Lustre software allow administrators to set up a Lustre file system in different configurations. Lustre utilities provide a wide range of configuration options, such as creating a file system on a block device, scaling the Lustre file system by adding additional OSTs or clients, and changing the stripe layout for data. Examples of Lustre utilities include:


 * mkfs.lustre - This utility is used to format a disk for a Lustre service.
 * tunefs.lustre - This is used to modify configuration information on a Lustre target disk.
 * lctl - This is used to control Lustre features via ioctl interfaces, including various configuration, maintenance, and debugging features.
 * mount.lustre - This is used to start a Lustre client or server on a target.
 * lfs - This is used for configuring and querying options related to files.

In the following Sections we describe various user utilities and system configuration utilities in detail.

User Utilities
In this Section we describe a few important user utilities provided with Lustre.

lfs
The lfs utility can be used for configuring and monitoring a Lustre file system. Some of the most common uses of lfs are to create a new file with a specific striping pattern, determine the default striping pattern, gather extended attributes for specific files, find files with specific attributes, list OST information, and set quota limits. Some of the important lfs options are shown in Table 6.

A few examples of the usage of the lfs utility are shown below.

 * Create a file named file1 striped on three OSTs with 32KB on each stripe:
   $ lfs setstripe -s 32k -c 3 /mnt/lustre/file1
 * Show the default stripe pattern on a given directory:
   $ lfs getstripe -d /mnt/lustre/dir1
 * List the detailed stripe allocation for a given file:
   $ lfs getstripe -v /mnt/lustre/file2

lfs_migrate
The lfs_migrate utility is used to migrate file data between Lustre OSTs. The utility does the migration in two steps. First, it copies the specified files to a set of temporary files, applying any options specified, and can optionally verify whether the file contents have changed. The second step is to swap the layout between the temporary file and the original file (or even rename the temporary file to the original filename). lfs_migrate is a tool that helps users balance or manage space usage among Lustre OSTs.

lctl
The lctl utility is used for controlling and configuring a Lustre file system. lctl provides the following capabilities: controlling Lustre via an ioctl interface, setting up Lustre in various configurations, and accessing Lustre debugging features. Issuing the lctl command on a Lustre client gives a prompt that allows executing lctl sub-commands.

Help with lctl commands can be obtained with the lctl help sub-command or the lctl man page.

Another important use of the lctl command is accessing Lustre parameters. lctl provides a platform-independent interface to the Lustre tunables. When the file system is running, the lctl set_param command can be used to set parameters temporarily on the affected nodes. The syntax of this command is:

lctl set_param [-P] [-d] obdtype.obdname.property=value

In this command, the -P option is used to set parameters permanently, and -d deletes permanent parameters. To obtain the current Lustre parameter settings, lctl get_param can be used on the desired node. For example:

lctl get_param [-n] obdtype.obdname.parameter

Some of the common commands associated with lctl and their descriptions are shown in Table 7.
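The obdtype.obdname.property naming convention used by lctl set_param and lctl get_param can be made concrete with a small parser. The helper function and the sample parameter name below are hypothetical, shown only to illustrate the three-part structure:

```python
def parse_param(spec):
    """Split an lctl-style spec 'obdtype.obdname.property[=value]' into parts.

    Illustrative helper, not part of Lustre. Dots beyond the first two are
    kept inside the property path; value is None for get_param-style specs.
    """
    if "=" in spec:
        path, value = spec.split("=", 1)
    else:
        path, value = spec, None
    obdtype, obdname, prop = path.split(".", 2)
    return {"obdtype": obdtype, "obdname": obdname,
            "property": prop, "value": value}

# Hypothetical target name, purely for illustration:
print(parse_param("osc.lustre-OST0000-osc-ffff.max_dirty_mb=512"))
```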

llog_reader
The llog_reader utility translates a Lustre configuration log into human-readable form. The syntax of this utility is:

llog_reader filename

llog_reader reads and parses Lustre’s binary-formatted on-disk configuration logs. To examine a log file on a stopped Lustre server, mount its backing file system as ldiskfs or zfs, then use llog_reader to dump the log file’s contents. For example:

mount -t ldiskfs /dev/sda /mnt/mgs
llog_reader /mnt/mgs/CONFIGS/tfs-client

This utility can also be used to examine the log files while the Lustre server is running. The ldiskfs-enabled debugfs utility can be used to extract the log file, for example:

debugfs -c -R 'dump CONFIGS/tfs-client /tmp/tfs-client' /dev/sda
llog_reader /tmp/tfs-client

mkfs.lustre
The mkfs.lustre utility is used to format a Lustre target disk. The syntax of this utility is:

mkfs.lustre target_type [options] device

where target_type can be an OST, MDT, or MGS (optionally restricted to particular networks). After formatting the disk using mkfs.lustre, it can be mounted to start the Lustre service. Two important options that can be specified with this command are --backfstype and --fsname. The former forces a particular format for the backing file system, such as ldiskfs (the default) or zfs, and the latter specifies the name of the Lustre file system of which the disk is part (the default file system name is lustre).

mount.lustre
The mount.lustre utility is used to mount the Lustre file system on a target or client. The syntax of this utility is:

mount -t lustre [-o options] device mountpoint

After mounting, users can use the Lustre file system to create files and directories and execute several other Lustre utilities on the file system. To unmount a mounted file system, the umount command can be used as shown below.

umount device mountpoint

Some of the important options used with this utility are discussed below with the help of examples.

 * The following mount command mounts Lustre on a client at the mount point /mnt/lustre, with the MGS running on a node with NID 10.1.0.1@tcp:
   mount -t lustre 10.1.0.1@tcp:/lustre /mnt/lustre
 * To start the Lustre metadata service from /dev/sda on the mount point /mnt/mdt, the following command can be used:
   mount -t lustre /dev/sda /mnt/mdt

tunefs.lustre
The tunefs.lustre utility is used to modify configuration information on a Lustre target disk. The syntax of this utility is:

tunefs.lustre [options] /dev/device

The tunefs.lustre utility does not reformat the disk or erase its contents. By default, the parameters specified using tunefs.lustre are set in addition to the existing parameters. To erase the old parameters and use only the newly specified ones, use the following options with tunefs.lustre.

tunefs.lustre --erase-params --param=new_parameters /dev/device

Introduction
The Lustre client software primarily involves three components: a management client (MGC), a metadata client (MDC), and multiple object storage clients (OSCs), one corresponding to each OST in the file system. Among these, the management client acts as an interface between the Lustre virtual file system layer and the Lustre management server (MGS). The MGS stores and provides information about all Lustre file systems in a cluster: Lustre targets register with the MGS to provide information, while Lustre clients contact the MGS to retrieve it.

The major functionalities of MGC are Lustre log handling, Lustre distributed lock management, and file system setup. MGC is the first obd device created in the Lustre obd device life cycle. An obd device in Lustre provides a level of abstraction over Lustre components such that generic operations can be applied without knowing the specific devices being dealt with. The remaining Sections describe MGC module initialization, various MGC obd operations, and log handling in detail. In the following Sections we will use the terms clients and servers to refer to the service clients and servers created to communicate between various components in Lustre; the physical nodes representing Lustre’s clients and servers will be explicitly called ‘Lustre clients’ and ‘Lustre servers’.

MGC Module Initialization
When the MGC module initializes, it registers MGC as an obd device type with Lustre using the class_register_type() function, as shown in Source Code 1. Obd device data and metadata operations are defined using the obd_ops and md_ops structures, respectively. MGC has only obd_ops operations defined, whereas the metadata client (MDC) has both metadata and data operations defined, since the data operations are used to implement the Data on MDT (DoM) functionality in Lustre. The class_register_type() function takes the operation structures and the obd device type name, among other arguments.

Source Code 1: class_register_type function defined in obdclass/genops.c

MGC obd Operations


MGC obd operations are defined by the mgc_obd_ops structure as shown in Source Code 2. Note that all MGC obd operations are defined as function pointers. This programming style avoids complex switch cases and provides a level of abstraction over Lustre components, so that generic operations can be applied without knowing the details of specific obd devices.

Source Code 2: mgc_obd_ops structure defined in mgc/mgc_request.c

In Lustre, one of the ways two subsystems share data is with the help of the obd_ops structure. To understand how the communication between two subsystems works, consider the get_info operation from the obd_ops structure. The llite subsystem makes a call to obd_get_info() by passing a key as an argument; notice that llite invokes the generic obd_get_info() rather than the MGC-specific implementation directly. obd_get_info() is defined in obd_class.h as shown in Figure 6. This function invokes the OBP macro, passing an obd device structure and the name of the operation. The macro definition concatenates the o_ prefix with the operation name, so that the resulting call resolves to the get_info function pointer in the device’s operations table.



So how does llite make sure that this operation is directed specifically towards the MGC obd device? obd_get_info() takes an argument of type struct obd_export. The obd_export structure is defined in lustre_export.h (refer to Figure 7), and its exp_obd field points to a struct obd_device, defined in obd.h. The obd_device structure in turn has an obd_type field that records the device type and its operations table, which is how the call reaches the MGC implementation. Other MGC obd operations retrieve the export through helper functions in a similar fashion.
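The function-pointer dispatch used by the obd_ops structure and the OBP macro can be mimicked in a few lines: generic code resolves a device’s operations table and invokes the named operation without knowing the device type. This analogy (all names below are invented) mirrors the C mechanism only in spirit:

```python
# Each obd device type supplies its own operations table; generic code
# dispatches through the table, analogous to Lustre's OBP macro.
mgc_ops = {"get_info": lambda dev, key: f"mgc[{dev}] info for {key}"}
mdc_ops = {"get_info": lambda dev, key: f"mdc[{dev}] info for {key}"}

# Device name -> operations table, standing in for obd_device/obd_type.
devices = {"MGC10.1.0.1@tcp": mgc_ops, "lustre-MDT0000-mdc": mdc_ops}

def obd_get_info(dev_name, key):
    # Generic entry point: look up the device's table, then its operation.
    return devices[dev_name]["get_info"](dev_name, key)

print(obd_get_info("MGC10.1.0.1@tcp", "some_key"))
```

The caller never names an MGC- or MDC-specific function; the device it holds determines which implementation runs, which is exactly the abstraction the obd layer provides.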

In the following Sections we describe some of the important MGC obd operations in detail.

mgc_setup
mgc_setup() is the initial routine that gets executed to start and set up the MGC obd device. In Lustre, MGC is the first obd device set up as part of the obd device life cycle. To understand when mgc_setup() gets invoked in the obd device life cycle, let us explore the workflow starting from Lustre module initialization.



The Lustre module initialization begins from the  routine defined in   (shown in Figure 8). This routine is invoked when the  module gets loaded. It invokes  , which registers   as a file system and adds it to the list of file systems the kernel is aware of for mount and other syscalls. The lustre_fs_type structure is defined in the same file as shown in Source Code 3.

When a user mounts Lustre, the  gets invoked, as is evident from this structure. It is defined in the same file and in turn calls the  routine. The  invokes its callback function  , which is also defined in. This is the entry point for the mount call from the Lustre client into Lustre.

<span id="code:lustre_fs_type" label="code:lustre_fs_type"> Source code 3: lustre_fs_type structure defined in llite/super25.c

It invokes  defined in. This sets up the MGC obd device to start processing startup logs. The  routine called here starts the MGC obd device (defined in  ). This eventually leads to the invocation of the obdclass specific routines  and   (described in detail in Section 5) with the help of a   routine that takes the obd device name and a Lustre configuration command  as arguments. Various Lustre configuration commands are  and so on. These are defined in include/uapi/linux/lustre/lustre_cfg.h as shown in Source Code 4.

<span id="code:lcfg_command_type" label="code:lcfg_command_type"> Source code 4: Lustre configuration commands defined in include/uapi/linux/lustre/lustre_cfg.h

The first  passed to the   routine is  , which results in the invocation of the obdclass function. We will describe  in detail in Section 5. The second  passed to the   function is  , which eventually results in the invocation of. It calls  (defined in  ) and passes the   that it received. In the case of the  command, the   routine gets invoked. This is defined in the same file, and its primary duty is to create hashtables and a self-export and to call the obd device specific setup. The device specific setup call is in turn invoked through another routine called. It is defined in  as an inline function in the same way   is defined. It calls the device specific setup routine with the help of the  macro (refer to Section 4.3 and Figure 6). Here, in the case of the MGC obd device, the  defined as part of the   structure (shown in Source Code 2) gets invoked by the   routine. Note that the yellow colored blocks in Figure 8 will be referenced again in Section 5 to illustrate the life cycle of the MGC obd device.
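The command-driven dispatch described above can be modeled with a small switch over configuration commands. This is a sketch under invented names (`mini_lcfg_cmd`, `mini_process_config`, and the stage functions are all illustrative, not Lustre's real `LCFG_*` processing):

```c
#include <assert.h>

/* Hypothetical miniature of configuration-command dispatch: a record
 * carries a command, and a switch routes it to the matching
 * life-cycle stage of the device. */
enum mini_lcfg_cmd {
        MINI_LCFG_ATTACH,
        MINI_LCFG_SETUP,
        MINI_LCFG_CLEANUP,
        MINI_LCFG_DETACH,
};

struct mini_dev {
        int attached;
        int set_up;
};

static int mini_attach(struct mini_dev *d)  { d->attached = 1; return 0; }
static int mini_setup(struct mini_dev *d)   { d->set_up = 1;   return 0; }
static int mini_cleanup(struct mini_dev *d) { d->set_up = 0;   return 0; }
static int mini_detach(struct mini_dev *d)  { d->attached = 0; return 0; }

/* Route one configuration command to the right stage. */
static int mini_process_config(struct mini_dev *d, enum mini_lcfg_cmd cmd)
{
        switch (cmd) {
        case MINI_LCFG_ATTACH:  return mini_attach(d);
        case MINI_LCFG_SETUP:   return mini_setup(d);
        case MINI_LCFG_CLEANUP: return mini_cleanup(d);
        case MINI_LCFG_DETACH:  return mini_detach(d);
        }
        return -1;
}
```

Feeding the attach command followed by the setup command walks the device through the same two stages the text describes for the MGC.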

mgc_setup Operation
mgc_setup first adds a reference to the underlying Lustre PTL-RPC layer. Then it sets up an RPC client for the obd device using  (defined in  ). Next,  initializes the Lustre logs, which will be processed by the MGC at the MGS server. These logs are also sent to the Lustre client, and the client side MGC mirrors these logs to process the data. The tunable parameters persistently set at the MGS are sent to the MGC, and the Lustre logs processed at the MGC initialize these parameters. In Lustre the tunables have to be set before the Lustre logs are processed, and  helps to initialize these tunables. A few examples of the tunables set by this function are  ,   and  ; they can be viewed in the   directory by logging into any Lustre client. mgc_setup also starts an  which keeps reading the Lustre logs as entries come in. A flowchart showing the  workflow is shown in Figure 10.

Lustre Log Handling
Lustre makes extensive use of logging for recovery and distributed transaction commits. The logs associated with Lustre are called  , and config logs, startup logs, and change logs are the various kinds of. As described in Section 3.2.4, the  utility can be used to read these Lustre logs. When a Lustre target registers with the MGS, the MGS constructs a log for the target. Similarly, a  log is created for the Lustre client when it is mounted. When a user mounts the Lustre client, it triggers the download of the Lustre config logs to the client. As described earlier, the MGC subsystem is responsible for reading and processing the logs and sending them to Lustre clients and Lustre servers.

Log Processing in MGC


The  routine described in Section 4.4 makes a call to the   function defined in. This function initializes a config log instance specific to the super block passed from. Since the same MGC may be used to follow multiple config logs (e.g., ost1, ost2, Lustre client), the config log instance is used to keep the state for a specific log. Afterwards,  invokes  , which gets a config log from the MGS and starts processing it. It gets called for both Lustre clients and Lustre servers, and it continues to process new statements appended to the logs. It first resets and allocates  (which temporarily store log data) and calls  , which eventually invokes the obd device specific   (as shown in Figure 9) with the help of the   macro. The  passed to   is  , which gets the config log from the MGC, starts processing it, and adds the log to the list of logs to follow. , defined in the same file, accomplishes the task of adding the log to the list of active logs watched for updates by the MGC. A few other important log processing functions in the MGC are  (which gets a configuration log from the MGS and processes it),   (called if the Lustre client was notified of target restarting by the MGS), and   (which applies the logs after recovery).

mgc_precleanup and mgc_cleanup


Cleanup functions are important in Lustre in the case of file system unmounting or any unexpected errors during file system setup. The  routine defined in   starts the process of shutting down an obd device. This invokes  (through  ), which makes sure that all the exports are destroyed before shutting down the obd device. It first decrements the  that was incremented during. The  keeps count of the running MGC threads and makes sure not to shut down any threads prematurely. Next it waits for any requeue thread to complete and calls. This destroys the client side import interface of the obd device. Finally,  invokes  , which cleans up the Lustre logs associated with the MGC. The log cleaning is accomplished by the  routine defined in.

The  function deletes the profiles for the last MGC obd using  defined in. When the MGS sends a buffer of data to the MGC, the Lustre profiles help identify the intended recipients of the data. Next, the  routine (defined in  ) removes the   and   entries for the obd device. It then decrements the reference to the PTL-RPC layer and finally calls. This function (defined in ) makes the obd namespace point to NULL, destroys the client side import interface, and finally frees up the obd device using the   macro. Figure 10 shows the workflows for both the setup and cleanup routines in MGC in parallel. The  routine defined in   starts the MGC shutdown process. Note that after the,   and   hashtables are freed up and destroyed. The  HT stores uuids for different obd devices, whereas the  HT stores ptl-rpc network connection information.

mgc_import_event
The  function handles the events reported at the MGC import interface. The types of import events identified by the MGC are listed in the obd_import_event enum defined in include/lustre_import.h as shown in Source Code 5. Client side imports are used by clients to communicate with the exports on the server (for instance, if the MDS wants to communicate with the MGS, the MDS will use its client import to communicate with the MGS’s server side export). A more detailed description of the import and export interfaces on an obd device is given in Section 5.

<span id="code:obd_import_event" label="code:obd_import_event"> Source code 5: obd_import_event enum defined in include/lustre_import.h

Some of the remaining obd operations for the MGC, such as  ,  , and  , will be explained in the obdclass and ldlm Sections.

Introduction
The obdclass subsystem in Lustre provides an abstraction layer that allows generic operations to be applied to Lustre components without knowledge of the specific components. MGC, MDC, OSC, LOV, and LMV are examples of obd devices in Lustre that make use of the obdclass generic abstraction layer. The obd devices can be connected in different ways to form client-server pairs for internal communication and data exchange in Lustre. Note that the client and server referred to here are service roles temporarily assumed by the obd devices, not physical nodes representing Lustre clients and Lustre servers.

Obd devices in Lustre are stored internally in the obd_devs array defined in obdclass/genops.c as shown in Source Code 6. The maximum number of obd devices in Lustre per node is limited by MAX_OBD_DEVICES defined in include/obd.h (shown in Source Code 7). The obd devices in the obd_devs array are indexed using a minor number (see Source Code 8). An obd device can be identified by its minor number, name, or uuid. A uuid is a unique identifier that Lustre assigns to obd devices. The  utility (described in Section 3.2.3) can be used to view all local obd devices and their uuids on Lustre clients and Lustre servers.

<span id="code:obd_devs" label="code:obd_devs"> Source code 6: obd_devs array defined in obdclass/genops.c

<span id="code:max_obd_devices" label="code:max_obd_devices"> Source code 7: MAX_OBD_DEVICES defined in include/obd.h
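The registry scheme above (a fixed-size array whose slot index doubles as the device's minor number, plus lookup by name) can be sketched as a toy model. `MINI_MAX_OBD` and both helpers are invented names that merely imitate the obd_devs / class_name2dev idea:

```c
#include <assert.h>
#include <string.h>

/* Toy model of the obd_devs registry: devices occupy slots in a
 * fixed-size array, and the slot index serves as the minor number. */
#define MINI_MAX_OBD 8

struct mini_obd {
        char name[32];
        int  used;
};

static struct mini_obd mini_obd_devs[MINI_MAX_OBD];

/* Find a free slot, fill it, and return its index (the "minor"). */
static int mini_register_device(const char *name)
{
        for (int i = 0; i < MINI_MAX_OBD; i++) {
                if (!mini_obd_devs[i].used) {
                        strncpy(mini_obd_devs[i].name, name,
                                sizeof(mini_obd_devs[i].name) - 1);
                        mini_obd_devs[i].used = 1;
                        return i;
                }
        }
        return -1;              /* registry is full */
}

/* Look a device up by name and return its minor number, or -1. */
static int mini_name2minor(const char *name)
{
        for (int i = 0; i < MINI_MAX_OBD; i++)
                if (mini_obd_devs[i].used &&
                    strcmp(mini_obd_devs[i].name, name) == 0)
                        return i;
        return -1;
}
```

The same array supports lookup by minor number (direct indexing) and by name (linear scan), mirroring the identification options the text lists.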

obd_device Structure
The structure that defines an obd device is shown in Source Code 8.

<span id="code:obd_device" label="code:obd_device"> Source Code 8: obd_device structure defined in include/obd.h

The first field in this structure is a pointer to the obd_type structure (shown in Source Code 11) that defines the type of the obd device - a metadata device, a bulk data device, or both. The  field is used to identify data corruption in an obd device: Lustre assigns a magic number to the obd device during its creation phase and later asserts it in different parts of the source code to ensure data integrity. As described in the previous Section,  is the index of the obd device in the obd_devs array. An  entry indicates if the obd device is a real device such as an   or   type of (block) device. The  and  fields hold the uuid and name of the obd device, as the field names suggest. The structure also includes various flags to indicate the current status of the obd device. Some of these are  (completed attach),   (finished setup),   (recovery expired),   (started cleanup),   (started setup), and so on. The  and  fields are the   and   hash tables for the obd device, respectively. An obd device is also associated with several linked lists pointing to  ,  ,   and. Some of the remaining relevant fields of this structure are kset and kobject device model abstractions, timeouts for recovery, proc entries, a directory entry, and procfs and debugfs variables.

MGC Life Cycle
As described in Section 4, the MGC is the first obd device set up and started by Lustre in the obd device life cycle. To understand the life cycle of the MGC obd device, let us start from the generic file system mount function. It is directly invoked by the  system call from the user and handles the generic portion of mounting a file system. It then invokes the file system specific mount function, that is, lustre_mount in the case of Lustre. The lustre_mount function defined in llite/llite_lib.c invokes the kernel function   as shown in Source Code 9, which invokes   as its callback function.

<span id="code:lustre_mount" label="code:lustre_mount"> Source code 9: lustre_mount function defined in llite/llite_lib.c

The  function is the entry point for the mount call into Lustre. This function initializes the Lustre superblock, which is used by the MGC to write a local copy of the config log. The  routine calls  , which initializes a config log instance specific to the superblock. The config_llog_instance structure is defined in include/obd_class.h as shown in Source Code 10. The  field in this structure is unique to this superblock. This unique  is obtained using the   function defined in. The  structure also has a uuid (obtained from the   field of the   structure defined in  ) and a callback handler defined by the function. We will come back to this callback handler later in the MGC life cycle process. The color coded blocks in Figure 11 were also part of the  call graph shown in Figure 8 in Section 4.

<span id="code:config_llog_instance" label="code:config_llog_instance"> Source code 10: config_llog_instance structure is defined in include/obd_class.h



The file system name field of the   structure is populated by copying the profile name obtained using the   function. , defined in  , obtains the profile name corresponding to the mount command issued by the user from the   structure.

Then  invokes the   function (see Figure 11), which gets the config logs from the MGS and starts processing them. This function is called from both Lustre clients and Lustre servers, and it will continue to process new statements appended to the logs. It is defined in. The three parameters passed to this function are the superblock, the logname, and the config log instance. The config instance is unique to the superblock, which is used by the MGC to write the local copy of the config log, and the logname is the name of the llog to be replicated from the MGS. The config log instance is used to keep the state for the specific config log (which can be from ost1, ost2, a Lustre client, etc.) and is added to the MGC’s list of logs to follow. It then calls  , which uses the   macro (refer to Section 4.3) to call the MGC specific   function. This gets the config log from the MGS and processes it to start any services. Logs are also added to the list of logs to watch.

We now describe the detailed workflow of  by describing the functionality of each sub-function that it invokes. The  function categorizes the data in the config log based on whether the data is related to the ptl-rpc layer, configuration parameters, nodemaps, or barriers. The log data related to each of these categories is then copied to memory using the  function. It next calls  , which gets a config log from the MGS and processes it. This function is called for both Lustre clients and Lustre servers to process the configuration log from the MGS. The MGC enqueues a DLM lock on the log from the MGS; if the lock gets revoked, the MGC will be notified by the lock cancellation callback that the config log has changed, will enqueue another MGS lock on it, and will then continue processing the new additions to the end of the log. Lustre prevents multiple processes from updating the same log at the same time. It then calls the   function, which reads the log and creates a local copy of the log on the Lustre client or Lustre server. This function first initializes an environment and a context using  and   respectively. The  routine is used to create a local copy of the log with the environment and context previously initialized. Real time changes in the log are parsed using the  function. In read only mode there will be no local copy, or the local copy will be incomplete, so Lustre will try to use the remote llog first.

The  function is defined in. The arguments passed to this function are the environment, context, and config log instance initialized in the  function, and the config log name. The first log parsed by the  function is. It contains configuration information for various Lustre file system components, obd devices, and the file system mounting process. It first acquires a lock on the log to be parsed using a handler function. It then continues processing the log from where it last stopped to the end of the log. To process the logs, this function uses two entities: (1) an index to parse through the data in the log, and (2) a callback function that processes and interprets the data. The callback function can be a generic handler function like  or it can be customized. Note that this is the callback handler initialized by the  structure as previously mentioned in Source Code 10. Additionally, the callback function provides a config marker functionality that allows special flags to be injected for selective processing of data in the log. The callback handler also initializes  to temporarily store the log data. Afterwards, the following actions take place in this function: translate log names to obd device names, append a uuid to the obd device name for each Lustre client mount, and finally attach the obd device.
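The index-plus-callback processing described above can be sketched as follows. All names (`mini_llog`, `mini_llog_process`, `last_idx`) are invented for illustration; the point is that processing resumes from the last consumed index and a pluggable callback interprets each record:

```c
#include <assert.h>

/* Sketch of index-plus-callback log processing: parsing resumes from
 * the last consumed index, and a callback handles each record. */
typedef int (*mini_llog_cb)(const char *rec, void *data);

struct mini_llog {
        const char **recs;      /* log records */
        int          count;     /* total records present */
        int          last_idx;  /* next record to process */
};

static int mini_llog_process(struct mini_llog *log, mini_llog_cb cb,
                             void *data)
{
        for (; log->last_idx < log->count; log->last_idx++) {
                int rc = cb(log->recs[log->last_idx], data);
                if (rc != 0)
                        return rc;      /* callback aborts processing */
        }
        return 0;
}

/* A trivial callback that just counts the records it sees. */
static int mini_count_cb(const char *rec, void *data)
{
        (void)rec;
        (*(int *)data)++;
        return 0;
}
```

A second call on the same log processes nothing new, which mirrors the "continue from where it last stopped" behavior the text describes.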

Each obd device then sets up a key to communicate with other devices through the secure ptl-rpc layer. The rules for creating this key are stored in the config log. The obd device then creates a connection for communication. Note that the start log contains all state information for all configuration devices, and the lustre configuration buffer stores this information temporarily. The obd device then uses this buffer to consume log data. The start log resembles a virtual log file and is never stored on disk. After creating a connection, the handler performs data mining on the logs to extract the information (uuid, nid, etc.) required to form Lustre. The parameter  passed with the   function decides what type of information should be parsed from the logs. For instance,  instructs the handler to scan obd device configuration information, and   asks it to parse for changelog records. Using the extracted nid and uuid information about the obd device, the handler now invokes the  routine. This function repeats the cycle of obd device creation for other obd devices. Notice that the only obd device that exists in Lustre at this point in the life cycle is the MGC. The  function calls the generic obd class functions such as ,  ,   depending upon the   that it receives for a specific obd device.

Obd Device Life Cycle
In this Section, we describe the workflow of various obd device life cycle functions such as  ,  ,  , and.

class_attach
The first method called in the life cycle of an obd device is  , and the corresponding Lustre config command is. The  method is defined in. It registers the obd device and adds it to the list of obd devices. The list of obd devices is defined in  using. The attach function first checks if the obd device type being passed is valid. The obd_type structure is defined in include/obd.h (as shown in Source Code 11). Two types of operations defined in this structure are  (i.e., data operations) and   (i.e., metadata operations). These operations determine whether the obd device is destined to perform data operations, metadata operations, or both.

The  field of   structure makes sense only for real block devices such as   and   osd devices. Furthermore the  differentiates metadata and data devices using the tags   and   respectively. An example of an  structure defined for   is shown in Source Code 12.

<span id="code:obd_type" label="code:obd_type"> Source code 11: obd_type structure defined in include/obd.h

<span id="code:osd_device_type" label="code:osd_device_type"> Source code 12: lu_device_type structure for ldiskfs osd_device_type defined in osd-ldiskfs/osd_handler.c



The  then calls a   function, which creates and allocates a new obd device and initializes it. A complete workflow of the  function is shown in Figure 12. The  function invoked by   registers the already created obd device and loads the obd device module. All loaded obd devices have metadata or data operations (or both) defined for them. For instance, the LMV obd device has its  and   defined in the structures   and   respectively. These structures and the associated operations can be seen in the  file. The  initialized here is the index of the obd device in the   array.

The obd device then creates a self export using the  function. The  function invokes a   function, which creates a new export, adds it to the hash table of exports, and returns a pointer to it. Note that a self export is created only for a client obd device. The reference count for this export when created is 2: one for the hash table reference and one for the pointer returned by this function itself. This function populates the obd_export structure defined in include/lustre_export.h (shown in Source Code 13). The various fields associated with this structure are explained in the next Section. The two functions used to increment and decrement the reference count for obd devices are  and   respectively. The last part of  is registering the obd device in the   array, which is done through the   function. This function assigns a minor number to the obd device that can be used to look up the device in the array.
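The "created with a reference count of 2" rule above can be modeled with a tiny get/put sketch. The `mini_export_*` names are invented; they imitate the class_export_get/class_export_put idea, where the object is freed only when the last reference is dropped:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of export reference counting: a new export starts with
 * two references (the hash-table reference plus the returned
 * pointer), and the object is freed when the count reaches zero. */
struct mini_export {
        int exp_refcount;
};

static int mini_exports_freed;  /* for observing the free in tests */

static struct mini_export *mini_export_new(void)
{
        struct mini_export *exp = calloc(1, sizeof(*exp));
        exp->exp_refcount = 2;  /* hash reference + returned pointer */
        return exp;
}

static void mini_export_get(struct mini_export *exp)
{
        exp->exp_refcount++;
}

static void mini_export_put(struct mini_export *exp)
{
        if (--exp->exp_refcount == 0) {
                free(exp);
                mini_exports_freed++;
        }
}
```

Every holder balances its get with a put; the initial count of 2 means both the hash table and the creating caller must release their references before the export goes away.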

obd_export Structure
This Section describes some of the relevant fields of the obd_export structure (shown in Source Code 13), which represents a target side export connection (using the ptlrpc layer) for obd devices in Lustre. It is also used to connect layers on the same node when there is no network connection between the nodes. For every connected client there exists an export structure on the server attached to the same obd device. The various fields of this structure are described below.


 * - On connection establishment, the export handle id is provided to client and the subsequent client RPCs contain this handle id to identify which export they are talking to.
 * A set of counters, described below, is used to track where export references are kept.  is the number of RPC references,   counts commit callback references,   is the number of queued replay requests to be processed, and   keeps track of the number of lock references.

<span id="code:obd_export" label="code:obd_export"> Source code 13: obd_export structure defined in include/lustre_export.h


 * maintains a linked list of all the locks and  is the spinlock that protects this list.
 * is the UUID of client connected to this export.
 * links all the exports on an obd device.
 * is used when the export connection is destroyed.
 * The structure also maintains several hash tables to keep track of  ,   and last received messages in case of recovery from failure.
 * The obd device for this export is defined by the pointer.
 * - This defines the portal rpc connection for this export.
 * - This lists all the ldlm locks granted on this export.
 * This structure also has additional fields such as hashes for posix deadlock detection, time for last request received, linked list to replay all requests waiting to be replayed on recovery, lists for RPCs handled, blocking ldlm locks and special union to deal with target specific data.

class_setup


The primary duties of the  routine are to create the hashtables and self-export and to invoke the obd type specific setup function. As an initial step, this function obtains the obd device from the  array using the   number and asserts the   number to ensure data integrity. Then it sets the  flag to indicate that the setup of this obd device has started (refer to Source Code 8). Next, the  and   hashtables are set up using the Linux kernel builtin functions   and. For the  hashtable, Lustre uses its custom hashtable implementation, namely.
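The magic-number assertion mentioned above can be sketched as follows. The value `MINI_OBD_MAGIC` and all names here are invented purely for illustration; the real check asserts a magic stamped into the device at creation time:

```c
#include <assert.h>

/* Sketch of the magic-number sanity check performed during setup:
 * the device is stamped once at creation and verified before use. */
#define MINI_OBD_MAGIC 0x0bd00bd0u     /* invented value */

struct mini_obd_dev {
        unsigned int obd_magic;
        int          obd_starting;
};

static void mini_obd_create(struct mini_obd_dev *obd)
{
        obd->obd_magic = MINI_OBD_MAGIC;   /* stamped at creation */
        obd->obd_starting = 0;
}

/* Succeed and mark setup as started only if the stamp is intact. */
static int mini_obd_setup(struct mini_obd_dev *obd)
{
        if (obd->obd_magic != MINI_OBD_MAGIC)
                return -1;                 /* corruption detected */
        obd->obd_starting = 1;
        return 0;
}
```

Re-checking the stamp at entry points catches a stale or overwritten device structure before any further state is touched.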

A generic device setup function  defined in   is then invoked by  , passing the populated   structure and the corresponding lcfg command. This leads to the invocation of device specific setup routines from various subsystems such as  ,  ,   and so on. All of these setup routines invoke a  routine that acts as a pre-setup stage before the creation of imports for the clients, as shown in Figure 13. The  routine defined in   populates the client_obd structure defined in include/obd.h as shown in Source Code 14. Note that the  routine is called only in the case of client obd devices like osp, lwp, mgc, osc, and mdc.

<span id="code:client_obd" label="code:client_obd">Source Code 14: client_obd structure defined in include/obd.h

The client_obd structure is mainly used for page cache and extended attribute management. It comprises fields pointing to the obd device uuid and import interfaces, a counter to keep track of client connections, and fields representing the maximum and default extended attribute sizes. A few other fields used for cache handling are  (the LRU cache for caching OSC pages),   (available LRU slots per OSC cache),   (the number of busy LRU pages), and   (the number of LRU pages in the cache for this client_obd). Please refer to the source code for the additional fields in this structure.

The  then obtains an LDLM lock to set up the LDLM layer references for this client obd device. Further, it sets up the ptl-rpc request and reply portals using the  routine defined in. The  structure defines a pointer to the   structure defined in. The  structure represents ptl-rpc imports, which are the client-side view of remote targets. A new import connection for the obd device is created using the  function. The  method populates the obd_import structure defined in include/lustre_import.h as shown in Source Code 15.

The obd_import structure represents the client side view of a remote target. This structure mainly consists of fields representing the ptl-rpc layer client and the active connections on it, the client side ldlm handle, and various flags representing the status of imports such as  ,  , and. There are also linked lists pointing to lists of requests that are retained for replay, waiting for a reply, and waiting for recovery to complete.

The  then adds an initial connection for the obd device to the ptl-rpc layer by invoking the   method. This method uses the ptl-rpc layer specific routine  to return a ptl-rpc connection specific to the uuid passed for the remote obd device. Finally,  creates a new ldlm namespace for the obd device that it just set up using the   routine. This completes the setup phase in the obd device life cycle, and the newly set up obd device can now be used for communication between subsystems in Lustre.

<span id="code:obd_import" label="code:obd_import">Source code 15: obd_import structure defined in include/lustre_import.h

class_precleanup and class_cleanup


The Lustre unmount process begins from the  function defined as part of the   structure (shown in Source Code 16). The  function accepts a  , from which the metadata and data exports for the   are extracted using the   routine. The  flag from the   structure is set to indicate that cleanup will be performed even though the obd reference count is greater than zero. Then it periodically checks and waits until there are no outstanding requests from the VFS layer.

<span id="code:lustre_super_operations" label="code:lustre_super_operations">Source code 16: lustre_super_operations structure defined in llite/super25.c

The cleanup cycle then invokes the  routine defined in. This function obtains the  and the profile name corresponding to the   using the   and   functions respectively. Next, it invokes the  routine, passing the super block, the profile name, and a config llog instance initialized here. The  function defined in   stops following updates for the config log corresponding to the config llog instance passed. It resets the Lustre config buffers and calls  , passing the lcfg command   and the MGC as the obd device. This results in the invocation of  , which calls the   method when   is passed. The  finds the config log and stops watching for updates to the log.

Further,  invokes the   method, which iterates through the obd devices with the same group uuid and sets the   flag for all of them. Afterwards, it calls the  routine, which invokes the obdclass functions   and   in that order. The  is invoked through   by passing the   command.

This starts the shutdown process of the obd device. It first sets the  flag to indicate that cleanup has started and then waits for any already arrived connection requests to complete. Once all the requests are completed, it disconnects all the exports using the  function (shown in Figure 14). It then invokes the obd generic function  , which ensures that all exports get destroyed. It calls the device specific precleanup function (e.g., ).   then destroys the ,  , and   hashtables and invokes the   function, which asserts that all exports are destroyed.

It then invokes the  function by passing the   command. (defined in ) sets the   flag to zero and unregisters the device (freeing its slot in the   array) using the   function. Next, it invokes the  routine, which destroys the last export (the self export) by calling the   method. It calls  , which frees the obd device using the   function. It then calls the device specific cleanup through  and finally invokes the   routine that unloads the module. This is the end of the life cycle of the obd device. An end to end workflow of the  routine is illustrated in Figure 15.
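The teardown sequence above (precleanup, cleanup, detach, free, mirroring setup in reverse) can be sketched with a simple order-recording model. All `mini_` names are invented; only the ordering is the point:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the shutdown ordering: precleanup, then cleanup, then
 * detach, then free. The recorder captures the order of the calls. */
static const char *mini_calls[8];
static int mini_ncalls;

static void mini_record(const char *stage)
{
        mini_calls[mini_ncalls++] = stage;
}

static void mini_precleanup(void) { mini_record("precleanup"); }
static void mini_cleanup(void)    { mini_record("cleanup"); }
static void mini_detach(void)     { mini_record("detach"); }
static void mini_free_dev(void)   { mini_record("free"); }

/* Drive the whole shutdown in the documented order. */
static void mini_shutdown(void)
{
        mini_precleanup();      /* stop new activity, drain requests */
        mini_cleanup();         /* tear down exports and hashtables  */
        mini_detach();          /* unregister from the device array  */
        mini_free_dev();        /* release the last (self) export    */
}
```

Keeping the teardown as the exact reverse of setup is what makes it safe: each stage only removes state that no later stage still depends on.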

Imports and Exports




Obd devices in Lustre include lmv, lod, lov, mdc, mdd, mdt, mds, mgc, mgs, obdecho, ofd, osc, osd-ldiskfs, osd-zfs, osp, lwp, ost, and qmt. Among these, mdc, mgc, osc, osp, and lwp are client obd devices, meaning that two server obd device components such as mdt and ost need one client device to establish communication between them. This is also applicable in the case of a Lustre client communicating with Lustre servers. Client side obd devices consist of a self export and an import, whereas server side obd devices consist of exports and reverse imports. A client obd device sends requests to the server using its import, and the server receives requests using its export, as illustrated in Figure 16. The imports on server obd devices are called reverse imports because they are used to send requests to the client obd devices. These requests are mostly callback requests sent infrequently by the server to clients. The client uses its self export to receive these callback requests from the server.

For any two obd devices to communicate with each other, they need an import and export pair. For instance, let us consider the case of communication between the ost and mdt obd devices. Logging into an OSS node and running  shows the obd devices on the node and associated details (obd device status, type, name, uuid, etc.). Examining the  directory can also show the obd devices corresponding to various device types. An example of the name of an obd device created for the data exchange between OST5 and MDT2 will be. This means that the client obd device that enables the communication here is lwp. A conceptual view of the communication between ost and mdt through import and export connections is shown in Figure 17. The LWP (Light Weight Proxy) obd device manages connections established from ost to mdt, and from mdts to mdt0. An lwp device is used in Lustre to send quota and FLD query requests (see Section 7). Figure 17 also shows the communication between mdt and ost through the osp client obd device.
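The import/export pairing above can be captured in a minimal model. The structures and functions here are invented illustrations: a client import sends requests into the peer's export, and the server's reverse import sends its occasional callbacks into the client's self export through exactly the same mechanism:

```c
#include <assert.h>

/* Toy model of the import/export pairing: an import is the sending
 * side of a connection, and its peer export is the receiving side. */
struct mini_export {
        int requests_received;
};

struct mini_import {
        struct mini_export *imp_peer;   /* the export this import talks to */
};

/* Deliver one request from an import to its peer export. */
static void mini_import_send(struct mini_import *imp)
{
        imp->imp_peer->requests_received++;
}
```

The symmetry is the point: the server-to-client callback path is not a special mechanism, just another import (the reverse import) whose peer happens to be the client's self export.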

The obdfilter directory from  lists the osts present on the OSS node. All of these osts have their export connections listed in nid format in their respective  directories. The export connection information is stored in a file called  in each export connection's directory. Viewing the  file corresponding to MDT2 shows the following fields.


 * : Shows the name of the ost device.
 * : The nid of the client export connection. (nid of MDT2 in this example.)
 * : Flags representing various configurations for the lnet and ptl-rpc connections between the obd devices.
 * : Includes fields such as.
 * : Configuration flags for export connection.
 * : Represents target specific export data.

Useful APIs in Obdclass
All obdclass related function declarations are listed in the file  and their definitions can be seen in  . Here we list some of the important obdclass function prototypes and their purpose for quick reference.


 * - Creates a new obd device, allocates and initializes it.
 * - Frees an obd device.
 * - Unregisters an obd device by freeing its slot in the  array.
 * - Registers an obd device by finding a free slot in the  array and filling it with the new obd device.
 * - Returns the minor number corresponding to an obd device name.
 * - Returns a pointer to an  structure corresponding to the device name.
 * - Returns the minor number of an obd device when a uuid is provided.
 * - Returns an obd_device structure pointer corresponding to a uuid.
 * - Returns the  structure corresponding to a minor number.
 * - Finds an obd device in the  array by name or uuid. Also increments the obd reference count if it is found.
 * - Gets the count of the obd devices in any state.
 * - Searches for a client obd connected to a target obd device.
 * - Destroys an export connection of an obd device.
 * - Creates a new export for an obd device and adds it to the hash table of exports.

Introduction
Libcfs provides APIs comprising fundamental primitives for process management and debugging support in Lustre. Libcfs is used throughout LNet, Lustre, and the associated utilities. Its APIs define a portable run time environment that is implemented consistently on all supported build targets. Besides debugging support, libcfs provides APIs for failure injection, Linux kernel compatibility, data encryption, Linux 64 bit time support, log collection using tracefile, string parsing support, and capabilities for querying and manipulating CPU partition tables. Libcfs is the first module that Lustre loads. The module loading function can be found in the  script as shown in Source Code 17. When Lustre is mounted, the  function gets invoked, and it calls the   function. This invokes  , which loads the Lustre modules libcfs, lnet, obdclass, ptl-rpc, fld, fid, and lmv in that order.

In the following sections we describe the libcfs APIs and functionality in detail.

Data Encryption Support in Libcfs
Lustre implements two types of encryption capabilities: data on the wire and data at rest. Encryption over the wire protects data transfers between the physical nodes from man-in-the-middle attacks, whereas the objective of encrypting data at rest is protection against storage theft and network snooping. Lustre releases 2.14 and later provide encryption for data at rest. Data is encrypted on the Lustre client before being sent to the servers and decrypted upon reception from the servers. That way, applications running on the Lustre client see clear text while the servers see only encrypted text. Hence, access to the encryption keys is limited to Lustre clients.

<span id="code:libcfs_loading" label="code:libcfs_loading">Source code 17: Libcfs module loading script (tests/test-framework.sh)

Data (at rest) encryption related algorithm and policy flags and data structures are defined in . The encryption algorithm macros are shown in Source Code 18. The definition of the encryption key structure, shown in Source Code 19, includes name, raw key, and size fields. The maximum size of the encryption key is limited to . This file also contains ioctl definitions to add and remove encryption keys and to obtain the encryption policy and key status.

<span id="code:encrypt_algo" label="code:encrypt_algo">Source code 18: Encryption algorithm macros defined in libcfs/include/uapi/linux/llcrypt.h

While the userland headers for data encryption are listed in , the corresponding kernel headers can be found in . Some of the kernel APIs for data encryption are shown in Source Code 20. The definitions of these APIs can be found in .

Support functions for data encryption are defined in  file. These include:


 * - Releases a decryption context.
 * - Gets a decryption context.
 * - Frees a ciphertext bounce page.
 * - Encrypts or decrypts a single file system block of file contents.
 * - Encrypts file system blocks from a page cache page.
 * - Encrypts a file system block in place.
 * - Decrypts file system blocks in a page cache page.
 * - Decrypts a file system block in place.

Setup and cleanup functions for file system encryption are also defined here.  implements functions to encrypt and decrypt filenames, allocate and free buffers for file name encryption, and convert a file name from disk format to user space.  and  implement functions to manage cryptographic master keys, and  provides APIs to find supported policies, check the equivalence of two policies, and manage policy contexts.

<span id="code:llcrypt_key" label="code:llcrypt_key">Source code 19: llcrypt_key structure defined in libcfs/include/uapi/linux/llcrypt.h

APIs and data structures that provide support for data encryption over the wire are listed in . The data structures include definitions for the hash algorithm type, name, size, and key, and an enum for the various hash algorithms such as , etc. Key APIs that help with data encryption are listed below.


 * - This function returns hash algorithm related information for the specified algorithm identifier. Hash information includes algorithm name, initial seed and hash size.
 * - This returns hash name for hash algorithm identifier.
 * - Returns digest size for hash algorithm type.
 * - Finds hash algorithm ID for the specified algorithm name.
 * - Returns crypt algorithm information for the specified algorithm identifier.
 * - Returns crypt name for crypt algorithm identifier.
 * - Returns key size for crypto algorithm type.
 * - Finds crypto algorithm ID for the specified algorithm name.
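The identifier-to-information lookups in the list above all follow the same table-driven pattern: a static table maps each algorithm ID to its name and digest size, and the helpers scan it in either direction. The sketch below illustrates that pattern; the table contents, struct, and function names are hypothetical.

```c
/* Sketch of the id <-> {name, digest size} lookup pattern behind the
 * over-the-wire hash helpers. Table contents and names are illustrative,
 * not the actual libcfs definitions. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct hash_info_sketch {
    int         id;
    const char *name;
    unsigned    digest_size;   /* digest size in bytes */
};

static const struct hash_info_sketch hash_table[] = {
    { 0, "null",    0  },
    { 1, "adler32", 4  },
    { 2, "crc32",   4  },
    { 3, "sha1",    20 },
    { 4, "sha256",  32 },
};

/* Return hash information for an algorithm identifier, or NULL. */
static const struct hash_info_sketch *hash_info_by_id(int id)
{
    for (size_t i = 0; i < sizeof(hash_table) / sizeof(hash_table[0]); i++)
        if (hash_table[i].id == id)
            return &hash_table[i];
    return NULL;
}

/* Find the algorithm ID for a given algorithm name, or -1. */
static int hash_id_by_name(const char *name)
{
    for (size_t i = 0; i < sizeof(hash_table) / sizeof(hash_table[0]); i++)
        if (strcmp(hash_table[i].name, name) == 0)
            return hash_table[i].id;
    return -1;
}
```

The crypt-algorithm helpers follow the same shape, with a key size in place of the digest size.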

The current version of Lustre supports only file name encryption; in the future, Lustre plans to extend the encryption capability to file contents as well.

CPU Partition Table Management
Libcfs includes APIs and data structures that help with CPU partition table management in Lustre. A CPU partition is a virtual processing unit that can comprise 1-N cores or 1-N NUMA nodes. A CPU partition can therefore be viewed as a pool of processors.

<span id="code:kernel_encr_apis" label="code:kernel_encr_apis">Source code 20: Kernel APIs for data encryption defined in libcfs/include/libcfs/crypto/llcrypt.h

A CPU partition table (CPT) consists of a set of CPU partitions. CPTs have two modes of operation, NUMA and SMP, denoted by  and  respectively. Users can specify the total number of CPU partitions when creating a CPT, and the ID of a CPU partition always starts from 0. For example, if there are 8 cores in the system, a CPT can be created as follows:

with cpu_npartitions=4:

  core[0, 1] = partition[0], core[2, 3] = partition[1],
  core[4, 5] = partition[2], core[6, 7] = partition[3]

with cpu_npartitions=1:

  core[0, 1, ... 7] = partition[0]

Users can also specify CPU partitions by a string pattern:

  cpu_partitions="0[0,1], 1[2,3]"
  cpu_partitions="N 0[0-3], 1[4-8]"

The first character "N" means the following numbers are NUMA IDs. By default, Lustre modules should refer to the global  instead of accessing hardware CPUs directly, so the concurrency of Lustre can be configured by the  of the global .
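The cpu_npartitions example above divides the cores evenly across partitions. A simplified model of that mapping, assuming the core count divides evenly (the real libcfs code also handles NUMA topology and uneven splits), is:

```c
/* Simplified model of the core-to-partition mapping from the
 * cpu_npartitions example above: with 8 cores and 4 partitions,
 * cores [0,1] land in partition 0, cores [2,3] in partition 1, etc.
 * This is a sketch, not the libcfs implementation. */
#include <assert.h>

static int core_to_partition(int core, int ncores, int npartitions)
{
    int cores_per_partition = ncores / npartitions; /* assumes even division */
    return core / cores_per_partition;
}
```

With `ncores=8` and `npartitions=1`, every core maps to partition 0, matching the second example above.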

Source Code 21 and Source Code 22 show the data structures that define a CPU partition and a CPT. A CPU partition consists of fields representing the CPU mask and node mask for the partition, the NUMA distance between CPTs, a spread rotor for the NUMA allocator, and the NUMA node if  is empty. The number of CPU partitions, the structure representing the partition tables, and the masks representing all CPUs and nodes are the significant fields in the  structure.

<span id="code:cfs_cpu_partition" label="code:cfs_cpu_partition">Source code 21: cfs_cpu_partition structure defined in libcfs/libcfs/libcfs_cpu.c

<span id="code:cfs_cpt_table" label="code:cfs_cpt_table">Source code 22: cfs_cpt_table structure defined in libcfs/libcfs/libcfs_cpu.c

Libcfs provides the following APIs to access and manipulate CPU partitions and CPTs.


 * - Allocates a CPT given the number of CPU partitions.
 * - Frees a CPT corresponding to the given reference.
 * - Prints a CPT corresponding to the given reference.
 * - Returns number of CPU partitions in a CPT.
 * - Returns the number of online CPTs.
 * - Calculates the maximum NUMA distance between all nodes in the from_mask and all nodes in the to_mask.

Additionally, libcfs includes functions to initialize and remove CPUs, set and unset node masks, and add and delete CPUs and nodes. Per-CPU data and partition variable management functions are located in the  file.

Debugging Support and Failure Injection
The Lustre debugging infrastructure contains a number of macros that can be used to report errors and warnings. The debugging macros are defined in .  are examples of the debugging macros. A complete list of the debugging macros and their detailed descriptions can be found at this link.

Failure macros defined in  are used to deliberately inject failure conditions into Lustre for testing purposes.  are examples of such failure macros (see Source Code 23). The libcfs module defines the failure macros starting with the keyword , whereas Lustre redefines them in the  file starting with the keyword . The hex values representing these failure macros are used in the  command to inject specific failures. Instances of  macro usage can be seen in the  file.

<span id="code:CFS_FAIL" label="code:CFS_FAIL">Source code 23: CFS_FAIL macros defined in libcfs/include/libcfs/libcfs_fail.h
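The failure injection mechanism boils down to a global fail location that test tooling sets to a hex value, plus a check macro placed at strategic points in the code. The sketch below illustrates the idea; the variable, macro, and failure ID are hypothetical stand-ins for the real libcfs definitions.

```c
/* Sketch of fail_loc-style failure injection: a global fail location is
 * set by test tooling and checked at strategic points in the code.
 * The names and the 0x115 failure ID here are illustrative only. */
#include <assert.h>

static unsigned long sketch_fail_loc; /* stands in for the global fail_loc */

/* Check whether a specific failure has been injected. */
#define SKETCH_FAIL_CHECK(id) ((sketch_fail_loc) == (unsigned long)(id))

/* Example code path that bails out when the failure is injected. */
static int do_operation(void)
{
    if (SKETCH_FAIL_CHECK(0x115))   /* hypothetical failure id */
        return -5;                  /* simulate an I/O error */
    return 0;
}
```

In real deployments the fail location is set through the administrative command mentioned above, and the affected code path then fails deterministically, which makes error-handling paths testable.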

Additional Supporting Software in Libcfs
Files located in  furnish additional supporting software for Lustre: 64-bit time, atomics, extended arrays, and spin locks.


 * - Implementation of a portable time API for Linux, for both kernel and user level.
 * - Implements a variant of  specialized for reference counts.
 * - Implementation of large array of pointers that has the functionality of resizable arrays.
 * - Provides,   and   capabilities for spin locks.

File Identifier (FID)
Lustre refers to all the data that it stores as objects. This includes not only the individual components of a striped file but also such things as directory entries, internal configuration files, etc. To identify an object, Lustre assigns a File IDentifier (FID) to the object that is unique across the file system. A FID is a 128-bit number that consists of three components: a 64-bit sequence number, a 32-bit object ID, and a 32-bit version number. The data structure for a FID is shown in Source Code 24. As noted in the code, the version number is not currently used but is reserved for future purposes.

<span id="code:struct_lu_fid" label="code:struct_lu_fid"> Source Code 24: FID structure (include/uapi/linux/lustre/lustre_user.h)
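As a rough in-memory sketch of the layout just described, the three FID components pack into 128 bits; the field names below mirror the upstream structure, but this is a sketch, and the authoritative definition is the one in Source Code 24.

```c
/* Sketch of the 128-bit FID layout described above: a 64-bit sequence
 * number, a 32-bit object ID, and a 32-bit version that is currently
 * unused. Field names mirror the upstream structure. */
#include <assert.h>
#include <stdint.h>

struct lu_fid_sketch {
    uint64_t f_seq;  /* sequence number, unique per storage target */
    uint32_t f_oid;  /* object ID within the sequence */
    uint32_t f_ver;  /* version, reserved for future use */
};
```

The 64/32/32 split means a single sequence can hold up to 2^32 objects before the client must request a new sequence number.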

Sequence numbers are controlled by the Lustre file system and allocated to clients. The entire space of sequence numbers is overseen by the sequence controller that runs on MDT0, and every storage target (MDTs and OSTs) runs a sequence manager. As the file system is started and the storage targets are brought online, each sequence manager contacts the sequence controller to obtain a unique range of sequence numbers (known as a super sequence). Every client that establishes a connection to a storage target will be granted a unique sequence number by the target’s sequence manager. This ensures that no two clients share a sequence number and that the same sequence number will always map to the same storage target.

When a client creates a new object on a storage target, the client allocates a new FID to use for the object. The FID is created by using the sequence number granted to the client by the storage target and adding a unique object ID chosen by the client. The client maintains a counter for each sequence number and increments that counter when a new object ID is needed. This combination of target-specific sequence number and client-chosen object ID (along with a version number of zero) is used to populate the  structure for the new object. It should be noted that FIDs are never reused within the same Lustre file system (with a few exceptions for special internal-only objects). If a client exhausts a sequence number and cannot create more FIDs, the client will contact the target and request a new sequence number.

It is important to understand that the use of the term “client” in this context does not just refer to Lustre file system clients that present the POSIX file system interface to end-users. A FID client is any node that is responsible for creating new objects, and this can include other Lustre servers. When a Lustre file system client uses the POSIX interface to create a new file, it will use a sequence number granted by an MDT target to construct a FID for the new file. This FID will be used to identify the object on the MDT that corresponds to this new file. However, the MDS server hosting the MDT will use the layout configuration for this new file to allocate objects on one or more OSTs that will contain the actual file data. In this scenario, the MDS is acting as a FID client to the OST targets. The MDS server will have been granted sequence numbers by the OST targets and use these sequence numbers to generate the FIDs that identify all the OST objects associated with the file layout.

Reserved Sequence Numbers and Object IDs
The sequence controller does not allocate certain sequence numbers to the sequence managers. These sequence numbers are reserved for special uses such as testing or compatibility with older Lustre versions. Information about these reserved sequence numbers can be found in . Below is a list of sequence ranges used by Lustre:

 * IGIF (Inode and Generation In FID): sequence range = [12, 2^{32} - 1]. This range is reserved for compatibility with older Lustre versions that previously identified MDT objects using the ext3 inode number on the backend file system. Since those inode values were only 32-bit integers, a FID can be generated for these older objects by simply using the inode number as the sequence number. Since ext3 reserves inodes 0-11 for internal purposes, those sequence numbers are used for other internal purposes by Lustre.

 * IDIF (object ID In FID): sequence range = [2^{32}, 2^{33} - 1]. This range is used for compatibility to distinguish OST objects allocated from MDT0000 with sequence 0. Bit 33 of the FID sequence is set to 1, and the OST index along with the high 16 bits of the object number are encoded into the lower 32 bits of the sequence number. The low 32 bits of the OST object ID are stored in the FID OID.

 * OST_MDT0: sequence = 0. Used to identify existing objects allocated by MDT0000 on OSTs formatted before the introduction of FID-on-OST.

 * LLOG: sequence = 1. Used internally for Lustre Log objects.

 * ECHO: sequence = 2. Used for testing OST IO performance, to avoid conflicting with any "real" data objects.

 * OST_MDT1 ... OSTMAX: sequence range = [3, 9]. Used for testing file systems with multiple MDTs prior to the release of DNE. These have never been used in production.

 * Normal Sequences: sequence range = [2^{33}, 2^{64} - 1]. This is the sequence range used in normal production and allocated to the sequence managers and clients. NOTE: The first 1024 sequence numbers in this range are reserved for system use.

The header file also contains some predefined object IDs that are used for local files such as user/group/project accounting logs, LFSCK checkpoints, etc. These are part of the  enumeration, a portion of which is shown in Source Code 25.

<span id="code:enum_local_oid" label="code:enum_local_oid"> Source Code 25: Portion of local_oid enumeration (include/lustre_fid.h)

Unless otherwise noted, the remainder of this chapter will focus on FIDs that use Normal Sequences or ones reserved for special internal objects. It will not deal with sequences reserved for compatibility reasons (IGIF, IDIF, etc).

Kernel Module
FID-related functions are built into the  kernel module. The source code for this module is located in the  directory. For Lustre clients, two files are used to build this module:  and . The  file just contains functions needed to support debugfs and won't be discussed in detail here.

The  file contains the core functions needed to support the FID client functionality. The module entry/exit points are  and , but these functions just call  and  to add/remove the necessary debugfs entries. The real initialization starts in the  function. This function is registered as part of the OBD operations to be invoked by the MDC and OSP subsystems. The function's main responsibility is to allocate memory for a  structure, which is then passed to  (an abbreviated version of which is shown in Source Code 26) where the structure is initialized. The cleanup routine starts in , which then calls . These two functions decrement the appropriate reference counts on other structures and free the memory allocated to the  structure.

There are only two other functions exported by the  module:   and. The  function is mainly used by the OSP subsystem when requesting a new sequence number that will be used for precreating objects. The  function is used to request a new  FID from the client’s currently allocated sequence. If the FID values in the current sequence are exhausted, a call is made to  to request a new sequence number.

<span id="code:func_seq_client_init" label="code:func_seq_client_init"> Source Code 26: Function seq_client_init used in fid module initialization

For the sequence controller and sequence manager nodes, the  kernel module includes code from three additional files: , , and . The  file defines some special sequence ranges and reserved FIDs. The  file contains functions used to persist sequence information to backend storage. The functions defined in this file are not exported by the module and are used only internally by the code in . The  file contains the functions used by FID servers to handle requests from FID clients for sequence number allocations. The  module entry/exit points ( and ) make calls to  and  to handle server-specific initialization and cleanup. Server requests are handled by the functions  and .

FID Location Database (FLD)
Since no two storage targets ever share the same sequence numbers, a client can determine the location of an object based on the sequence number in the object's FID. To do this, a lookup table that maps sequence numbers to storage targets must be maintained. This lookup table is called the FID Location Database. The full FLD is stored on MDT0, but each Lustre server will maintain a subset of the FLD for the sequences assigned to it. Clients can send queries to a server to request a FID lookup in the server FLD. The response is then added to the client’s local FLD cache to speed future lookups.
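The lookup-with-cache behavior described above can be illustrated with a toy in-memory model. Everything here is a hypothetical stand-in: the real client FLD cache and the server-side database are far more elaborate, and the "server" below is simulated with a local function.

```c
/* Sketch of the FID Location Database idea: map a FID's sequence number
 * to the storage target that owns it, consulting a local cache before
 * "asking the server". All names and structures are illustrative. */
#include <assert.h>
#include <stdint.h>

#define CACHE_SLOTS 16

struct fld_entry_sketch { uint64_t seq; int target; int valid; };

/* Client-local cache of previous lookups, indexed by sequence. */
static struct fld_entry_sketch fld_cache[CACHE_SLOTS];

/* Stand-in for a server-side FLD query over the network. */
static int fld_server_lookup_sketch(uint64_t seq)
{
    return (int)(seq % 4); /* pretend four targets own the sequences */
}

static int fld_client_lookup_sketch(uint64_t seq)
{
    struct fld_entry_sketch *e = &fld_cache[seq % CACHE_SLOTS];
    if (e->valid && e->seq == seq)
        return e->target;                 /* cache hit: no server round trip */
    int target = fld_server_lookup_sketch(seq);
    *e = (struct fld_entry_sketch){ seq, target, 1 };  /* fill the cache */
    return target;
}
```

The first lookup for a sequence pays the cost of a server query; repeat lookups are answered from the cache, which is why the client-side cache speeds future lookups.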

The code related to FLD is contained in the  directory. The  kernel module is built from the following source files:


 * - Contains structure definitions for FLD caching (as well as the function declarations used in the other  files).
 * - Defines functions for debugfs support.
 * - Defines functions for caching results of FLD lookups on clients. These functions are only for internal use and are not exported by the module.
 * - Defines module entry/exit points ( and  ) as well as the    function used for looking up FLD entries.
 * - Included only by FLD servers. Defines functions for managing the FLD database itself. Only the  function is exported for external use. All the other functions in this file are for internal use by the   module.
 * - Included only by FLD servers. Defines the functions needed to handle FLD queries from clients.

Object Index (OI)
The Object Storage Device (OSD) layer acts as an abstraction between the MDT/OST layers and the underlying backend file system (ldiskfs or ZFS) used to store the actual objects. Although Lustre uses FIDs to reference all objects, the backend file system does not. It is the responsibility of the OSD abstraction layer to convert a Lustre FID into a storage cookie that can be used by the backend file system to locate the desired object. The term "storage cookie" refers to an identifier that is specific to the type of backend file system being used. In the case of ldiskfs, the storage cookie consists of the file system inode and generation number and is encapsulated in the  structure shown in Source Code 27.

<span id="code:struct_osd_inode_id" label="code:struct_osd_inode_id"> Source Code 27: osd_inode_id structure (osd-ldiskfs/osd_oi.h)

The OSD layer must maintain a mapping between Lustre FIDs and the corresponding storage cookies. This mapping is referred to as the Object Index (OI). For ldiskfs, OI-related functions are declared in the header file  as shown in Source Code 28. The functions themselves are defined in . The OI is implemented using the Index Access Module (IAM) functions defined in the  source files. For the ZFS backend, similar functionality is provided by code in , although the implementation details differ from osd-ldiskfs.

<span id="code:osd_oi_funcs" label="code:osd_oi_funcs"> Source Code 28: Functions for interacting with the Object Index (OI)
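Conceptually, the OI is a dictionary keyed by FID whose values are storage cookies. The toy in-memory table below illustrates that idea for the ldiskfs case; the real OI is a persistent IAM-backed index, and every name and size here is a hypothetical sketch.

```c
/* Sketch of the Object Index idea for ldiskfs: a small table mapping a
 * FID to an (inode, generation) storage cookie. The real OI is a
 * persistent index built on IAM; this in-memory table is illustrative. */
#include <assert.h>
#include <stdint.h>

struct fid_key        { uint64_t seq; uint32_t oid; };
struct inode_id_sketch { uint32_t ino; uint32_t gen; }; /* storage cookie */

#define OI_SLOTS 32

static struct {
    struct fid_key         key;
    struct inode_id_sketch id;
    int                    used;
} oi_table[OI_SLOTS];

/* Record the FID -> cookie mapping; returns 0 on success, -1 if full. */
static int oi_insert(struct fid_key key, struct inode_id_sketch id)
{
    for (int i = 0; i < OI_SLOTS; i++) {
        if (!oi_table[i].used) {
            oi_table[i].key = key;
            oi_table[i].id = id;
            oi_table[i].used = 1;
            return 0;
        }
    }
    return -1;
}

/* Translate a FID into its storage cookie; returns -1 if unmapped. */
static int oi_lookup(struct fid_key key, struct inode_id_sketch *out)
{
    for (int i = 0; i < OI_SLOTS; i++) {
        if (oi_table[i].used && oi_table[i].key.seq == key.seq &&
            oi_table[i].key.oid == key.oid) {
            *out = oi_table[i].id;
            return 0;
        }
    }
    return -1;
}
```

The lookup direction (FID to cookie) is the hot path: every FID-addressed request arriving at the server must be translated before the backend file system can locate the object.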

Publications
An initial version of this documentation was published as a technical report on osti.gov. The technical report can be found here: Understanding Lustre Internals, Second Edition. Please cite this documentation if it helps your work on Lustre file systems.

Presentations

 * [[Media:Anjus_George_LUG2022_tutorial.pdf|Lustre MGC and Obdclass Deep Dive]](video) - Lustre User Group (LUG 2022) tutorial
 * [[Media:LUG2022-Understanding_Lustre_Internals-George.pdf|Understanding Lustre File System Internals – A Documentation Initiative]] (video) - Lustre User Group (LUG 2022) technical talk

Authors

 * Anjus George, ORNL
 * Rick Mohr, ORNL