Understanding Lustre Internals

What is Lustre?
Lustre is a GNU General Public License (GPL) open-source distributed parallel file system developed and maintained by DataDirect Networks (DDN). Due to the extremely scalable architecture of the Lustre file system, Lustre deployments are popular in scientific supercomputing, as well as in the oil and gas, manufacturing, rich media, and finance sectors. Lustre presents a POSIX interface to its clients with parallel access capabilities to the shared file objects. As of this writing, Lustre is the most widely used file system on the top 500 fastest computers in the world. It is the file system of choice on 7 out of the top 10 fastest computers in the world today, over 70% of the top 100, and over 60% of the top 500.

Lustre Features
Lustre is designed for scalability and performance. The aggregate storage capacity and file system bandwidth can be scaled up by adding more servers to the file system, and performance for parallel applications can often be increased by utilizing more Lustre clients. Some practical limits are shown in Table 1 along with values from known production file systems.

Lustre has several features that enhance performance, usability, and stability. Some of these features include:


 * POSIX Compliance: With few exceptions, Lustre passes the full POSIX test suite. Most operations are atomic to ensure that clients do not see stale data or metadata. Lustre also supports mmap file IO.


 * Online file system checking: Lustre provides a file system checker (LFSCK) to detect and correct file system inconsistencies. LFSCK can be run while the file system is online and in production, minimizing potential downtime.


 * Controlled file layouts: The file layouts that determine how data is placed across the Lustre servers can be customized on a per-file basis. This allows users to optimize the layout to best fit their specific use case.


 * Support for multiple backend file systems: When formatting a Lustre file system, the underlying storage can be formatted as either ldiskfs (a performance-enhanced version of ext4) or ZFS.


 * Support for high-performance and heterogeneous networks: Lustre can utilize RDMA over low-latency networks such as InfiniBand or Intel Omni-Path in addition to supporting TCP over commodity networks. The Lustre networking layer provides the ability to route traffic between multiple networks, making it feasible to run a single site-wide Lustre file system.


 * High-availability: Lustre supports active/active failover of storage resources and multiple mount protection (MMP) to guard against errors that may result from mounting the storage simultaneously on multiple servers. High availability software such as Pacemaker/Corosync can be used to provide automatic failover capabilities.


 * Security features: Lustre follows the normal UNIX file system security model enhanced with POSIX ACLs. The root squash feature limits the ability of Lustre clients to perform privileged operations. Lustre also supports the configuration of Shared-Secret Key (SSK) security.


 * Capacity growth: File system capacity can be increased by adding additional storage for data and metadata while the file system is online.

Lustre Components
Lustre is an object-based file system that consists of several components:




 * Management Server (MGS) - Provides configuration information for the file system. When mounting the file system, the Lustre clients will contact the MGS to retrieve details on how the file system is configured (what servers are part of the file system, failover information, etc.). The MGS can also proactively notify clients about changes in the file system configuration and plays a role in the Lustre recovery process.


 * Management Target (MGT) - Block device used by the MGS to persistently store Lustre file system configuration information. It typically requires only a relatively small amount of space (on the order of 100 MB).


 * Metadata Server (MDS) - Manages the file system namespace and provides metadata services to clients such as filename lookup, directory information, file layouts, and access permissions. The file system will contain at least one MDS but may contain more.


 * Metadata Target (MDT) - Block device used by an MDS to store metadata information. A Lustre file system will contain at least one MDT which holds the root of the file system, but it may contain multiple MDTs. Common configurations will use one MDT per MDS server, but it is possible for an MDS to host multiple MDTs. MDTs can be shared among multiple MDSs to support failover, but each MDT can only be mounted by one MDS at any given time.


 * Object Storage Server (OSS) - Stores file data objects and makes the file contents available to Lustre clients. A file system will typically have many OSS nodes to provide a higher aggregate capacity and network bandwidth.


 * Object Storage Target (OST) - Block device used by an OSS node to store the contents of user files. An OSS node will often host several OSTs. These OSTs may be shared among multiple hosts, but just like MDTs, each OST can only be mounted on a single OSS at any given time. The total capacity of the file system is the sum of all the individual OST capacities.


 * Lustre Client - Mounts the Lustre file system and makes the contents of the namespace visible to the users. There may be hundreds or even thousands of clients accessing a single Lustre file system. Each client can also mount more than one Lustre file system at a time.


 * Lustre Networking (LNet) - Network protocol used for communication between Lustre clients and servers. Supports RDMA on low-latency networks and routing between heterogeneous networks.

The collection of MGS, MDS, and OSS nodes is sometimes referred to as the “frontend”. The individual OSTs and MDTs must be formatted with a local file system in order for Lustre to store data and metadata on those block devices. Currently, only ldiskfs (a modified version of ext4) and ZFS are supported for this purpose. The choice of ldiskfs or ZFS is often referred to as the “backend file system”. Lustre provides an abstraction layer for these backend file systems to allow for the possibility of including other types of backend file systems in the future.

Figure 1 shows a simplified version of the Lustre file system components in a basic cluster. In this figure, the MGS server is distinct from the MDS servers, but for small file systems, the MGS and MDS may be combined into a single server and the MGT may coexist on the same block device as the primary MDT.

Lustre File Layouts


Lustre stores file data by splitting the file contents into chunks and then storing those chunks across the storage targets. By spreading the file across multiple targets, the file size can exceed the capacity of any one storage target. It also allows clients to access parts of the file from multiple Lustre servers simultaneously, effectively scaling up the bandwidth of the file system. Users can control many aspects of a file's layout by means of the lfs setstripe command, and they can query the layout of an existing file using the lfs getstripe command.

File layouts fall into one of two categories:


 * Normal / RAID0 - File data is striped across multiple OSTs in a round-robin manner.
 * Composite - Complex layouts that involve several components with potentially different striping patterns.

Normal (RAID0) Layouts
A normal layout is characterized by a stripe count and a stripe size. The stripe count determines how many OSTs will be used to store the file data, while the stripe size determines how much data will be written to an OST before moving to the next OST in the layout. As an example, consider the file layouts shown in Figure 2 for a simple file system with 3 OSTs residing on 3 different OSS nodes. Note that Lustre indexes the OSTs starting at zero.

File A has a stripe count of three, so it will utilize all OSTs in the file system. We will assume that it uses the default Lustre stripe size of 1MB. When File A is written, the first 1MB chunk gets written to OST0. Lustre then writes the second 1MB chunk of the file to OST1 and the third chunk to OST2. When the file exceeds 3 MB in size, Lustre will round-robin back to the first allocated OST and write the fourth 1MB chunk to OST0, followed by OST1, etc. This illustrates how Lustre writes data in a RAID0 manner for a file. It should be noted that although File A has three chunks of data on OST0 (chunks #1, #4, and #7), all these chunks reside in a single object on the backend file system. From Lustre’s point of view, File A consists of three objects, one per OST. Files B and C show layouts with the default Lustre stripe count of one, but only File B uses the default stripe size of 1MB. The layout for File C has been modified to use a larger stripe size of 2MB. If both File B and File C are 2MB in size, File B will be treated as two consecutive chunks written to the same OST whereas File C will be treated as a single chunk. However, this difference is mostly irrelevant since both files will still consist of a single 2MB object on their respective OSTs.
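The round-robin mapping described above is simple arithmetic. The following user-space sketch (illustrative only, not Lustre source code) computes which stripe, and which offset within the corresponding OST object, a given file offset falls into for a normal layout:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative sketch (not Lustre source): map a file offset to the
     * stripe (OST object) index and the offset within that object for a
     * normal RAID0 layout with the given stripe size and stripe count. */
    struct stripe_loc {
        unsigned int stripe_idx; /* which object in the layout (0-based) */
        uint64_t obj_offset;     /* byte offset within that object */
    };

    static struct stripe_loc raid0_map(uint64_t file_off, uint64_t stripe_size,
                                       unsigned int stripe_count)
    {
        uint64_t chunk = file_off / stripe_size; /* global chunk number */
        struct stripe_loc loc;

        loc.stripe_idx = chunk % stripe_count;   /* round-robin placement */
        loc.obj_offset = (chunk / stripe_count) * stripe_size +
                         file_off % stripe_size;
        return loc;
    }

    int main(void)
    {
        /* File A from Figure 2: stripe count 3, stripe size 1 MiB. */
        uint64_t mib = 1 << 20;
        uint64_t offsets[] = { 0, 3 * mib + 5 }; /* in chunks #1 and #4 */

        for (int i = 0; i < 2; i++) {
            struct stripe_loc loc = raid0_map(offsets[i], mib, 3);
            printf("offset %llu -> stripe %u, object offset %llu\n",
                   (unsigned long long)offsets[i], loc.stripe_idx,
                   (unsigned long long)loc.obj_offset);
        }
        return 0;
    }

For File A, offset 3 MiB + 5 falls into chunk #4, which maps to stripe 0 (the object on OST0) at object offset 1 MiB + 5, matching the description above.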

Composite Layouts
A composite layout consists of one or more components each with their own specific layout. The most basic composite layout is a Progressive File Layout (PFL). Using PFL, a user can specify the same parameters used for a normal RAID0 layout but additionally specify a start and end point for that RAID0 layout. A PFL can be viewed as an array of normal layouts each of which covers a consecutive non-overlapping region of the file. PFL allows the data placement to change as the file increases in size, and because Lustre uses delayed instantiation, storage for subsequent components is allocated only when needed. This is particularly useful for increasing the stripe count of a file as the file grows in size.

The concept of a PFL has been extended to include two other layouts: Data on MDT (DoM) and Self Extending Layout (SEL). A DoM layout is specified just like a PFL except that the first component of the file resides on the same MDT as the file’s metadata. This is typically used to store small amounts of data for quick access. A SEL is just like a PFL with the addition that an extent size can be supplied for one or more of the components. When a component is instantiated, Lustre only instantiates part of the component to cover the extent size. When this limit is exceeded, Lustre examines the OSTs assigned to the component to determine if any of them are running low on space. If not, the component is extended by the extent size. However, if an OST does run low on space, Lustre can dynamically shorten the current component and choose a different set of OSTs to use for the next component of the layout. This safeguards against full OSTs that might otherwise generate an ENOSPC error when a user attempts to append data to a file.
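Conceptually, a composite layout is just an array of extent-bounded RAID0 components. A minimal sketch (hypothetical types, not Lustre's on-disk layout format):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative sketch: a composite layout modeled as an array of
     * components, each covering a non-overlapping [ext_start, ext_end)
     * extent of the file with its own RAID0 striping pattern. */
    struct layout_component {
        uint64_t ext_start;      /* first byte covered by this component */
        uint64_t ext_end;        /* end of extent; UINT64_MAX means EOF */
        unsigned int stripe_count;
        uint64_t stripe_size;
        int instantiated;        /* objects are allocated only when needed */
    };

    /* Example PFL: one stripe up to 1 GiB, then four stripes to EOF. */
    static struct layout_component pfl_example[] = {
        { 0,          1ULL << 30, 1, 1 << 20, 1 },
        { 1ULL << 30, UINT64_MAX, 4, 1 << 20, 0 },
    };

    /* Find the component whose extent covers the given file offset. */
    static struct layout_component *
    comp_for_offset(struct layout_component *comps, size_t n, uint64_t off)
    {
        for (size_t i = 0; i < n; i++)
            if (off >= comps[i].ext_start && off < comps[i].ext_end)
                return &comps[i];
        return NULL;
    }

    int main(void)
    {
        struct layout_component *c = comp_for_offset(pfl_example, 2, 2ULL << 30);

        printf("offset 2 GiB -> component with stripe_count %u\n",
               c ? c->stripe_count : 0);
        return 0;
    }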

Lustre has a feature called File Level Redundancy (FLR) that allows a user to create one or more mirrors of a file, each with its own specific layout (either normal or composite). When the file layout is inspected using lfs getstripe, it appears like any other composite layout. However, the lcme_mirror_id field is used to identify which mirror each component belongs to.

Distributed Namespace


The metadata for the root of the Lustre file system resides on the primary MDT. By default, the metadata for newly created files and directories will reside on the same MDT as that of the parent directory, so without any configuration changes, the metadata for the entire file system would reside on a single MDT. In recent versions, a feature called Distributed Namespace (DNE) was added to allow Lustre to utilize multiple MDTs and thus scale up metadata operations. DNE was implemented in multiple phases, and DNE Phase 1 is referred to as Remote Directories. Remote Directories allow a Lustre administrator to assign a new subdirectory to a different MDT if its parent directory resides on MDT0. Any files or directories created in the remote directory also reside on the same MDT as the remote directory. This creates a static fan-out of directories from the primary MDT to other MDTs in the file system. While this does allow Lustre to spread overall metadata operations across multiple servers, operations within any single directory are still constrained by the performance of a single MDS node. The static nature also prevents any sort of dynamic load balancing across MDTs.

DNE Phase 2, also known as Striped Directories, removed some of these limitations. For a striped directory, the metadata for all files and subdirectories contained in that directory are spread across multiple MDTs. Similar to how a file layout contains a stripe count, a striped directory also has a stripe count. This determines how many MDTs will be used to spread out the metadata. However, unlike file layouts which spread data across OSTs in a round-robin manner, a striped directory uses a hash function to calculate the MDT where the metadata should be placed. The upcoming DNE Phase 3 expands upon the ideas in DNE Phase 2 to support the creation of auto-striped directories. An auto-striped directory will start with a stripe count of 1 and then dynamically increase the stripe count as the number of files/subdirectories in that directory grows. Users can then utilize striped directories without knowing a priori how big the directory might become or having to worry about choosing a directory stripe count that is too low or too high.
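The hash-based placement can be sketched as follows. FNV-1a is one of the directory hash functions Lustre supports (lfs setdirstripe -H fnv_1a_64); the simplified sketch below illustrates the idea but does not reproduce Lustre's exact behavior:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Illustrative sketch: pick an MDT stripe for a filename the way a
     * striped directory does, by hashing the name and reducing it modulo
     * the directory stripe count. */
    static uint64_t fnv_1a_64(const char *name, size_t len)
    {
        uint64_t hash = 0xcbf29ce484222325ULL;  /* FNV offset basis */

        for (size_t i = 0; i < len; i++) {
            hash ^= (unsigned char)name[i];
            hash *= 0x100000001b3ULL;           /* FNV prime */
        }
        return hash;
    }

    static unsigned int mdt_index_for_name(const char *name,
                                           unsigned int stripe_count)
    {
        return (unsigned int)(fnv_1a_64(name, strlen(name)) % stripe_count);
    }

    int main(void)
    {
        /* With a stripe count of 4, each name maps to one of MDTs 0-3. */
        printf("\"alpha.txt\" -> MDT index %u\n",
               mdt_index_for_name("alpha.txt", 4));
        return 0;
    }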

File Identifiers and Layout Attributes


Lustre identifies all objects in the file system through the use of File Identifiers (FIDs). A FID is a 128-bit opaque identifier used to uniquely reference an object in the file system, in much the same way that ext4 uses inodes or ZFS uses dnodes. When a user accesses a file, the filename is used to look up the correct directory entry, which in turn provides the FID for the MDT object corresponding to that file. The MDT object contains a set of extended attributes, one of which is called the Layout Extended Attribute (or Layout EA). This Layout EA acts as a map for the client to determine where the file data is actually stored; it contains a list of the OSTs as well as the FIDs for the objects on those OSTs that hold the actual file data. Figure 3 shows an example of accessing a file with a normal layout of stripe count 3.
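A simplified sketch of the FID layout (cf. struct lu_fid in the Lustre headers: a 64-bit sequence, a 32-bit object id within that sequence, and a 32-bit version), together with the conventional [seq:oid:ver] printing format:

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified sketch of Lustre's 128-bit FID (cf. struct lu_fid). */
    struct lu_fid_sketch {
        uint64_t f_seq; /* sequence: identifies a range of objects */
        uint32_t f_oid; /* object id within the sequence */
        uint32_t f_ver; /* version of the object */
    };

    int main(void)
    {
        struct lu_fid_sketch fid = { 0x200000401ULL, 0x1, 0x0 };

        /* FIDs are conventionally printed as [seq:oid:ver]. */
        printf("[0x%llx:0x%x:0x%x]\n",
               (unsigned long long)fid.f_seq, fid.f_oid, fid.f_ver);
        return 0;
    }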

Lustre Software Stack


The Lustre software stack is composed of several different layered components. To provide context for more detailed discussions later, a basic diagram of these components is illustrated in Figure 4. The arrows in this diagram represent the flow of a request from a client to the Lustre servers. System calls for operations like read and write go through the Linux Virtual File System (VFS) layer to the Lustre LLITE layer, which implements the necessary VFS operations. If the request requires metadata access, it is routed to the Logical Metadata Volume (LMV) that acts as an abstraction layer for the Metadata Client (MDC) components. There is an MDC component for each MDT target in the file system. Similarly, requests for data are routed to the Logical Object Volume (LOV), which acts as an abstraction layer for all of the Object Storage Client (OSC) components. There is an OSC component for each OST target in the file system. Finally, the requests are sent to the Lustre servers by first going through the Portal RPC (PTL-RPC) subsystem and then over the wire via the Lustre Networking (LNet) subsystem.

Requests arriving at the Lustre servers follow the reverse path from the LNet subsystem up through the PTL-RPC layer, finally arriving at either the OSS component (for data requests) or the MDS component (for metadata requests). Both the OSS and MDS components are multi-threaded and can handle requests for multiple storage targets (OSTs or MDTs) on the same server. Any locking requests are passed to the Lustre Distributed Lock Manager (LDLM). Data requests are passed to the OBD Filter Device (OFD) and then to the Object Storage Device (OSD). Metadata requests go from the MDS straight to the OSD. In both cases, the OSD is responsible for interfacing with the backend file system (either ldiskfs or ZFS) through the Linux VFS layer.

Figure 5 provides a simple illustration of the interactions in the Lustre software stack for a client requesting file data. The Portal RPC and LNet layers are represented by the arrows showing communications between the client and the servers. The client begins by sending a request through the MDC to the MDS to open the file. The MDS server responds with the Layout EA for the file. Using this information, the client can determine which OST objects hold the file data and send requests through the LOV/OSC layer to the OSS servers to access the data.

Detailed Discussion of Lustre Components
The descriptions of key Lustre concepts provided in this overview are intended to provide a basis for the more detailed discussion in subsequent Sections. The remaining Sections dive deeper into the following topics:


 * Section 2 (Tests): Describes the testing framework used to test Lustre functionality and detect regressions.
 * Section 3 (Utils): Covers command line utilities used to format and configure a Lustre file system as well as user tools for setting file striping parameters.
 * Section 4 (MGC): Discusses the MGC subsystem responsible for communications between Lustre nodes and the Lustre management server.
 * Section 5 (Obdclass): Discusses the obdclass subsystem that provides an abstraction layer for other Lustre components including MGC, MDC, OSC, LOV, and LMV.
 * Section 6 (Libcfs): Covers APIs used for process management and debugging support.
 * Section 7 (File Identifiers, FID Location Database, and Object Index): Explains how object identifiers are generated and mapped to data on the backend storage.

This document extensively references parts of the Lustre source code maintained by the open-source community.

Introduction
The Lustre client software primarily involves three components: a management client (MGC), a metadata client (MDC), and multiple object storage clients (OSCs), one corresponding to each OST in the file system. Among these, the management client acts as an interface between the Lustre virtual file system layer and the Lustre management server (MGS). Lustre targets contact the MGS to register their information, while Lustre clients contact the MGS to retrieve information from it.

The major functionalities of MGC are Lustre log handling, Lustre distributed lock management, and file system setup. MGC is the first obd device created in the Lustre obd device life cycle. An obd device in Lustre provides a level of abstraction on Lustre components such that generic operations can be applied without knowing the specific devices being dealt with. The remaining Sections in this Chapter describe MGC module initialization, various MGC obd operations, and log handling in detail. In the following Sections we will use the terms clients and servers to refer to the service clients and servers created to communicate between various components in Lustre, whereas the physical nodes representing Lustre’s clients and servers will be explicitly referred to as ‘Lustre clients’ and ‘Lustre servers’.

MGC Module Initialization
When the MGC module initializes, it registers MGC as an obd device type with Lustre using class_register_type, as shown in Source Code 1. Obd device data and metadata operations are defined using the obd_ops and md_ops structures respectively. MGC has only obd_ops operations defined. The metadata client (MDC), however, has both metadata and data operations defined, since the data operations are used to implement the Data on MDT (DoM) functionality in Lustre. Among the arguments passed to the class_register_type function are the MGC obd operations structure and the device type name LUSTRE_MGC_NAME, which is defined as "mgc".

Source code 1: class_register_type function defined in obdclass/genops.c
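The registration pattern, a module handing obdclass a named operations table that can later be looked up by name, can be illustrated with a self-contained user-space sketch (hypothetical *_demo names, not Lustre code):

    #include <stdio.h>
    #include <string.h>

    /* User-space illustration of the class_register_type() idea: a
     * subsystem registers a named type together with its operations
     * table, and devices of that type are later created by name. */
    struct obd_ops_demo {
        int (*o_setup)(const char *name);
    };

    struct obd_type_demo {
        const char *typ_name;
        struct obd_ops_demo *typ_dt_ops;
    };

    #define MAX_TYPES 8
    static struct obd_type_demo registered_types[MAX_TYPES];
    static int num_types;

    static int class_register_type_demo(struct obd_ops_demo *ops, const char *name)
    {
        if (num_types >= MAX_TYPES)
            return -1;
        registered_types[num_types].typ_name = name;
        registered_types[num_types].typ_dt_ops = ops;
        num_types++;
        return 0;
    }

    static int mgc_setup_demo(const char *name)
    {
        printf("setting up %s device\n", name);
        return 0;
    }

    static struct obd_ops_demo mgc_ops_demo = { .o_setup = mgc_setup_demo };

    int main(void)
    {
        class_register_type_demo(&mgc_ops_demo, "mgc");

        /* Later, the type is found by name and its ops are invoked. */
        for (int i = 0; i < num_types; i++)
            if (strcmp(registered_types[i].typ_name, "mgc") == 0)
                registered_types[i].typ_dt_ops->o_setup("MGC0");
        return 0;
    }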

MGC obd Operations


MGC obd operations are defined by the mgc_obd_ops structure as shown in Source Code 2. Note that all MGC obd operations are defined as function pointers. This programming style avoids complex switch cases and provides a level of abstraction on Lustre components such that generic operations can be applied without knowing the details of specific obd devices.

Source Code 2: mgc_obd_ops structure defined in mgc/mgc_request.c

In Lustre, one of the ways two subsystems share data is with the help of the obd operations structure. To understand how the communication between two subsystems works, let us take the example of the get_info operation from the mgc_obd_ops structure. The subsystem llite makes a call to obd_get_info by passing a key as an argument. But notice that llite invokes obd_get_info instead of mgc_get_info. obd_get_info is defined in include/obd_class.h as shown in Figure 6. We can see that this function invokes an OBP macro, passing an obd_device structure and the get_info operation. The definition of this macro concatenates the o_ prefix with get_info (the operation) so that the resulting call resolves, through the device's operations table, to the device-specific function mgc_get_info.
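The dispatch style can be illustrated with a self-contained user-space sketch (hypothetical *_demo names); the OBP macro here mirrors the token-pasting pattern described above:

    #include <stdio.h>

    /* User-space illustration of OBP-style dispatch: the generic
     * obd_get_info() wrapper expands OBP(dev, get_info) into
     * dev->obd_type->typ_dt_ops->o_get_info and calls whichever
     * device-specific implementation is installed. */
    struct obd_device_demo;

    struct obd_ops_demo {
        int (*o_get_info)(struct obd_device_demo *dev, const char *key);
    };

    struct obd_type_demo {
        struct obd_ops_demo *typ_dt_ops;
    };

    struct obd_device_demo {
        const char *obd_name;
        struct obd_type_demo *obd_type;
    };

    /* Token pasting turns OBP(dev, get_info) into ...->o_get_info. */
    #define OBP(dev, op) ((dev)->obd_type->typ_dt_ops->o_##op)

    static inline int obd_get_info(struct obd_device_demo *dev, const char *key)
    {
        return OBP(dev, get_info)(dev, key);  /* generic -> specific */
    }

    /* MGC-specific implementation the generic wrapper ends up calling. */
    static int mgc_get_info(struct obd_device_demo *dev, const char *key)
    {
        printf("%s: get_info(%s)\n", dev->obd_name, key);
        return 0;
    }

    static struct obd_ops_demo mgc_ops = { .o_get_info = mgc_get_info };
    static struct obd_type_demo mgc_type = { .typ_dt_ops = &mgc_ops };

    int main(void)
    {
        struct obd_device_demo mgc = { "MGC0", &mgc_type };

        /* A caller such as llite uses only the generic entry point. */
        return obd_get_info(&mgc, "some_key");
    }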



So how does llite make sure that this operation is directed specifically towards the MGC obd device? obd_get_info has an argument called exp. This argument is a type of obd_export structure defined in include/lustre_export.h (refer to Figure 7). The exp_obd field of obd_export is a type of obd_device structure defined in include/obd.h. The obd_device structure in turn has a field obd_type, an obd_type structure (also defined in include/obd.h) that holds the device-specific operations, so the export passed in determines which device's operations get invoked. Another MGC obd operation retrieves the export using the lustre_handle structure; two helper functions defined in obdclass are involved in this process.

In the following Sections we describe some of the important MGC obd operations in detail.

mgc_setup
mgc_setup is the initial routine that gets executed to start and set up the MGC obd device. In Lustre, MGC is the first obd device that is set up as part of the obd device life cycle process. To understand when mgc_setup gets invoked in the obd device life cycle, let us explore the workflow from the Lustre module initialization.



The Lustre module initialization begins from the lustre_init routine defined in llite/super25.c (shown in Figure 8). This routine is invoked when the lustre module gets loaded. lustre_init invokes register_filesystem, which registers lustre as a file system and adds it to the list of file systems the kernel is aware of for mount and other syscalls. The lustre_fs_type structure is defined in the same file as shown in Source Code 3.

When a user mounts Lustre, the lustre_mount function gets invoked, as evident from this structure. lustre_mount is defined in llite/llite_lib.c and in turn calls the mount_nodev routine. mount_nodev invokes its callback function lustre_fill_super, which is also defined in llite/llite_lib.c. lustre_fill_super is the entry point for the mount call from the Lustre client into Lustre.

Source code 3: lustre_fs_type structure defined in llite/super25.c
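For reference, the following is a simplified, self-contained sketch of this registration pattern using the standard Linux file_system_type API (abridged and stubbed; see llite/super25.c for Lustre's actual definition):

    #include <linux/fs.h>
    #include <linux/module.h>
    #include <linux/err.h>

    /* Simplified sketch of the lustre_fs_type registration pattern; the
     * .mount callback is what the kernel invokes for mount(2). */
    static struct dentry *lustre_mount_sketch(struct file_system_type *fs_type,
                                              int flags, const char *devname,
                                              void *data)
    {
        /* The real lustre_mount calls mount_nodev(..., lustre_fill_super);
         * stubbed out here. */
        return ERR_PTR(-ENOSYS);
    }

    static struct file_system_type lustre_fs_type_sketch = {
        .owner   = THIS_MODULE,
        .name    = "lustre",
        .mount   = lustre_mount_sketch,
        .kill_sb = kill_anon_super,
    };

    static int __init lustre_init_sketch(void)
    {
        /* Adds "lustre" to the kernel's list of mountable file systems. */
        return register_filesystem(&lustre_fs_type_sketch);
    }

    static void __exit lustre_exit_sketch(void)
    {
        unregister_filesystem(&lustre_fs_type_sketch);
    }

    module_init(lustre_init_sketch);
    module_exit(lustre_exit_sketch);
    MODULE_LICENSE("GPL");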

lustre_fill_super invokes lustre_start_mgc, defined in obdclass/obd_mount.c. This sets up the MGC obd device to start processing startup logs. The lustre_start_simple routine called here starts the MGC obd device (it is defined in the same file). This eventually leads to the invocation of the obdclass specific routines class_attach and class_setup (described in detail in Section 5) with the help of a do_lcfg routine that takes the obd device name and a Lustre configuration command (lcfg) as arguments. Various Lustre configuration commands are LCFG_ATTACH, LCFG_SETUP, and so on. These are defined in include/uapi/linux/lustre/lustre_cfg.h as shown in Source Code 4.

Source code 4: Lustre configuration commands defined in include/uapi/linux/lustre/lustre_cfg.h
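A few of these commands, abridged from the header (the 0x00cfXXX numbering follows the values defined there; consult lustre_cfg.h for the authoritative list):

    /* Abridged sketch of lustre_cfg_command values from lustre_cfg.h. */
    enum lustre_cfg_command_sketch {
        LCFG_ATTACH  = 0x00cf001, /* create and attach an obd device */
        LCFG_DETACH  = 0x00cf002, /* detach the obd device */
        LCFG_SETUP   = 0x00cf003, /* device type specific setup */
        LCFG_CLEANUP = 0x00cf004, /* tear the obd device down */
    };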

The first lcfg command passed to the do_lcfg routine is LCFG_ATTACH, which results in the invocation of the obdclass function class_attach. We will describe class_attach in detail in Section 5. The second lcfg command passed to do_lcfg is LCFG_SETUP, which eventually results in the invocation of class_setup. do_lcfg calls class_process_config (defined in obdclass/obd_config.c) and passes the lcfg command that it received. In the case of the LCFG_SETUP command, the class_setup routine gets invoked. class_setup is defined in the same file, and its primary duty is to create hashes and a self export and to call the obd device specific setup. The device specific setup call is in turn invoked through another routine called obd_setup. obd_setup is defined in include/obd_class.h as an inline function, in the same way obd_get_info is defined. obd_setup calls the device specific setup routine with the help of the OBP macro (refer to Section 4.3 and Figure 6). Here, in the case of the MGC obd device, mgc_setup, defined as part of the mgc_obd_ops structure (shown in Source Code 2), gets invoked by the obd_setup routine. Note that the yellow colored blocks in Figure 8 will be referenced again in Section 5 to illustrate the life cycle of the MGC obd device.

mgc_setup Operation
mgc_setup first adds a reference to the underlying Lustre PTL-RPC layer. Then it sets up an RPC client for the obd device using client_obd_setup (defined in ldlm/ldlm_lib.c). Next, mgc_setup initializes the Lustre logs that will be processed by MGC at the MGS server. These logs are also sent to the Lustre client, and the client side MGC mirrors these logs to process the data. The tunable parameters persistently set at the MGS are sent to the MGC, and the Lustre logs processed at the MGC initialize these parameters. In Lustre, the tunables have to be set before the Lustre logs are processed, and a tunables initialization routine invoked from mgc_setup helps to initialize them. A few examples of the tunables set this way can be viewed in the /sys/fs/lustre directory by logging into any Lustre client. mgc_setup also starts a requeue thread which keeps reading the Lustre logs as new entries come in. A flowchart showing the mgc_setup workflow is shown in Figure 10.

Lustre Log Handling
Lustre makes extensive use of logging for recovery and distributed transaction commits. The logs associated with Lustre are called llogs (Lustre logs); config logs, startup logs, and change logs are various kinds of llogs. As described in Section 3.2.4, the llog_reader utility can be used to read these Lustre logs. When a Lustre target registers with the MGS, the MGS constructs a log for the target. Similarly, a client log is created for the Lustre client when it is mounted. When a user mounts the Lustre client, this triggers the download of the Lustre config logs to the client. As described earlier, the MGC subsystem is responsible for reading and processing the logs and sending them to Lustre clients and Lustre servers.

Log Processing in MGC


The lustre_fill_super routine described in Section 4.4 makes a call to the ll_fill_super function defined in llite/llite_lib.c. This function initializes a config log instance specific to the super block passed from lustre_fill_super. Since the same MGC may be used to follow multiple config logs (e.g., ost1, ost2, Lustre client), the config log instance is used to keep the state for a specific log. Afterwards, ll_fill_super invokes lustre_process_log, which gets a config log from the MGS and starts processing it. lustre_process_log gets called for both Lustre clients and Lustre servers, and it continues to process new statements appended to the logs. It first resets and allocates lustre_cfg buffers (which temporarily store log data) and calls obd_process_config, which eventually invokes the obd device specific mgc_process_config (as shown in Figure 9) with the help of the OBP macro. The lcfg command passed to mgc_process_config is LCFG_LOG_START, which gets the config log from the MGS, starts processing it, and adds the log to the list of logs to follow. config_log_add, defined in the same file, accomplishes the task of adding the log to the list of active logs watched for updates by MGC. A few other important log processing functions in MGC are mgc_process_log (which gets a configuration log from the MGS and processes it), a recovery notification handler (called if the Lustre client was notified of a target restart by the MGS), and mgc_apply_recover_logs (which applies the logs after recovery).

mgc_precleanup and mgc_cleanup


Cleanup functions are important in Lustre in the case of file system unmounting or any unexpected errors during file system setup. The class_cleanup routine defined in obdclass/obd_config.c starts the process of shutting down an obd device. This invokes mgc_precleanup (through obd_precleanup), which makes sure that all the exports are destroyed before shutting down the obd device. mgc_precleanup first decrements the thread count that was incremented during mgc_setup. This count tracks the running MGC threads and makes sure no threads are shut down prematurely. Next it waits for any requeue thread to complete and calls obd_cleanup_client_import, which destroys the client side import interface of the obd device. Finally, mgc_precleanup invokes mgc_llog_fini, which cleans up the Lustre logs associated with the MGC. The log cleaning is accomplished by the llog_cleanup routine defined in obdclass/llog_obd.c.

The mgc_cleanup function deletes the profiles for the last MGC obd using class_del_profiles, defined in obdclass/obd_config.c. When the MGS sends a buffer of data to the MGC, the Lustre profiles help to identify the intended recipients of the data. Next, the cleanup routine removes the procfs and debugfs entries for the obd device. It then decrements the reference to the PTL-RPC layer and finally calls client_obd_cleanup. This function (defined in ldlm/ldlm_lib.c) makes the obd namespace point to NULL, destroys the client side import interface, and finally frees up the obd device. Figure 10 shows the workflows for both the setup and cleanup routines in MGC in parallel. The class_cleanup routine defined in obdclass/obd_config.c starts the MGC shutdown process. Note that after the cleanup, the uuid and nid hashtables are freed up and destroyed. The uuid hashtable stores uuids for different obd devices, whereas the nid hashtable stores ptl-rpc network connection information.

mgc_import_event
The mgc_import_event function handles the events reported at the MGC import interface. The types of import events identified by MGC are listed in the obd_import_event enum defined in include/lustre_import.h, as shown in Source Code 5. Client side imports are used by clients to communicate with the exports on the server (for instance, if the MDS wants to communicate with the MGS, the MDS will use its client import to communicate with the MGS’ server side export). A more detailed description of the import and export interfaces on an obd device is given in Section 5.

Source code 5: obd_import_event enum defined in include/lustre_import.h
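An abridged sketch of the event names in this enum (see include/lustre_import.h for the complete list and the assigned values):

    /* Abridged sketch of the obd_import_event names handled by
     * mgc_import_event; values omitted. */
    enum obd_import_event_sketch {
        IMP_EVENT_DISCON,     /* connection to the target was lost */
        IMP_EVENT_INACTIVE,   /* import was marked inactive */
        IMP_EVENT_INVALIDATE, /* in-flight requests were invalidated */
        IMP_EVENT_ACTIVE,     /* import was (re)activated */
        IMP_EVENT_OCD,        /* connect data was received */
    };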

Some of the remaining obd operations for MGC, such as the connect, disconnect, and lock enqueue operations, will be explained in the obdclass and ldlm Sections.

Introduction
The obdclass subsystem in Lustre provides an abstraction layer that allows generic operations to be applied on Lustre components without knowledge of the specific components. MGC, MDC, OSC, LOV, and LMV are examples of obd devices in Lustre that make use of the obdclass generic abstraction layer. The obd devices can be connected in different ways to form client-server pairs for internal communication and data exchange in Lustre. Note that the client and server referred to here are service client and server roles temporarily assumed by the obd devices, not the physical nodes representing Lustre clients and Lustre servers.

Obd devices in Lustre are stored internally in the obd_devs array defined in obdclass/genops.c, as shown in Source Code 6. The maximum number of obd devices per node is limited by MAX_OBD_DEVICES, defined in include/obd.h (shown in Source Code 7). The obd devices in the obd_devs array are indexed using an obd_minor number (see Source Code 8). An obd device can be identified using its minor number, name, or uuid. A uuid is a unique identifier that Lustre assigns to obd devices. The lctl dl utility (described in Section 3.2.3) can be used to view all local obd devices and their uuids on Lustre clients and Lustre servers.

Source code 6: obd_devs array defined in obdclass/genops.c

Source code 7: MAX_OBD_DEVICES defined in include/obd.h
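Together, Source Codes 6 and 7 describe a fixed-size registry. A minimal sketch of the idea (the exact MAX_OBD_DEVICES value has varied across Lustre versions):

    /* Sketch of the obd device registry: a fixed-size array of device
     * pointers, indexed by each device's obd_minor number. */
    struct obd_device;

    #define MAX_OBD_DEVICES_SKETCH 8192

    static struct obd_device *obd_devs_sketch[MAX_OBD_DEVICES_SKETCH];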

obd_device Structure
The structure that defines an obd device is shown in Source Code 8.

Source Code 8: obd_device structure defined in include/obd.h

The first field in this structure is obd_type, shown in Source Code 11, which defines the type of the obd device - a metadata device, a bulk data device, or both. obd_magic is used to identify data corruption in an obd device. Lustre assigns a magic number to the obd device during its creation phase and later asserts it in different parts of the source code, making sure it finds the same magic number, to ensure data integrity. As described in the previous Section, obd_minor is the index of the obd device in the obd_devs array. A lu_device entry indicates whether the obd device is a real device such as an ldiskfs or ZFS osd type of (block) device. The obd_uuid and obd_name fields hold the uuid and name of the obd device, as the field names suggest. The obd_device structure also includes various flags to indicate the current status of the obd device. Some of these are obd_attached - completed attach, obd_set_up - finished setup, obd_recovery_expired - recovery expired, obd_stopping - started cleanup, and obd_starting - started setup. The structure also holds the uuid and nid hash tables for the obd device. An obd device is additionally associated with several linked lists of exports, including unlinked and delayed exports. Some of the remaining relevant fields of this structure are kset and kobject device model abstractions, timeouts for recovery, proc entries, directory entries, and procfs and debugfs variables.
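An abridged, self-contained sketch of the fields discussed above (simplified types; see include/obd.h for the real definition):

    /* Abridged sketch of struct obd_device; only fields discussed in
     * this Section are shown. */
    struct obd_type_sketch;                  /* device type and its ops */

    struct obd_uuid_sketch { char uuid[40]; };

    struct obd_device_sketch {
        struct obd_type_sketch *obd_type;    /* metadata/data device type */
        unsigned int obd_magic;              /* corruption check value */
        int obd_minor;                       /* index into obd_devs[] */
        struct obd_uuid_sketch obd_uuid;     /* unique device identifier */
        char obd_name[128];                  /* device name */

        /* status bits reflecting the device life cycle */
        unsigned long obd_attached:1,        /* completed attach */
                      obd_set_up:1,          /* finished setup */
                      obd_starting:1,        /* started setup */
                      obd_stopping:1;        /* started cleanup */
    };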

MGC Life Cycle
As described in Section 4, MGC is the first obd device set up and started by Lustre in the obd device life cycle. To understand the life cycle of the MGC obd device, let us start from the generic file system mount function in the kernel. It is directly invoked by the mount system call from the user and handles the generic portion of mounting a file system. It then invokes the file system specific mount function, which is lustre_mount in the case of Lustre. lustre_mount, defined in llite/llite_lib.c, invokes the kernel function mount_nodev as shown in Source Code 9, which invokes lustre_fill_super as its callback function.

Source code 9: lustre_mount function defined in llite/llite_lib.c

The lustre_fill_super function is the entry point for the mount call into Lustre. This function initializes the Lustre superblock, which is used by the MGC to write a local copy of the config log. The lustre_fill_super routine calls ll_fill_super, which initializes a config log instance specific to the superblock. The config_llog_instance structure is defined in include/obd_class.h as shown in Source Code 10. The cfg_instance field in this structure is unique to the superblock and is obtained using a helper function in llite. The config_llog_instance structure also has a uuid (obtained from the uuid field of the Lustre superblock info structure) and a callback handler function. We will come back to this callback handler later in the MGC life cycle process. The color coded blocks in Figure 11 were also part of the lustre_fill_super call graph shown in Figure 8 in Section 4.

Source code 10: config_llog_instance structure defined in include/obd_class.h



The file system name field of this structure is populated by copying the profile name obtained using the get_profile_name function. get_profile_name obtains the profile name corresponding to the mount command issued by the user from the lustre_mount_data structure.

ll_fill_super then invokes the lustre_process_log function (see Figure 11), which gets the config logs from the MGS and starts processing them. This function is called from both Lustre clients and Lustre servers, and it will continue to process new statements appended to the logs. lustre_process_log is defined in obdclass/obd_mount.c. The three parameters passed to this function are the superblock, the logname, and the config log instance. The config instance is unique to the superblock, which is used by the MGC to write to the local copy of the config log, and the logname is the name of the llog to be replicated from the MGS. The config log instance is used to keep the state for the specific config log (which can be from ost1, ost2, a Lustre client, etc.) and is added to the MGC’s list of logs to follow. lustre_process_log then calls obd_process_config, which uses the OBP macro (refer to Section 4.3) to call the MGC specific mgc_process_config function. mgc_process_config gets the config log from the MGS and processes it to start any services. Logs are also added to the list of logs to watch.

We now describe the detailed workflow of mgc_process_config by describing the functionalities of each sub-function that it invokes. The mgc_process_config function categorizes the data in the config log based on whether the data is related to the ptl-rpc layer, configuration parameters, nodemaps, or barriers. The log data related to each of these categories is then copied to memory. mgc_process_config next calls mgc_process_log, which gets a config log from the MGS and processes it. This function is called for both Lustre clients and Lustre servers to process the configuration log from the MGS. The MGC enqueues a DLM lock on the log from the MGS; if the lock gets revoked, the MGC will be notified by the lock cancellation callback that the config log has changed, will enqueue another MGS lock on it, and will then continue processing the new additions to the end of the log. Lustre prevents the update of the same log by multiple processes at the same time. mgc_process_log then calls the mgc_process_cfg_log function, which reads the log and creates a local copy of the log on the Lustre client or Lustre server. This function first initializes an environment and a context using lu_env_init and llog_get_context respectively. A local copy of the log is then created with the environment and context previously initialized. Real time changes in the log are parsed using the class_config_parse_llog function. Under read-only mode, there will be no local copy, or the local copy will be incomplete, so Lustre will try to use the remote llog first.

The class_config_parse_llog function is defined in obdclass/obd_config.c. The arguments passed to this function are the environment, the context, and the config log instance initialized in the mgc_process_cfg_log function, along with the config log name. The first log parsed by class_config_parse_llog is the start log, which contains configuration information for various Lustre file system components, obd devices, and the file system mounting process. class_config_parse_llog first acquires a lock on the log to be parsed using a handler function. It then continues the processing of the log from where it last stopped until the end of the log. To process the logs, two entities are used by this function: 1) an index to parse through the data in the log, and 2) a callback function that processes and interprets the data. The callback function can be a generic handler function like class_config_llog_handler or it can be customized. Note that this is the callback handler initialized through the config_llog_instance structure as previously mentioned in Source Code 10. Additionally, the callback function provides a config marker functionality that allows special flags to be injected for selective processing of data in the log. The callback handler also initializes lustre_cfg buffers to temporarily store the log data. Afterwards, the following actions take place in this function: log names are translated to obd device names, a uuid is appended to the obd device name for each Lustre client mount, and finally the obd device is attached.
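The index-plus-callback scheme can be illustrated with a self-contained user-space sketch (hypothetical *_demo names): processing resumes after the last processed index and hands each newer record to the callback, as described above:

    #include <stdio.h>

    /* User-space illustration of llog-style processing: an index that
     * advances through the records plus a callback that interprets them. */
    struct llog_rec_demo {
        int index;
        const char *payload;
    };

    typedef int (*llog_cb_demo)(const struct llog_rec_demo *rec);

    /* Process records newer than last_idx, invoking the callback on each;
     * returns the last index processed so a later pass can resume there. */
    static int llog_process_demo(const struct llog_rec_demo *recs, int nrecs,
                                 int last_idx, llog_cb_demo cb)
    {
        for (int i = 0; i < nrecs; i++) {
            if (recs[i].index <= last_idx)
                continue;            /* already processed earlier */
            if (cb(&recs[i]))
                break;               /* callback may abort processing */
            last_idx = recs[i].index;
        }
        return last_idx;
    }

    static int print_handler(const struct llog_rec_demo *rec)
    {
        printf("record %d: %s\n", rec->index, rec->payload);
        return 0;
    }

    int main(void)
    {
        struct llog_rec_demo log[] = {
            { 1, "attach MGC" }, { 2, "setup MGC" }, { 3, "add_conn" },
        };

        /* A first pass processes everything; a later pass would resume
         * from the returned index as new records are appended. */
        llog_process_demo(log, 3, 0, print_handler);
        return 0;
    }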

Each obd device then sets up a key to communicate with other devices through the secure ptl-rpc layer. The rules for creating this key are stored in the config log. The obd device then creates a connection for communication. Note that the start log contains all state information for all configuration devices, and the Lustre configuration buffer stores this information temporarily. The obd device then uses this buffer to consume log data. The start log resembles a virtual log file and is never stored on disk. After creating a connection, the handler performs data mining on the logs to extract the information (uuid, nid, etc.) required to form the Lustre file system. A marker parameter passed to the handler decides what type of information should be parsed from the logs; for instance, one marker value directs the handler to scan obd device configuration information, while another asks it to parse changelog records. Using the extracted nid and uuid information about the obd device, the handler now invokes the class_process_config routine. This function repeats the cycle of obd device creation for other obd devices. Notice that the only obd device that exists in Lustre at this point in the life cycle is MGC. The class_process_config function calls the generic obd class functions such as class_attach, class_setup, and class_add_conn depending upon the lcfg command that it receives for a specific obd device.

Obd Device Life Cycle
In this Section we describe the workflow of various obd device life cycle functions such as class_attach, class_setup, class_cleanup, and class_detach.

class_attach
The first method that is called in the life cycle of an obd device is class_attach, and the corresponding lustre config command is LCFG_ATTACH. The class_attach method is defined in obdclass/obd_config.c. It registers and adds the obd device to the list of obd devices. The list of obd devices is defined in obdclass/genops.c using the obd_devs array. The attach function first checks if the obd device type being passed is valid. The obd_type structure is defined in include/obd.h (as shown in Source Code 11). Two types of operations defined in this structure are typ_dt_ops (i.e., data operations) and typ_md_ops (i.e., metadata operations). These operations determine if the obd device is destined to perform data or metadata operations or both.

The lu_device_type field of the obd_type structure is meaningful only for real block devices such as ldiskfs and ZFS osd devices. Furthermore, the lu_device_type differentiates metadata and data devices using the tags LU_DEVICE_MD and LU_DEVICE_DT respectively. An example of a lu_device_type structure defined for the ldiskfs osd device is shown in Source Code 12.

Source code 11: obd_type structure defined in include/obd.h

Source code 12: lu_device_type structure for ldiskfs osd_device_type defined in osd-ldiskfs/osd_handler.c



class_attach then calls the class_newdev function, which creates and allocates a new obd device and initializes it. A complete workflow of the class_attach function is shown in Figure 12. The class_get_type function invoked by class_newdev registers the already created obd device type and loads the obd device module. All loaded obd devices have metadata or data operations (or both) defined for them. For instance, the LMV obd device has its data and metadata operations defined in the structures lmv_obd_ops and lmv_md_ops respectively. These structures and the associated operations can be seen in the lmv/lmv_obd.c file. The obd_minor number initialized here is the index of the obd device in the obd_devs array.

The obd device then creates a self export using the class_new_export_self function. This function invokes a __class_new_export function, which creates a new export, adds it to the hash table of exports, and returns a pointer to it. Note that a self export is created only for a client obd device. The reference count for this export when created is 2: one for the hash table reference and the other for the pointer returned by this function itself. This function populates the obd_export structure defined in include/lustre_export.h (shown in Source Code 13). Various fields associated with this structure are explained in the next Section. Two functions that are used to increment and decrement the reference count for obd devices are class_incref and class_decref respectively. The last part of class_attach is registering/listing the obd device in the obd_devs array, which is done through the class_register_device function. This function assigns a minor number to the obd device that can be used to look up the device in the array.
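The reference counting convention described here, a new export born with two references, can be sketched in user-space C (hypothetical *_demo names mirroring class_export_get and class_export_put):

    #include <stdio.h>

    /* Illustration of export reference counting: a new export starts at
     * refcount 2 (hash table reference plus the returned pointer), and
     * get/put pairs guard each subsequent use. */
    struct obd_export_demo {
        int exp_refcount;
    };

    static void class_export_get_demo(struct obd_export_demo *exp)
    {
        exp->exp_refcount++;
    }

    static void class_export_put_demo(struct obd_export_demo *exp)
    {
        if (--exp->exp_refcount == 0)
            printf("last reference dropped: export destroyed\n");
    }

    int main(void)
    {
        /* The export is created holding two references. */
        struct obd_export_demo exp = { .exp_refcount = 2 };

        class_export_get_demo(&exp); /* e.g. an RPC takes a reference */
        class_export_put_demo(&exp); /* RPC completes */
        class_export_put_demo(&exp); /* caller drops its pointer */
        class_export_put_demo(&exp); /* hash table reference removed */
        return 0;
    }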

obd_export Structure
This Section describes some of the relevant fields of the obd_export structure (shown in Source Code 13), which represents a target side export connection (using the ptlrpc layer) for obd devices in Lustre. It is also used to connect layers on the same node when there is no network connection between the nodes. For every connected client there exists an export structure on the server attached to the same obd device. Various fields of this structure are described below.


 * exp_handle - On connection establishment, the export handle id is provided to the client, and subsequent client RPCs contain this handle id to identify which export they are talking to.
 * A set of counters is used to track where export references are kept: exp_rpc_count is the number of RPC references, exp_cb_count counts commit callback references, exp_replay_count is the number of queued replay requests to be processed, and exp_locks_count keeps track of the number of lock references.

Source code 13: obd_export structure defined in include/lustre_export.h


 * exp_locks_list maintains a linked list of all the locks, and exp_locks_list_guard is the spinlock that protects this list.
 * exp_client_uuid is the UUID of the client connected to this export.
 * exp_obd_chain links all the exports on an obd device.
 * A separate list linkage is used when the export connection is destroyed.
 * The structure also maintains several hash tables, including ones used to track the last received messages in case of recovery from failure.
 * The obd device for this export is given by the exp_obd pointer.
 * exp_connection - This defines the portal rpc connection for this export.
 * exp_ldlm_data - This lists all the ldlm locks granted on this export.
 * This structure also has additional fields such as hashes for posix deadlock detection, the time of the last request received, a linked list of requests waiting to be replayed on recovery, lists for RPCs handled, blocking ldlm locks, and a special union to deal with target specific data.

class_setup


The primary duties of the class_setup routine are to create hashtables and a self-export, and to invoke the obd type specific setup function. As an initial step, this function obtains the obd device from the obd_devs array using the obd_minor number and asserts the obd_magic number to ensure data integrity. Then it sets the obd_starting flag to indicate that the setup of this obd device has started (refer to Source Code 8). Next, the uuid and nid hashtables are set up using Linux kernel builtin hashtable functions, while for one of the hashtables Lustre uses its own custom hashtable implementation, cfs_hash.

A generic device setup function, obd_setup, defined in include/obd_class.h, is then invoked by class_setup, passing the populated obd_device structure and the corresponding lcfg command. This leads to the invocation of device specific setup routines from various subsystems such as mgc_setup, mdc_setup, osc_setup, and so on. All of these setup routines invoke a client_obd_setup routine that acts as a pre-setup stage before the creation of imports for the clients, as shown in Figure 13. The client_obd_setup function, defined in ldlm/ldlm_lib.c, populates the client_obd structure defined in include/obd.h as shown in Source Code 14. Note that the client_obd_setup routine is called only in the case of client obd devices like osp, lwp, mgc, osc, and mdc.

Source Code 14: client_obd structure defined in include/obd.h

The client_obd structure is mainly used for page cache and extended attribute management. It comprises fields pointing to the obd device uuid and import interfaces, a counter to keep track of client connections, and fields representing the maximum and default extended attribute sizes. A few other fields used for cache handling are cl_cache - LRU cache for caching OSC pages, cl_lru_left - available LRU slots per OSC cache, cl_lru_busy - number of busy LRU pages, and cl_lru_in_list - number of LRU pages in the cache for this client_obd. Please also refer to the source code to see additional fields in this structure.
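An abridged, self-contained sketch of the fields mentioned above (simplified types; see include/obd.h for the real struct client_obd):

    /* Abridged sketch of struct client_obd; only fields mentioned in
     * this Section are shown. */
    struct obd_import_sketch;      /* client-side view of the remote target */
    struct cl_client_cache_sketch; /* shared LRU cache of pages */

    struct client_obd_sketch {
        char cl_target_uuid[40];               /* uuid of the target */
        struct obd_import_sketch *cl_import;   /* import interface */
        int cl_conn_count;                     /* client connections */
        unsigned int cl_default_mds_easize;    /* default EA size */
        unsigned int cl_max_mds_easize;        /* maximum EA size */

        /* page cache handling */
        struct cl_client_cache_sketch *cl_cache; /* LRU cache of OSC pages */
        long *cl_lru_left;                       /* available LRU slots */
        long cl_lru_busy;                        /* number of busy LRU pages */
        long cl_lru_in_list;                     /* LRU pages in this cache */
    };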

client_obd_setup then obtains an LDLM reference to set up the LDLM layer for this client obd device. Further, it sets up the ptl-rpc request and reply portals using the ptlrpc_init_client routine defined in ptlrpc/client.c. The client_obd structure defines a pointer to the obd_import structure defined in include/lustre_import.h. The obd_import structure represents ptl-rpc imports, which are the client-side view of remote targets. A new import connection for the obd device is created using the class_new_import function. The class_new_import method populates the obd_import structure defined in include/lustre_import.h as shown in Source Code 15.

The obd_import structure represents the client side view of a remote target. This structure mainly consists of fields representing the ptl-rpc layer client and the active connections on it, the client side ldlm handle, and various flags representing the status of the import such as imp_invalid, imp_deactive, and imp_replayable. There are also linked lists pointing to lists of requests that are retained for replay, waiting for a reply, and waiting for recovery to complete.

client_obd_setup then adds an initial connection for the obd device to the ptl-rpc layer by invoking the client_import_add_conn method. This method uses the ptl-rpc layer specific routine ptlrpc_uuid_to_connection to return a ptl-rpc connection specific to the uuid passed for the remote obd device. Finally, client_obd_setup creates a new ldlm namespace for the obd device that it just set up using the ldlm_namespace_new routine. This completes the setup phase in the obd device life cycle, and the newly set up obd device can now be used for communications between subsystems in Lustre.

Source code 15: obd_import structure defined in include/lustre_import.h

Publications
An initial version of this documentation was published as a technical report on osti.gov. The technical report can be found here: Understanding Lustre Internals, Second Edition