ZFS OSD

Background
The purpose of this page is to document the architecture and requirements for Lustre servers using the ZFS DMU interfaces: which features Lustre needs from ZFS as more storage management features are added, and vice versa.

The DMU offers benefits for Lustre, but it is not a perfect marriage. The approach described below was based on:
 * 1) low risk: use methods we know
 * 2) time to market: stick with methods we use under ldiskfs
 * 3) low controversy: start with something that ZFS can deliver without modifications
 * 4) few initial enhancements: request only the handful of ZFS enhancements that would be highly beneficial

File system formats
The Lustre servers will interface with the DMU in such a way that the disk images that are created can be mounted as ZFS file systems. While this is not strictly necessary, it keeps the on-disk location of data transparent and allows a Lustre-ZFS filesystem to be debugged by mounting it locally on the server.

ZFS has an exceptionally rich "fork" feature (similar to extended attributes), and this can be used to store the Lustre-specific extended attributes. It also has native support for key-value index objects (ZAPs) that can be used, for example, as object indexes in a way consistent with ZFS directories.
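
As a rough illustration, the sketch below shows how a name-to-FID index entry might be stored in and retrieved from a ZAP object using the DMU's zap_add() and zap_lookup() calls. The FID structure, helper names, and value layout are hypothetical, and transaction setup, locking, and error handling are assumed to be done by the caller.

    /*
     * Illustrative sketch only: storing a name -> FID index entry in a ZAP
     * object through the DMU ZAP interfaces.  The FID structure, helper
     * names and value layout are hypothetical; transaction setup, locking
     * and error handling are assumed to be done by the caller.
     */
    #include <sys/dmu.h>
    #include <sys/zap.h>

    struct lu_fid_sketch {              /* hypothetical 3 x 64-bit FID layout */
        uint64_t f_seq;
        uint64_t f_oid;
        uint64_t f_ver;
    };

    /* Insert "name -> fid" into an existing ZAP object (e.g. an index file). */
    static int
    oi_insert_sketch(objset_t *os, uint64_t zapobj, const char *name,
        const struct lu_fid_sketch *fid, dmu_tx_t *tx)
    {
        /* the value is stored as three 64-bit integers under the key */
        return (zap_add(os, zapobj, name, sizeof (uint64_t), 3, fid, tx));
    }

    /* Look the FID back up by name. */
    static int
    oi_lookup_sketch(objset_t *os, uint64_t zapobj, const char *name,
        struct lu_fid_sketch *fid)
    {
        return (zap_lookup(os, zapobj, name, sizeof (uint64_t), 3, fid));
    }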

OST

 * Object index: we stick with a directory hierarchy under O/. Because sequences will be important, the hierarchy will be O/<seq>/<objid>, where <seq> and <objid> are formatted as variable-width hexadecimal ASCII numbers. The last component of the pathname points to a ZFS regular file (see the pathname sketch after this list).
 * Reference to MDS FID: Each object requires a reference to the MDS FID to which it belongs. For this we need a relatively small extended attribute stored as a ZFS system attribute.
 * Size on MDS, HSM: these also require extended attributes on the objects and are stored as separate system attributes.
 * Larger blocks: we believe that for HPC applications larger blocks of at least 1MB are desirable for performance reasons.
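
The following is a hypothetical sketch of how such an object pathname could be constructed from a sequence and object ID; the helper name and example values are illustrative only, not the actual Lustre OSD code.

    /*
     * Hypothetical illustration of the O/<seq>/<objid> naming scheme above,
     * with both components printed as variable-width hexadecimal ASCII.
     * The helper name and example values are illustrative only.
     */
    #include <stdint.h>
    #include <stdio.h>

    static int
    ost_object_path(char *buf, size_t len, uint64_t seq, uint64_t objid)
    {
        return snprintf(buf, len, "O/%llx/%llx",
            (unsigned long long)seq, (unsigned long long)objid);
    }

    int
    main(void)
    {
        char path[64];

        ost_object_path(path, sizeof(path), 0x200000003ULL, 0x1a2bULL);
        printf("%s\n", path);   /* prints "O/200000003/1a2b" */
        return 0;
    }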

MDT

 * FID to object mapping: We propose to use ZAP OI files for this purpose, hashed over several ZAP files to reduce locking contention (see the hashing sketch after this list).
 * Readdir: readdir must return the FID to the client. The FID was put in an xattr of the dnode in the first implementation, but it leads to every dnode being read during readdir.  For improved performance the FID is also stored in the ZAP directory entry for the name as two 64-bit integers after the dnode number.  This allows the FID to be returned efficiently just via directory traversal.  The ZFS-on-Linux code is modified to ignore the extra integers after the dnode number. See also ZFS TinyZAP.
 * File layout: this needs to go into a system attribute if small, or an extended attribute if it is large (which may be slower). Using larger dnodes seems the right way to go, but these require changes to the ZFS on-disk format.
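
Below is a minimal sketch of how FIDs might be hashed across a fixed set of OI ZAP files to spread locking contention, as proposed in the first item above. The FID layout, hash function, and OI file count are illustrative assumptions rather than the algorithm used by the real osd-zfs code.

    /*
     * Sketch of spreading the FID -> object index across several OI ZAP
     * files to reduce locking contention, as proposed above.  The FID
     * layout, hash function and OI file count are illustrative assumptions,
     * not the algorithm used by the real osd-zfs code.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define OI_FILE_COUNT   64          /* assumed number of OI ZAP files */

    struct lu_fid_sketch {              /* hypothetical 3 x 64-bit FID layout */
        uint64_t f_seq;
        uint64_t f_oid;
        uint64_t f_ver;
    };

    /* Pick which OI ZAP file holds the entry for this FID. */
    static unsigned int
    fid_to_oi_index(const struct lu_fid_sketch *fid)
    {
        uint64_t h = fid->f_seq * 0x9e3779b97f4a7c15ULL ^ fid->f_oid;

        return (unsigned int)(h % OI_FILE_COUNT);
    }

    int
    main(void)
    {
        struct lu_fid_sketch fid = { 0x200000401ULL, 0x5ULL, 0 };

        printf("FID [0x%llx:0x%llx] -> OI file %u\n",
            (unsigned long long)fid.f_seq, (unsigned long long)fid.f_oid,
            fid_to_oi_index(&fid));
        return 0;
    }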

ZFS system attributes
ZFS has an extended attribute model that is very general and supports large extended attributes.

One issue is that the ZFS xattr model provides no protection for xattrs stored on a file: with enough effort a user could corrupt the Lustre EA data, even if the xattr is owned by root. The system attribute (SA) feature separates internal attributes from user attributes and avoids this issue. The SA feature has been available since the ZFS 0.5 release.
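
As a minimal sketch, the helpers below show how a Lustre EA could be written and read back through the SA interfaces (sa_update() and sa_lookup()) rather than as a user-visible xattr. Attribute registration via sa_setup() and transaction handling are assumed to happen elsewhere, and the helper names and the lma_attr handle are hypothetical.

    /*
     * Minimal sketch of storing a Lustre EA as a ZFS system attribute
     * instead of a user-visible xattr, using the SA interfaces.  Attribute
     * registration (sa_setup()) and transaction handling are assumed to
     * happen elsewhere; the helper names and lma_attr handle are
     * hypothetical.
     */
    #include <sys/dmu.h>
    #include <sys/sa.h>

    /* Write a Lustre metadata attribute into the dnode's SA area. */
    static int
    lustre_sa_store(sa_handle_t *hdl, sa_attr_type_t lma_attr,
        void *buf, uint32_t buflen, dmu_tx_t *tx)
    {
        return (sa_update(hdl, lma_attr, buf, buflen, tx));
    }

    /* Read it back; the user xattr interfaces never expose this attribute. */
    static int
    lustre_sa_load(sa_handle_t *hdl, sa_attr_type_t lma_attr,
        void *buf, uint32_t buflen)
    {
        return (sa_lookup(hdl, lma_attr, buf, buflen));
    }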

Larger dnodes with embedded xattrs
To avoid the extra indirection to other data structures currently seen with ZFS xattrs, larger dnodes that can embed small xattrs (large enough for most Lustre xattrs) are very attractive. The Large dnode pool feature was landed for the ZFS 0.7 release. See also ZFS large dnodes.

Larger block size
For HPC applications, a 128K block size is considerably too small, especially considering that Lustre will send at least 1MB of data per RPC when it is available. ZFS should use larger block sizes by default for better efficiency. The ZFS 1MB Block Size feature was landed for the 0.6.3 release.

Read / Write priorities
ZFS has a simple table to control read/write priorities. Given that writes mostly go to cache and are flushed by background daemons, while reads block applications, reads are often given higher priority, with limitations to prevent starving writes. Henry Newman raised concerns that for the HPCS file system this policy is not necessarily ideal. Bill Moore explained that it is simple to change it through settings in a table.

Data and Metadata on Separate Devices
Past parallel file systems and current efforts with MAID arrays have found significant advantages in file systems that store file data and metadata on separate block devices. Some users of Lustre already place the ldiskfs journal on a separate device.

In ZFS this is relatively easy to arrange by introducing new classes of VDEVs. The block allocator would choose a metadata-class VDEV when allocating metadata and a file-data-class VDEV when allocating file data. See Jeff Bonwick's blog entry about block allocation and the pull request Metadata Allocation Classes. This feature was developed by Intel and was landed for the ZFS 0.8 release.

Parity Declustering
Simple parity declustering patterns should be supported for large VDEVs in order to reduce rebuild times. The Parity declustered RAIDz/mirror feature was developed by Intel, and work is under way to complete it for landing in the ZFS 0.9 release.