OST Pool Quotas HLD

From Lustre Wiki
Revision as of 12:27, 12 March 2020 by Sergey (talk | contribs)

Introduction

This document specifies the High Level Design (HLD) for Lustre pool quotas.

The main purposes of this document are:

  • Define the requirements for pool-based quotas in Lustre
  • Outline the strategy to implement pool quotas

The driving use case for pool quotas is providing user and group usage restrictions on an SSD tier in a mixed SSD/HDD system.

Audience

This HLD provides guidance to developers who will implement the stated functionality. This document can also be used to communicate the high-level design and design considerations to other team members.

Functional Requirements

Context

With heterogeneous clusters consisting of some all-flash OSTs and some all-disk OSTs, it becomes desirable to limit an individual's rights to consume the higher-performance OSTs. The Lustre pools feature allows for the grouping of similar OSTs into performance tiers, and for allocating files into tiers. However, it provides no method to limit the usage of the more desirable, more expensive, or smaller-capacity tiers.

Quota controls are the natural solution for administrative limits on space resources. However, quotas in Lustre today are limited to filesystem-wide quota limits on a per-user, per-group, or per-project basis. This design proposes to extend Lustre's quota capabilities to control allocations within pools.

User Requirements

  • R.poolquotas.poolChange: Changes in pool definitions should dynamically affect remaining pool quotas. Note: applies to both OST addition to and removal from a pool.
  • R.poolquotas.capacity: Pool quotas should limit space / capacity. Note: we state no requirement for per-pool inode quotas (this is a moderately desirable stretch goal if simple).
  • R.poolquotas.disable: An administrator should be able to disable (and re-enable) quota accounting for a particular pool.
  • R.poolquotas.multiPool: Any OST may be part of multiple pools. Note: each such pool may have different pool quotas.
  • R.poolquotas.perGroup: Each group may have a unique pool quota.
  • R.poolquotas.perPoolQuotas: Each pool may have a unique set of user, group, and project quotas.
  • R.poolquotas.perProject: Each project may have a unique pool quota.
  • R.poolquotas.perUser: Each user may have a unique pool quota.
  • R.poolquotas.poolDel: Removal of a pool should cause the associated quota limits to be removed.

R.poolquotas.multiPool notes the possibility that an OST may be part of multiple pools, each with different pool quotas. Logically, this is resolved by requiring that all applicable quota limits be met; that is, the allowable quota is the minimum of all applicable remaining quotas.
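As an illustration of this minimum rule, consider a write targeting an OST that falls under both a pool quota and the filesystem-wide quota. The remaining figures below are hypothetical, in KB:

```shell
# Hypothetical remaining quotas (KB) for one user. The OST being written
# to belongs to the "flash" pool and is also subject to the global quota.
remaining_flash=1048576     # 1 GiB left under the "flash" pool quota
remaining_global=2097152    # 2 GiB left under the filesystem-wide quota

# The allowable quota is the minimum of all applicable remaining quotas.
if [ "$remaining_flash" -lt "$remaining_global" ]; then
    effective=$remaining_flash
else
    effective=$remaining_global
fi
echo "effective remaining quota: ${effective} KB"
```

Here the write is capped by the tighter "flash" pool limit, even though the global quota would allow more.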

Interface Requirements

Existing LFS quota controls should be extended to cover pool quotas. LFS quota reporting must also be extended to support pool quotas. As "-p" and "-P" are already used to specify project and default project quotas, "-o" (the second character of "pool") will be used for pools. "lfs setquota" will accept only the short option "-o", while "lfs quota" will accept only the long option "--pool", because "-o" is already used there to specify a UUID.
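The proposed option usage can be summarized as follows (a synopsis of the proposed interface only; the placeholders are illustrative and the options are not yet implemented):

```shell
# Proposed: set a pool quota with the short option "-o"
# lfs setquota -u <user> -o <pool> --block-hardlimit <size> <mountpoint>

# Proposed: report a pool quota with the long option "--pool"
# lfs quota -u <user> --pool <pool> <mountpoint>
```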

Usage

Standard LFS quota controls should be used to set and check pool quotas.

Create or change a pool quota for a user

# lfs setquota -u bob -o flash --block-hardlimit 2g /mnt/lustre

This sets a 2 GB block hard limit for user "bob" on the pool named "flash".

For a group:

# lfs setquota -g interns -o flash --block-hardlimit 2g /mnt/lustre

For comparison, the command to set a user's total capacity limit:

# lfs setquota -u bob --block-hardlimit 2g /mnt/lustre

Report quotas for a user