Lustre Monitoring and Statistics Guide

Introduction
This guide is by Scott Nolin (scott.nolin@ssec.wisc.edu), of the University of Wisconsin Space Science and Engineering Center.

There are a variety of useful statistics and counters available on Lustre servers and clients. This is an attempt to detail some of these statistics and methods for collecting and working with them.

This does not include Lustre log analysis.

The presumed audience for this is system administrators attempting to better understand and monitor their Lustre file systems.

Adding to This Guide
If you have improvements, corrections, or more information to share on this topic please contribute to this page. Ideally this would become a community resource.

Lustre Versions
This information is based on working primarily with Lustre 2.4 and 2.5.

Reading /proc vs lctl
'cat /proc/fs/lustre...' vs 'lctl get_param'

With newer Lustre versions, 'lctl get_param' is the standard and recommended way to read these stats, as it ensures portability. I will use this method in all examples; as a bonus, the syntax is often a little shorter.

Data Formats
The format of the various statistics files varies (and I'm not sure if there is any reason for this). The format names here are entirely *my invention*; this isn't a standard for Lustre or anything.

It is useful to know the various formats of these files so you can parse the data and collect for use in other tools.

Stats
What I consider "standard" stats files present each OST or MDT as a multi-line record: the parameter name, then just the data.

Example:

obdfilter.scratch-OST0001.stats=
snapshot_time             1409777887.590578 secs.usecs
read_bytes                27846475 samples [bytes] 4096 1048576 14421705314304
write_bytes               16230483 samples [bytes] 1 1048576 14761109479164
get_info                  3735777 samples [reqs]

The basic format of each line of the stats files is:

{name of statistic} {count of events} samples [{units}]

Some statistics also contain min/max/average values:

{name of statistic} {count of events} samples [{units}] {minimum value} {maximum value} {sum of values}

The average (mean) value is not reported directly, since it isn't possible to do floating-point math in the kernel; it can be computed as {sum of values}/{count of events}.

Some statistics also contain standard deviation data:

{name of statistic} {count of events} samples [{units}] {minimum value} {maximum value} {sum of values} {sum of value squared}

The standard deviation can be computed as sqrt({sum of values squared}/{count of events} - {mean value}²).

snapshot_time = when the stats were written.

For read_bytes and write_bytes:
 * First number = number of times (samples) the OST has handled a read or write.
 * Second number = the minimum read/write size.
 * Third number = the maximum read/write size.
 * Fourth number = the sum of all the read/write requests in bytes, i.e. the quantity of data read/written.
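For illustration, here is a minimal Python sketch (my own, not part of any Lustre tooling) that parses one of these lines and derives the mean, and the standard deviation when the sum-of-squares field is present:

import math

def parse_stats_line(line):
    # e.g. "read_bytes 27846475 samples [bytes] 4096 1048576 14421705314304"
    # (skip the snapshot_time line; it has a different shape)
    fields = line.split()
    stat = {"name": fields[0], "count": int(fields[1]),
            "units": fields[3].strip("[]")}
    if len(fields) >= 7:                       # min / max / sum present
        stat["min"], stat["max"], stat["sum"] = (int(f) for f in fields[4:7])
        stat["mean"] = stat["sum"] / stat["count"] if stat["count"] else 0
    if len(fields) >= 8:                       # sum of values squared present
        sumsq = int(fields[7])
        stat["stddev"] = math.sqrt(sumsq / stat["count"] - stat["mean"] ** 2)
    return stat

print(parse_stats_line("read_bytes 27846475 samples [bytes] 4096 1048576 14421705314304"))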

Jobstats
Jobstats are slightly more complex multi-line records. The output is YAML-like, with a (-) block for each job. Each OST or MDT has an entry for each jobid (or procname_uid, depending on how jobstats are configured), followed by the data.

Example:

obdfilter.scratch-OST0000.job_stats=
job_stats:
- job_id:          56744
  snapshot_time:   1409778251
  read:    { samples:   18722, unit: bytes, min: 4096, max: 1048576, sum: 17105657856 }
  write:   { samples:     478, unit: bytes, min: 1238, max: 1048576, sum:   412545938 }
  setattr: { samples:       0, unit:  reqs }
  punch:   { samples:      95, unit:  reqs }
- job_id: . . . ETC

Notice this is very similar to 'stats' above.
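In fact, once the leading 'parameter=' is stripped (or 'lctl get_param -n' is used to omit it), the remainder parses as YAML. A minimal Python sketch, assuming PyYAML is installed, lctl is in the PATH, and reusing the example target name above:

import subprocess
import yaml

# Read job_stats for one OST and parse it as YAML (run on the OSS).
out = subprocess.check_output(
    ["lctl", "get_param", "-n", "obdfilter.scratch-OST0000.job_stats"],
    text=True)
data = yaml.safe_load(out) or {}
for job in data.get("job_stats") or []:
    rd = job.get("read", {})
    print(job["job_id"], rd.get("sum", 0), "bytes read")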

Single
These really boil down to just a single number in a file, but "lctl get_param" gives output that is convenient to parse. For example:

[COMMAND LINE]# lctl get_param osd-ldiskfs.*OST*.kbytesavail

osd-ldiskfs.scratch-OST0000.kbytesavail=10563714384
osd-ldiskfs.scratch-OST0001.kbytesavail=10457322540
osd-ldiskfs.scratch-OST0002.kbytesavail=10585374532
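These 'parameter=value' lines are trivial to split; for example, a small Python sketch (assuming lctl is in the PATH):

import subprocess

out = subprocess.check_output(
    ["lctl", "get_param", "osd-ldiskfs.*OST*.kbytesavail"], text=True)
# Turn "parameter=value" lines into a dict of ints.
kbavail = {name: int(val)
           for name, val in (l.split("=", 1)
                             for l in out.splitlines() if "=" in l)}
print(sum(kbavail.values()), "KB available across all OSTs")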

Histogram
Some stats are histograms; these types aren't covered here. They are typically useful on their own, without further parsing. Examples:


 * brw_stats
 * extent_stats

Interesting Statistics Files
This is a collection of various stats files that I have found useful. It is *not* complete or exhaustive; for example, you will notice these are mostly server stats. There is a wealth of client stats too, not detailed here. Additions or corrections are welcome.


 * Host Type = MDS, OSS, or client
 * Target = the parameter to pass to "lctl get_param"
 * Format = the data format discussed above

Working With the Data
Packages, tools, and techniques for working with Lustre statistics.

Open Source Monitoring Packages

 * LMT - provides 'top'-style monitoring of server nodes, and historical data via MySQL. https://github.com/chaos/lmt
 * lltop and xltop - monitoring with batch scheduler integration. Newer Lustre versions with jobstats likely provide similar data very conveniently, but these are still very good examples of working with monitoring data. https://github.com/jhammond/lltop https://github.com/jhammond/xltop

Build it Yourself
Here are basic steps and techniques for working with the Lustre statistics.


 * 1) Gather the data on the hosts you are monitoring: deal with the syntax and extract what you want (see the sketch after this list).
 * 2) Collect the data centrally - either pull or push it to your server or collection of monitoring servers.
 * 3) Process the data - this may be optional or minimal.
 * 4) Alert on the data - optional but often useful.
 * 5) Present the data - allow for visualization, analysis, etc.
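As an illustration of step 1, here is a minimal Python sketch (the metric naming is my own invention) that extracts the write_bytes sum for each OST from the stats format above and prints one "metric value timestamp" line per target:

import subprocess

out = subprocess.check_output(
    ["lctl", "get_param", "obdfilter.*.stats"], text=True)
target, ts = None, None
for line in out.splitlines():
    if line.startswith("obdfilter."):      # e.g. obdfilter.scratch-OST0001.stats=
        target = line.split(".")[1]
    elif line.startswith("snapshot_time"):
        ts = int(float(line.split()[1]))
    elif line.startswith("write_bytes"):
        fields = line.split()              # name count "samples" [units] min max sum
        print("lustre.%s.write_bytes_sum %s %d" % (target, fields[6], ts))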

Some recent tools for working with metrics and time series data have made some of the more difficult parts of this task relatively easy, especially graphical presentation.

Here are details of some solutions tested or in use:

Collectl and Ganglia
Collectl supports Lustre stats. Note that there have recently been some changes: Lustre support in collectl is moving to plugins: http://sourceforge.net/p/collectl/mailman/message/31992463 https://github.com/pcpiela/collectl-lustre

The process below is not based on the new plugin versions, but they should work similarly.


 * 1) collectl does the gathering, writing to a text file on the host being monitored.
 * 2) ganglia does the collection via gmond and the python script 'collectl.py', and presents the data via the ganglia web pages - there is no alerting.

See https://wiki.rocksclusters.org/wiki/index.php/Roy_Dragseth#Integrating_collectl_and_ganglia

Perl and Graphite
Graphite is a very convenient tool for storing, working with, and rendering graphs of time-series data. At SSEC we did a quick prototype for collecting and sending MDS and OSS data using perl. The choice of perl is not particularly important; python or the tool of your choice is fine.

Software Used:
 * Graphite and Carbon - http://graphite.readthedocs.org/en/latest/
 * Lustrestats.pm - perl module to parse different types of lustre stats, used by lustrestats scripts
 * lustrestats scripts - these are simply run every minute via cron on the servers you monitor. For the SSEC prototype we simply sent text data via a TCP socket (a minimal sketch of this follows the list). The check_mk scripts in the next section have replaced these original test scripts.
 * Grafana - http://grafana.org - this is a dashboard and graph editor for graphite. It is not required, as graphite can be used directly, but it is very convenient: it not only makes building dashboards easy, it also encourages rapid interactive analysis of the data. Note that elasticsearch can be used to store dashboards for grafana, but is not required.
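Carbon's plaintext protocol is simply one "metric value timestamp" line per datapoint over TCP (port 2003 by default), so the sender can be tiny. A minimal Python sketch of the approach our perl script took (the hostname is illustrative):

import socket
import time

def send_to_carbon(metrics, host="graphite.example.com", port=2003):
    # metrics: iterable of (path, value, timestamp) tuples
    payload = "".join("%s %s %d\n" % (p, v, t) for p, v, t in metrics)
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(payload.encode("ascii"))

send_to_carbon([("lustre.scratch.OST0000.kbytesavail",
                 10563714384, int(time.time()))])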

check_mk and Graphite
Another option is, instead of sending directly with perl, to use a check_mk local agent check.

The local agent and pnp4nagios mean a reasonable infrastructure is already in place for alerting and also collecting performance data.

Collecting via perl allowed us to send the timestamp from the Lustre stats (when one exists) directly to Carbon, Graphite's data collection daemon. With the check_mk method this timestamp is lost, so timestamps are based on when the local agent check runs. This introduces some inaccuracy - a delay of up to your sample interval.

Collecting via both methods lets you see this difference. The graph below shows all the "export" stats summed for each method, with a derivative applied to create a rate of change. "CMK" is the check_mk data and "timestamped" is from the perl script. Plotting the raw counter data of course shows very little, but with the derived data you can see the difference.

This data was sampled once per minute:



For our uses at SSEC, this was acceptable. Sampling much more frequently will of course make the error smaller.


 * Graphite - http://graphite.readthedocs.org/en/latest/
 * Lustrestats.pm - perl module to parse different types of lustre stats, used by lustrestats scripts
 * OMD - check_mk, nagios, pnp4nagios
 * check_mk local scripts - these are called via check_mk, at whatever rate is desired (a minimal example follows this list). http://www.ssec.wisc.edu/~scottn/files/lustre_stats_mds.cmk http://www.ssec.wisc.edu/~scottn/files/lustre_stats_oss.cmk
 * graphios https://github.com/shawn-sterling/graphios - a python script to send your nagios performance data to graphite
 * Grafana - http://grafana.org - not required, but convenient for dashboards.
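For reference, a check_mk local check is just a script in the agent's local directory that prints one "<status> <service name> <perfdata> <status text>" line per service. A minimal Python sketch in that spirit (the threshold and service naming are illustrative, not taken from the SSEC scripts linked above):

#!/usr/bin/env python3
import subprocess

out = subprocess.check_output(
    ["lctl", "get_param", "osd-ldiskfs.*OST*.kbytesavail"], text=True)
for line in out.splitlines():
    if "=" not in line:
        continue
    param, val = line.split("=", 1)
    ost = param.split(".")[1]                    # e.g. scratch-OST0000
    kb = int(val)
    status = 0 if kb > 100 * 1024 * 1024 else 1  # WARN below ~100 GB free
    print("%d Lustre_%s kbytesavail=%d %d KB available"
          % (status, ost, kb, kb))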

Grafana Lustre Dashboard Screenshots:



Logstash, python, and Graphite
Brock Palen discusses this method: http://www.failureasaservice.com/2014/10/lustre-stats-with-graphite-and-logstash.html

Collectd plugin and Graphite
This talk mentions a custom collectd plugin to send stats to graphite: http://www.opensfs.org/wp-content/uploads/2014/04/D3_S31_FineGrainedFileSystemMonitoringwithLustreJobstat.pdf

It is unclear whether the source for that plugin is available.

A Note about Jobstats
If using a Whisper- or RRD-file based solution, jobstats may not be a great fit. The strength of RRD or Whisper files is that each metric collected has a fixed size on disk. If your metrics are per-job rather than only per-export or per-server, the number of metrics grows without bound.

Solutions anyone?

References and Links

 * Scott Nolin and Andrew Wagner, "Lustre Metrics: New Techniques for Monitoring", LUG2015. http://cdn.opensfs.org/wp-content/uploads/2015/04/Lustre-Metrics-New-Techniques-for-Monitoring_Nolin_Wagner.pdf
 * Daniel Kobras, "Lustre - Finding the Lustre Filesystem Bottleneck", LAD2012. http://www.eofs.eu/fileadmin/lad2012/06_Daniel_Kobras_S_C_Lustre_FS_Bottleneck.pdf
 * Florent Thery, "Centralized Lustre Monitoring on Bull Platforms", LAD2013. http://www.eofs.eu/fileadmin/lad2013/slides/11_Florent_Thery_LAD2013-lustre-bull-monitoring.pdf
 * Daniel Rodwell and Patrick Fitzhenry, "Fine-Grained File System Monitoring with Lustre Jobstat", LUG2014. http://www.opensfs.org/wp-content/uploads/2014/04/D3_S31_FineGrainedFileSystemMonitoringwithLustreJobstat.pdf
 * Gabriele Paciucci and Andrew Uselton, "Monitoring the Lustre* file system to maintain optimal performance", LAD2013. http://www.eofs.eu/fileadmin/lad2013/slides/15_Gabriele_Paciucci_LAD13_Monitoring_05.pdf
 * Christopher Morrone, "LMT Lustre Monitoring Tools", LUG2011. http://cdn.opensfs.org/wp-content/uploads/2012/12/400-430_Chris_Morrone_LMT_v2.pdf


 * https://github.com/jhammond/lltop
 * https://github.com/chaos/lmt
 * https://github.com/chaos/cerebro
 * http://graphite.readthedocs.org/en/latest/
 * https://mathias-kettner.de/check_mk
 * https://github.com/shawn-sterling/graphios