Lustre User Group 2016

LUG 2016 was held in Portland, Oregon, from April 5–7, 2016.

Day 1 – Tuesday, April 5

  • Welcome Remarks - Stephen Simms, LUG Program Chair
  • Lustre 101: A Quick Overview - Stephen Simms, LUG Program Chair
  • Community Release Update - Peter Jones, LWG Co-Chair
  • Lustre 2.9 and Beyond - Andreas Dilger, Intel
  • Improved Versioning, Building, Packaging, and Distribution of Lustre - Christopher Morrone, Lawrence Livermore National Laboratory; Giuseppe Di Natale, Lawrence Livermore National Laboratory
  • LMT: It’s Only A Flesh Wound - Olaf Faaland, Lawrence Livermore National Laboratory
  • Enhancing Lustre Security with the Whitelist/Blacklist Patch - Josh Judd, Warp Mechanics
  • Lustre Security Infrastructure Today and Tomorrow - John Hammond, Intel
  • Security Isolation for Lustre - Sebastien Buisson, DDN
  • Lustre Update from Seagate - Peter Bojanic, Seagate
  • Large Streaming IO for Lustre on ZFS - Jinshan Xiong, Intel
  • Lustre ZFS Snapshots - Fan Yong, Intel
  • Vectorized ZFS RAIDZ Implementation - Gvozden Nešković, Frankfurt Institute for Advanced Studies
  • Status of the Upstream Client - James Simmons, Oak Ridge National Laboratory
  • Removing Technical Debt - Ben Evans, Cray
  • Intel® Solutions for Lustre* Software: Update - Jessica Popp, Intel
  • Lustre.Org – A Community Resource - Ken Rawlings, Lustre.org Working Group OpenSFS Liaison
  • OpenSFS and EOFS Update - Charlie Carroll, Cray, OpenSFS Board Member; Hugo Falter, EOFS Board Member

Day 2 – Wednesday, April 6

  • Lustre in a Condo Computing Environment - John White, Lawrence Berkeley National Laboratory
  • Lustre Deployed Three Different Ways to Meet Researchers’ Needs - Scott Yockel, Harvard University
  • Tiering Storage (around Lustre) for Big Data and Supercomputing: Cray’s Portfolio of Offerings - Jason Goodman, Cray Inc.
  • The 1 Million IOPS Lustre File System at TU Dresden - Michael Kluge, Technische Universität Dresden; Johann Peyrard, Atos
  • Building Lustre Solutions with NVM Devices Today - James Coomer, DataDirect Networks
  • Lustre Deployment using Intel OmniPath Interconnect - Brian Johanson, Pittsburgh Supercomputing Center; J. Ray Scott, Pittsburgh Supercomputing Center
  • InfiniBand At A Distance - Steve Woods, Cray; Dave McMillen, Cray
  • Multi-rail LNet for Lustre - Amir Shehata, Intel; Olaf Weber, SGI
  • A New Approach to Lustre – Join HPE for a Look at the Future of Optimized Lustre Environments - Craig Belusar, Hewlett Packard Enterprise
  • An Architecture for Docker on Lustre - Blake Caldwell, Oak Ridge National Laboratory
  • Scaling Apache Spark on Lustre - Nicholas Chaimov, University of Oregon
  • Scaling LDISKFS for the Future - Artem Blagodarenko, Seagate
  • DL-SNAP: A Directory Level SNAPSHOT Facility on Lustre - Shinji Sumimoto, Fujitsu
  • Board Q&A Panel

Day 3 – Thursday, April 7

  • Developing an Open Source Object Storage Copytool for HSM on Lustre - Frederick Lefebvre, Calcul Québec; Simon Guilbault, Calcul Québec
  • Lustre Data Mover: Because File Systems are Rooted Trees and rsync Must Die - Rick Wagner, San Diego Supercomputer Center
  • Project Quota for Lustre - Li Xi, DDN
  • ASCAR: Increasing Performance Through Automated Contention Management - Yan Li, University of California, Santa Cruz
  • Evaluating Progressive File Layouts for Lustre - Richard Mohr, University of Tennessee, Knoxville
  • Closing - Stephen Simms, LUG Program Chair