PARALLEL DATA LAB 

PDL Abstract

...And eat it too: High read performance in write-optimized HPC I/O middleware file formats

4th Petascale Data Storage Workshop held in conjunction with Supercomputing '09, November 15, 2009, Portland, Oregon. Supersedes Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-09-111, November 2009.

Milo Polte^, Jay Lofstead†, John Bent‡, Garth A. Gibson^, Scott A. Klasky§, Qing Liu§, Manish Parashar*, Norbert Podhorszki§, Karsten Schwan†, Meghan Wingate‡, Matthew Wolf†

^ Carnegie Mellon University
† Georgia Institute of Technology
‡ Los Alamos National Lab
§ Oak Ridge National Lab
* Rutgers University

As HPC applications run at increasingly high process counts on larger and larger machines, both the frequency of checkpoints needed for fault tolerance and the resolution and size of Data Analysis Dumps are expected to increase proportionally. To maintain an acceptable ratio of time spent performing useful computation to time spent performing I/O, write bandwidth to the underlying storage system must grow in proportion to this increase in checkpoint and computation size. Unfortunately, popular scientific self-describing file formats such as netCDF and HDF5 are designed with a focus on portability and flexibility; careful crafting of the output structure and API calls is required to optimize these formats for write performance. To provide sufficient write bandwidth to continue to support the demands of scientific applications, the HPC community has developed a number of I/O middleware layers that structure output into write-optimized file formats. The obvious concern with any write-optimized file format, however, is a corresponding penalty on reads. In a log-structured filesystem, for example, a file generated by random writes can be written efficiently, but reading it back sequentially later can perform very poorly. Simulation results require efficient read-back for visualization and analytics, and although most checkpoint files are never used, the efficiency of a restart is very important in the face of inevitable failures. The utility of write-speed-improving middleware would be greatly diminished if it sacrificed acceptable read performance. In this paper we examine the read performance of two write-optimized middleware layers on large parallel machines and compare it to reading data stored natively in popular file formats.
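To make the write-versus-read trade-off concrete, the toy C sketch below (not from the paper; the file name, record header layout, and offsets are invented for illustration) appends records to a log in arrival order using plain POSIX stdio, then shows the header scanning and seeking a reader must perform to recover the logical order. This is only an analogy for the log-structured behavior the abstract mentions, not the middleware formats evaluated in the paper.

/* Illustrative sketch only: a toy "log-structured" write pattern.
 * Records are appended in the order they arrive, each preceded by a
 * small header recording where the data belongs logically. Writing is
 * purely sequential (fast); reading the data back in logical order
 * requires scanning headers and seeking (the potential read penalty). */
#include <stdio.h>
#include <string.h>

struct rec_hdr {            /* hypothetical per-record header */
    long logical_offset;    /* where this payload belongs logically */
    long length;            /* payload size in bytes */
};

int main(void)
{
    const char *path = "toy_log.bin";   /* invented file name */
    FILE *f = fopen(path, "wb");
    if (!f) { perror("fopen"); return 1; }

    /* Writes arrive out of logical order (e.g., from different ranks)
     * but are appended sequentially to the log. */
    long logical_offsets[] = { 4096, 0, 8192 };
    for (int i = 0; i < 3; i++) {
        char payload[4096];
        memset(payload, 'A' + i, sizeof payload);
        struct rec_hdr h = { logical_offsets[i], (long)sizeof payload };
        fwrite(&h, sizeof h, 1, f);
        fwrite(payload, sizeof payload, 1, f);
    }
    fclose(f);

    /* A reader wanting the logical file must walk the headers and seek
     * between records rather than streaming the bytes in order. */
    f = fopen(path, "rb");
    if (!f) { perror("fopen"); return 1; }
    struct rec_hdr h;
    while (fread(&h, sizeof h, 1, f) == 1) {
        printf("record: logical offset %ld, %ld bytes\n",
               h.logical_offset, h.length);
        fseek(f, h.length, SEEK_CUR);   /* skip payload */
    }
    fclose(f);
    return 0;
}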

FULL PAPER: pdf