PARALLEL DATA LAB 

PDL Abstract

Six Degrees of Scientific Data: Reading Patterns for Extreme Scale Science IO

20th ACM International Symposium on High-Performance Parallel and Distributed Computing (HPDC'11), June 2011.

Jay Lofstead†, Milo Polte, Garth A. Gibson, Scott A. Klasky**, Karsten Schwan*, Ron Oldfield†,
Matthew Wolf**, Qing Liu**

School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

*Georgia Institute of Technology
**Oak Ridge National Lab
†Sandia National Labs

http://www.pdl.cmu.edu/

Petascale science simulations generate tens of terabytes of application data per day, much of it devoted to their checkpoint/restart fault tolerance mechanisms. Previous work demonstrated the importance of carefully managing such output to prevent application slowdown caused by IO blocking and resource contention, and to fully exploit the IO bandwidth available to the petascale machine. This paper takes a further step in understanding and managing extreme-scale IO. Specifically, its evaluations seek to understand how to efficiently read data for subsequent data analysis, visualization, checkpoint restart after a failure, and other read-intensive operations. Together, these actions support the "end-to-end" needs of scientists, enabling the scientific processes being undertaken. Contributions include the following. First, working with application scientists, we define 'read' benchmarks that capture the common read patterns used by analysis codes. Second, these read patterns are used to evaluate different IO techniques at scale to understand the effects of alternative data sizes and organizations on the performance seen by end users. Third, defining the novel notion of a 'data district' to characterize how data is organized for reads, we experimentally compare the read performance of the ADIOS middleware's log-based BP format with that of the logically contiguous NetCDF and HDF5 formats commonly used by analysis tools. Measurements assess performance across patterns and with different data sizes, organizations, and read process counts. Outcomes demonstrate that high end-to-end IO performance requires data organizations that offer flexibility in data layout and placement on parallel storage targets, including in ways that trade off the performance of data writes against that of reads.
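To make the first contribution concrete, the sketch below (not taken from the paper) shows one read pattern analysis codes commonly issue: extracting a single 2D plane from a 3D variable stored in a logically contiguous HDF5 file, using a hyperslab selection. The file name, dataset path, and array extents are hypothetical placeholders.

/*
 * Illustrative sketch of a planar read pattern. File name,
 * dataset path ("/temperature"), and extents are assumptions,
 * not taken from the paper's benchmarks.
 */
#include <hdf5.h>
#include <stdlib.h>

int main(void)
{
    const hsize_t NY = 512, NX = 512;  /* assumed global Y/X extents   */
    const hsize_t plane = 128;         /* Z index of the plane to read */

    hid_t file   = H5Fopen("restart.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset   = H5Dopen2(file, "/temperature", H5P_DEFAULT);
    hid_t fspace = H5Dget_space(dset);

    /* Select one XY-plane out of the logically contiguous 3D array. */
    hsize_t start[3] = { plane, 0, 0 };
    hsize_t count[3] = { 1, NY, NX };
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);

    /* Memory dataspace shaped to match the selection. */
    hid_t mspace = H5Screate_simple(3, count, NULL);
    double *buf  = malloc(NY * NX * sizeof(double));

    H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, buf);

    /* ... hand buf to the analysis or visualization routine ... */

    free(buf);
    H5Sclose(mspace);
    H5Sclose(fspace);
    H5Dclose(dset);
    H5Fclose(file);
    return 0;
}

Under a contiguous layout this selection maps to large strided reads, while under a log-based layout such as ADIOS BP the same plane is scattered across per-writer segments; how such a selection maps onto storage targets is exactly what the paper's 'data district' notion and write-vs-read tradeoff measurements characterize.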

FULL PAPER: pdf