Scalable I/O

The Parallel Data Lab is participating in the Scalable I/O initiative, a collective research effort involving several universities and research labs to address the needs of I/O-intensive applications. The focus of this work has evolved from improving I/O performance for massively parallel and large multiprocessor systems to include other forms of parallel computing, such as networks of workstations. A list of participating organizations can be found here.

The Scalable I/O initiative involves several working groups, focusing on applications, performance evaluation, compilers and languages, operating and file systems, and integration and testbeds. The CMU Parallel Data Laboratory is active in the operating and file systems working group.

Our participation concentrates on three main thrust areas:

    Specification of SIO-LLAPI, a scalable I/O low-level application programming interface. We have been an active participant in drafting the Scalable I/O low-level application programming interface (SIO-LLAPI), which was formally announced to the parallel computing community at Supercomputing '96. An overview of the API as well as related papers can be found here.

    Development of an SIO-LLAPI-compliant parallel file system for networks of workstations. We use NASDs (network-attached secure disks) as our storage servers and Unix workstations as clients. The file system is a user-level (library) file system that is fully SIO-LLAPI compliant and provides striping and RAID across NASD drives. We expect to finish the implementation of this parallel SIO file system in a few months. The file system is layered on top of Cheops, a scalable storage service for NASDs that provides customizable per-file layouts, decentralized striping/RAID, and storage rearrangement without introducing store-and-forward servers. Clients still access NASDs directly through the Cheops clerk, a library linked in with client applications.
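To illustrate the kind of layout arithmetic a striped file system performs, here is a minimal sketch of a RAID-0-style mapping from a logical file offset to a (drive, drive offset) pair. The function name and parameters are hypothetical and do not reflect Cheops' actual layout code, which also supports per-file layouts and parity-based RAID.

```python
# Hypothetical sketch of RAID-0 striping arithmetic; not Cheops' actual code.

def stripe_map(offset, stripe_unit, n_drives):
    """Map a logical file offset to (drive index, offset within that drive).

    The file is divided into fixed-size stripe units assigned to drives
    round-robin: unit 0 -> drive 0, unit 1 -> drive 1, and so on.
    """
    unit = offset // stripe_unit            # which stripe unit the byte falls in
    drive = unit % n_drives                 # round-robin drive assignment
    local = (unit // n_drives) * stripe_unit + offset % stripe_unit
    return drive, local

# With a 64 KB stripe unit across 4 drives, byte 0 lands at the start of
# drive 0, and the fifth stripe unit wraps back to drive 0.
print(stripe_map(0, 65536, 4))          # (0, 0)
print(stripe_map(4 * 65536, 65536, 4))  # (0, 65536)
```

Because the mapping is pure arithmetic, every client can compute drive locations independently, which is what lets clients access NASDs directly instead of routing requests through a store-and-forward server.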

    Developing parallel applications. We are currently developing a parallel FFT implementation that uses the SIO-LLAPI file system. The FFT application exhibits concurrent write-sharing behavior as well as strided and asynchronous I/O, both of which are supported by the underlying file system. More information about applications can be found here.
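As a rough illustration of the strided access pattern such an application generates, the sketch below reads every n-th record of a file, as one process might when it owns an interleaved subset of a distributed matrix. The helper name and record layout are invented for this example; a file system with native strided support, as described above, would service this as a single strided request rather than a seek/read loop.

```python
import os
import tempfile

# Hypothetical sketch of a strided read pattern; invented for illustration.

def read_strided(path, first, record_size, stride, count):
    """Read `count` records of `record_size` bytes each, starting at byte
    offset `first` and advancing `stride` bytes between records."""
    records = []
    with open(path, "rb") as f:
        for i in range(count):
            f.seek(first + i * stride)
            records.append(f.read(record_size))
    return records

# Demo: a file of eight 2-byte records; "process 1 of 4" reads records 1 and 5.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"".join(bytes([i, i]) for i in range(8)))
tmp.close()
rank, nprocs, rsize = 1, 4, 2
mine = read_strided(tmp.name, rank * rsize, rsize, nprocs * rsize, 2)
os.unlink(tmp.name)
print(mine)  # [b'\x01\x01', b'\x05\x05']
```

Issuing such requests asynchronously lets the application overlap its FFT computation with I/O instead of blocking on each record.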

Acknowledgements

We thank the members and companies of the PDL Consortium: Amazon, Facebook, Google, Hewlett Packard Enterprise, Hitachi Ltd., Intel Corporation, IBM, Microsoft Research, NetApp, Inc., Oracle Corporation, Pure Storage, Salesforce, Samsung Semiconductor Inc., Seagate Technology, Two Sigma, and Western Digital for their interest, insights, feedback, and support.