The Parallel Data Lab is participating in the Scalable I/O initiative, a collective research effort involving several universities and research labs to address the needs of I/O-intensive applications. The scope of this work has evolved from improving I/O performance for massively parallel and large multiprocessor systems to include other forms of parallel computing, such as networks of workstations. A list of participating organizations can be found here.
The Scalable I/O initiative involves several working groups, which focus on applications, performance evaluation, compilers and languages, operating and file systems, and integration and testbeds. The CMU Parallel Data Laboratory is active in the operating and file systems working group.
Our participation concentrates on three main thrust areas:
Development of a SIO-LLAPI compliant parallel file system for networks of workstations

We use NASDs (network-attached secure disks) as our storage servers and Unix workstations as clients. The file system is a user-level (library) file system which is fully SIO-LLAPI compliant and which provides striping and RAID across NASD drives. We expect to finish the implementation of this parallel SIO file system in a few months. This file system is layered on top of Cheops, a scalable storage service using NASDs, which provides customizable per-file layouts, decentralized striping/RAID, and storage rearrangement, all without introducing store-and-forward servers. Clients access NASDs directly through the Cheops clerk, a library linked into client applications.
Developing parallel applications

We are currently developing a parallel FFT implementation which uses the SIO-LLAPI file system. The FFT application exhibits concurrent write-sharing as well as strided and asynchronous I/O, all of which are supported by the underlying file system. More information about applications can be found here.