Speaker: Tom Cormen, Dartmouth College
ViC*: A Compiler for Virtual-Memory C*
Date: January 25, 1996
Abstract: Throughout the history of electronic computing, no matter how big and fast the top machines have been, there have always been applications that needed them to be bigger and faster. Even today, we see this phenomenon in parallel computing.
Over thirty years ago, computer architects devised virtual memory to solve this problem for sequential machines. We see two approaches in today's parallel machines:
1. Have no built-in support for virtual memory. Data is kept on a disk system, and programmers must code explicit disk accesses.
2. Run traditional sequential virtual memory on the individual nodes. This approach fails to take advantage of aggregate data-parallel operations, and hence it yields suboptimal performance.
Built-in virtual-memory support for data-parallel programming would allow the memory requirements of many application programs to exceed the available memory size without increasing software development time or software complexity. Moreover, programmers would not need specialized knowledge of I/O-optimal algorithms to avoid huge performance penalties.
This talk describes a linguistic step toward such a solution. Our approach is based on the data-parallel language C*. A compiler, ViC*, transforms a C* program whose parallel variables are so large that they must reside on disk into a C program with explicit virtual-processor loops, explicit I/O, and added library calls. In a ViC* source program, any C* shape may be declared to be outofcore, which means that all parallel variables of this shape are out-of-core. The explicit I/O calls added by ViC* are direct parallel disk reads and writes of sections of out-of-core parallel variables. The library calls added by ViC* are typically for operations requiring communication in out-of-core parallel variables, e.g., reductions, gets, and sends.
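To make this concrete, here is a small sketch in C*-like syntax. The shape declaration, parallel-variable declaration, and reduction assignment follow standard C* conventions; the placement of the outofcore keyword and the shape size are assumptions for illustration, since the abstract does not give the exact syntax:

```c
/* Hypothetical ViC* fragment. The `outofcore` keyword is from the
   abstract; its exact placement here is an assumption. */
outofcore shape [64][1048576]S;   /* shape too large to fit in memory */

int:S a, b, sum;   /* parallel variables of shape S, resident on disk */
int total = 0;

with (S) {
    /* ViC* compiles this elementwise operation into explicit
       virtual-processor loops over in-memory sections of a, b, and
       sum, with parallel disk reads and writes around each section. */
    sum = a + b;

    /* A reduction involves communication across the out-of-core
       variable, so ViC* would emit a library call here. */
    total += sum;
}
```

Under this scheme, the programmer writes ordinary C* over the shape S; the compiler, not the programmer, inserts the sectioned disk I/O and the out-of-core communication calls.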
This talk is a reprise of the talk that I gave at the Workshop on Modeling and Specification of I/O at the IEEE Symposium on Parallel and Distributed Processing in October 1995.
Joint work with Alex Colvin, Melissa Hirschl, and Anna Poplawski.