SDI Seminar

Speaker: Sandhya Dwarkadas, University of Rochester

Date: January 29, 1998

Transparent Shared Memory on Clusters of SMPs Using Remote-Write Networks


Clusters of symmetric multiprocessors (SMPs) are fast becoming a widely available platform for parallel computing. There is a need for a uniform programming paradigm that allows users to extend parallelism across multiple SMP nodes. A shared memory paradigm provides ease of use while leveraging the available hardware coherence to handle sharing within an SMP. The challenge is to ensure that software overhead is incurred only when data is actively shared across SMPs in the cluster.

I will describe a "two-level" software distributed shared memory (DSM) system --- Cashmere-2L --- that meets this challenge. Cashmere-2L uses hardware coherence to share memory within an SMP, and maintains coherence across SMPs in software, minimizing interference with normal execution by allowing a high degree of asynchrony in the protocol.
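To make the two-level idea concrete, the following is a minimal illustrative sketch (not Cashmere-2L's actual code) of the classic twin/diff technique commonly used by page-based software DSM systems: when a node first writes a page, it saves a clean "twin"; at a synchronization release, the page is compared against the twin and only the modified words are propagated to the page's home copy. All names here (`make_twin`, `compute_diff`, `apply_diff`, `PAGE_WORDS`) are hypothetical.

```python
# Illustrative twin/diff sketch for page-based software coherence.
# Assumption: pages are modeled as small lists of words for clarity.

PAGE_WORDS = 8  # tiny "page" for illustration

def make_twin(page):
    # Save a clean copy of the page at the first write fault.
    return list(page)

def compute_diff(page, twin):
    # (index, new_value) pairs for words changed since the twin was made.
    return [(i, w) for i, (w, t) in enumerate(zip(page, twin)) if w != t]

def apply_diff(home_copy, diff):
    # With a remote-write network, these updates could be written directly
    # into the home node's memory without interrupting its processors.
    for i, w in diff:
        home_copy[i] = w

# One node modifies its cached copy; at release time only the diff travels.
home = [0] * PAGE_WORDS       # home node's copy of the page
local = list(home)            # cached copy on another SMP node
twin = make_twin(local)       # taken at the first write fault
local[2] = 42                 # writes performed by the local node
local[5] = 7
apply_diff(home, compute_diff(local, twin))
```

The point of the sketch is that the software protocol need only intervene at page faults and synchronization points; within an SMP node, the hardware keeps processors coherent with no such overhead.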

Low-latency, high-bandwidth remote-write networks, such as DEC's Memory Channel, provide the possibility of transparent, inexpensive access to the memory of remote SMPs. These networks suggest the need to re-evaluate the assumptions underlying the design of DSM protocols, and specifically, to consider protocols that communicate consistency information at a much finer grain. I will discuss these design trade-offs in the context of Cashmere-2L, and present experimental results evaluating the performance effects of our design decisions.