PDL ABSTRACT

The Parallel Persistent Memory Model

SPAA ’18, July 16–18, 2018, Vienna, Austria.

Guy E. Blelloch*, Phillip B. Gibbons*, Yan Gu*, Charles McGuffey*, Julian Shun†

* Carnegie Mellon University
† MIT CSAIL

http://www.pdl.cmu.edu/

We consider a parallel computational model, the Parallel Persistent Memory model, comprised of P processors, each with a fast local ephemeral memory of limited size, and sharing a large persistent memory. The model allows for each processor to fault at any time (with bounded probability), and possibly restart. When a processor faults, all of its state and local ephemeral memory is lost, but the persistent memory remains. This model is motivated by upcoming non-volatile memories that are nearly as fast as existing random access memory, are accessible at the granularity of cache lines, and have the capability of surviving power outages. It is further motivated by the observation that in large parallel systems, failure of processors and their caches is not unusual.

We present several results for the model, using an approach that breaks a computation into capsules, each of which can be safely run multiple times. For the single-processor version we describe how to simulate any program in the RAM model, the external memory model, or the ideal cache model with an expected constant factor overhead. For the multiprocessor version we describe how to efficiently implement a work-stealing scheduler within the model such that it handles both soft faults, in which a processor restarts, and hard faults, in which a processor permanently fails. For any multithreaded fork-join computation that is race free, write-after-read conflict free, and has W work, D depth, and C maximum capsule work in the absence of faults, the scheduler guarantees a time bound on the model of O(W/P_A + D(P/P_A)⌈log_{1/(Cf)} W⌉) in expectation, where P is the maximum number of processors, P_A is the average number, and f ≤ 1/(2C) is the probability a processor faults between successive persistent memory accesses. Within the model, and using the proposed methods, we develop efficient algorithms for parallel prefix sums, merging, sorting, and matrix multiply.
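The capsule idea can be illustrated with a small single-processor sketch. This is only an illustrative simulation under assumed names (`run_capsules`, `FaultError`, and the dict-based persistent memory are inventions for this example, not the paper's implementation): progress is committed to persistent memory only at capsule boundaries, and each capsule writes only to locations it does not read, so re-running a faulted capsule is safe.

```python
import random

class FaultError(Exception):
    """Simulated processor fault: all ephemeral state is lost."""

def run_capsules(capsules, pmem, fault_prob=0.3, rng=None):
    # pmem (persistent memory) survives faults; the "next" slot records
    # the restart point, advanced only when a capsule completes.
    rng = rng or random.Random(0)
    pmem.setdefault("next", 0)
    while pmem["next"] < len(capsules):
        i = pmem["next"]
        ephemeral = {}                       # fast local memory, lost on fault
        try:
            if rng.random() < fault_prob:
                raise FaultError()           # fault: discard ephemeral state
            capsules[i](pmem, ephemeral)
        except FaultError:
            continue                         # restart capsule i from scratch
        pmem["next"] = i + 1                 # commit at the capsule boundary

# Example: prefix sums, one idempotent capsule per element.  Each capsule
# reads prefix[j] and x[j] and writes the fresh slot prefix[j+1], so a
# re-run writes the same value (write-after-read conflict free).
pmem = {"x": [3, 1, 4, 1, 5], "prefix": [0, None, None, None, None, None]}

def make_capsule(j):
    def capsule(pm, eph):
        eph["v"] = pm["prefix"][j] + pm["x"][j]
        pm["prefix"][j + 1] = eph["v"]
    return capsule

run_capsules([make_capsule(j) for j in range(5)], pmem)
print(pmem["prefix"])   # -> [0, 3, 4, 8, 9, 14]
```

Because every capsule is idempotent with respect to persistent memory, the result is the same regardless of how many simulated faults occur; faults only add re-execution work, which is what the expected time bound accounts for.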

FULL PAPER: pdf

© 2019. Legal Info.
Last updated 15 October, 2018