PDL Abstract

PipeDream: Generalized Pipeline Parallelism for DNN Training

SOSP ’19, October 27–30, 2019, Huntsville, ON, Canada.

Deepak Narayanan‡*, Aaron Harlap†*, Amar Phanishayee*, Vivek Seshadri*, Nikhil R. Devanur*, Gregory R. Ganger†, Phillip B. Gibbons†, Matei Zaharia‡

* Microsoft Research
† Carnegie Mellon University
‡ Stanford University

http://www.pdl.cmu.edu/

DNN training is extremely time-consuming, necessitating efficient multi-accelerator parallelization. Current approaches to parallelizing training primarily use intra-batch parallelization, where a single iteration of training is split over the available workers, but suffer from diminishing returns at higher worker counts. We present PipeDream, a system that adds inter-batch pipelining to intra-batch parallelism to further improve parallel training throughput, helping to better overlap computation with communication and reduce the amount of communication when possible. Unlike traditional pipelining, DNN training is bi-directional, where a forward pass through the computation graph is followed by a backward pass that uses state and intermediate data computed during the forward pass. Naïve pipelining can thus result in mismatches in state versions used in the forward and backward passes, or excessive pipeline flushes and lower hardware efficiency. To address these challenges, PipeDream versions model parameters for numerically correct gradient computations, and schedules forward and backward passes of different minibatches concurrently on different workers with minimal pipeline stalls. PipeDream also automatically partitions DNN layers among workers to balance work and minimize communication. Extensive experimentation with a range of DNN tasks, models, and hardware configurations shows that PipeDream trains models to high accuracy up to 5.3× faster than commonly used intra-batch parallelism techniques.
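To make the two mechanisms in the abstract concrete, the sketch below simulates, per pipeline stage, (i) a schedule in which each stage alternates forward and backward passes of different minibatches once its warm-up forwards are done (the paper's 1F1B schedule, in simplified form), and (ii) parameter versioning (weight stashing), in which each stage records the parameter version used for a minibatch's forward pass so the matching backward pass sees the same version. This is an illustrative toy, not PipeDream's implementation; the stage count, minibatch count, and the Stage/one_f_one_b names are invented for this example.

# Minimal sketch (not PipeDream's code) of a 1F1B-style per-stage schedule
# with weight stashing. All names and constants are illustrative.

NUM_STAGES = 4          # pipeline depth (one stage per worker)
NUM_MINIBATCHES = 8     # minibatches pushed through the pipeline


class Stage:
    """One pipeline stage holding its own slice of the model parameters."""

    def __init__(self, stage_id):
        self.stage_id = stage_id
        self.version = 0     # number of gradient updates applied at this stage
        self.stash = {}      # minibatch id -> parameter version used in forward

    def forward(self, mb):
        # Stash the version used now so the backward pass for this minibatch
        # is numerically consistent even if newer updates land in between.
        self.stash[mb] = self.version
        print(f"stage {self.stage_id}: forward  mb{mb} (params v{self.version})")

    def backward(self, mb):
        used = self.stash.pop(mb)
        print(f"stage {self.stage_id}: backward mb{mb} (params v{used})")
        self.version += 1    # apply this minibatch's gradient update


def one_f_one_b(num_stages, num_minibatches):
    """Serialized simulation of the schedule each stage follows.

    Stage s performs (num_stages - s) warm-up forward passes, then alternates
    one backward and one forward until all minibatches are drained.
    """
    for stage_id in range(num_stages):
        stage = Stage(stage_id)
        warmup = num_stages - stage_id          # deeper stages warm up less
        next_fwd = next_bwd = 0
        # Warm-up phase: forward passes only.
        for _ in range(min(warmup, num_minibatches)):
            stage.forward(next_fwd)
            next_fwd += 1
        # Steady state: alternate one backward and one forward (1F1B).
        while next_bwd < num_minibatches:
            stage.backward(next_bwd)
            next_bwd += 1
            if next_fwd < num_minibatches:
                stage.forward(next_fwd)
                next_fwd += 1


if __name__ == "__main__":
    one_f_one_b(NUM_STAGES, NUM_MINIBATCHES)

Running the sketch prints each stage's interleaving of forward and backward work and shows that every backward pass reuses the exact parameter version stashed by its forward pass, which is the property the abstract refers to as numerically correct gradient computation under pipelining.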

FULL PAPER: pdf