Automating Dependence-Aware Parallelization of Machine Learning Training on Distributed Shared Memory
EuroSys '19: Proceedings of the Fourteenth EuroSys Conference, March 2019, Dresden, Germany.
Jinliang Wei*, Garth A. Gibson†*‡, Phillip B. Gibbons*, Eric P. Xing§*
* Carnegie Mellon University
† Vector Institute
‡ University of Toronto
§ Petuum Inc.
Machine learning (ML) training is commonly parallelized using data parallelism. A fundamental limitation of data parallelism is that conflicting (concurrent) parameter accesses during ML training usually diminish or even negate the benefits provided by additional parallel compute resources. Although it is possible to avoid conflicting parameter accesses by carefully scheduling the computation, existing systems rely on manual parallelization by the programmer, and it remains an open question when such parallelization is possible.
We present Orion, a system that automatically parallelizes serial imperative ML programs on distributed shared memory. The core of Orion is a static dependence analysis mechanism that determines when dependence-preserving parallelization is effective and maps a loop computation to an optimized distributed computation schedule. Our evaluation shows that for a number of ML applications, Orion can parallelize a serial program while preserving critical dependences, and thus achieve a significantly faster convergence rate than data-parallel programs, and a convergence rate and computation throughput comparable to state-of-the-art manual parallelizations, including model-parallel programs.
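To make the idea of dependence-preserving scheduling concrete, the toy sketch below groups model updates into parallel rounds so that no two updates in the same round touch the same parameter, which is the conflict that data parallelism would otherwise introduce. This is an illustrative greedy scheduler under assumed inputs (update ids and the parameter sets they access), not Orion's actual static-analysis algorithm.

```python
def schedule_disjoint(updates):
    """Greedily assign updates to rounds such that updates within a
    round access disjoint parameter sets and can run in parallel
    without conflicts, while cross-round ordering preserves the
    serial program's dependences.

    `updates` is a list of (update_id, set_of_parameter_ids) pairs,
    given in serial execution order.
    """
    rounds = []  # each entry: (params_taken_this_round, update_ids)
    for upd, params in updates:
        for taken, members in rounds:
            if taken.isdisjoint(params):  # no conflicting access
                taken |= params
                members.append(upd)
                break
        else:  # conflicts with every existing round: start a new one
            rounds.append((set(params), [upd]))
    return [members for _, members in rounds]

# Four serial updates collapse into two conflict-free parallel rounds.
updates = [("u0", {0, 1}), ("u1", {1, 2}), ("u2", {2, 3}), ("u3", {0, 3})]
print(schedule_disjoint(updates))  # → [['u0', 'u2'], ['u1', 'u3']]
```

Within each round the updates commute (they write disjoint parameters), so running them concurrently yields the same result as the serial loop; this is the property a data-parallel execution gives up when workers race on shared parameters.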