BEST PAPER AT OSDI'21!
Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep Learning

15th USENIX Symposium on Operating Systems Design and Implementation, Virtual Event, July 14–16, 2021.

Aurick Qiao¹٫², Sang Keun Choe², Suhas Jayaram Subramanya², Willie Neiswanger², Qirong Ho¹, Hao Zhang¹٫², Gregory R. Ganger², Eric P. Xing³٫¹٫²

¹ Petuum, Inc.
² Carnegie Mellon University
³ MBZUAI

http://www.pdl.cmu.edu/

Pollux improves scheduling performance in deep learning (DL) clusters by adaptively co-optimizing inter-dependent factors at both the per-job level and the cluster-wide level. Most existing schedulers require users to specify a fixed amount of resources (e.g., a GPU count) for each job, which often leads to inefficient resource use. Some recent schedulers choose job resources on behalf of users, but do so without awareness of how DL training itself can be re-optimized (e.g., by adjusting the batch size and learning rate) to better utilize the provided resources.

Pollux simultaneously considers both aspects. By monitoring the status of each job during training, Pollux models how its goodput (a metric we introduce that combines system throughput with statistical efficiency) would change if resources were added or removed. Pollux dynamically (re-)assigns resources to improve cluster-wide goodput, while respecting fairness and continually re-tuning each DL job to better utilize its assigned resources.
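
For intuition, below is a minimal Python sketch of the goodput model from the paper: goodput is throughput times statistical efficiency, where EFFICIENCY_t(M) = (φ_t + M0) / (φ_t + M), φ_t is the (pre-conditioned) gradient noise scale and M0 is the job's initial batch size. The names `throughput_fn`, `pgns`, and `init_batch_size` are illustrative assumptions, not Pollux's actual code:

```python
def efficiency(total_batch_size, init_batch_size, pgns):
    """Statistical efficiency EFFICIENCY_t(M) = (phi_t + M0) / (phi_t + M):
    training progress per example at total batch size M, relative to the
    job's initial batch size M0. It decays as M grows past the gradient
    noise scale phi_t, capturing the diminishing returns of large batches."""
    return (pgns + init_batch_size) / (pgns + total_batch_size)

def goodput(throughput_fn, num_replicas, local_batch_size, accum_steps,
            init_batch_size, pgns):
    """GOODPUT = THROUGHPUT(a, m, s) * EFFICIENCY_t(M), where the total
    batch size is M = num_replicas * local_batch_size * (accum_steps + 1)."""
    total = num_replicas * local_batch_size * (accum_steps + 1)
    return (throughput_fn(num_replicas, local_batch_size, accum_steps)
            * efficiency(total, init_batch_size, pgns))

# Example: compare candidate GPU allocations under a toy throughput model
# (examples/sec with a fixed per-step overhead); Pollux instead fits its
# throughput model online from each job's observed profiling data.
if __name__ == "__main__":
    tput = lambda n, m, s: n * m * (s + 1) / (0.1 + 0.02 * n)
    for n in (1, 2, 4, 8):
        print(n, goodput(tput, n, 128, 0, init_batch_size=128, pgns=512))
```

Because efficiency falls as the total batch size grows, goodput scales sublinearly with added replicas; this is exactly the trade-off Pollux weighs when deciding whether a job should receive more resources.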

In experiments with real DL jobs and with trace-driven simulations, Pollux reduces average job completion times by 37–50% relative to state-of-the-art DL schedulers, even when they are provided with ideal resource and training configurations for every job. Pollux promotes fairness among DL jobs competing for resources, based on a more meaningful measure of useful job progress, and reveals a new opportunity for reducing DL cost in cloud environments. Pollux is implemented and publicly available as part of an open-source project at https://github.com/petuum/adaptdl.
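
As a rough illustration of how a PyTorch training script becomes elastic under the open-source AdaptDL project, the sketch below is adapted from the repository's README; the API names (`init_process_group`, `AdaptiveDataParallel`, `AdaptiveDataLoader`, `autoscale_batch_size`, `remaining_epochs_until`) are recalled from its documentation and may have evolved, so treat this as an outline rather than definitive usage:

```python
# Sketch: wrapping a PyTorch training loop with AdaptDL so Pollux can add or
# remove replicas and adapt the batch size mid-training. Model and dataset
# are placeholders; verify all names against the current repository.
import torch
import adaptdl.torch

adaptdl.torch.init_process_group("nccl")   # elastic stand-in for
                                           # torch.distributed.init_process_group

model = torch.nn.Linear(10, 2)             # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model = adaptdl.torch.AdaptiveDataParallel(model, optimizer)

dataset = torch.utils.data.TensorDataset(
    torch.randn(1000, 10), torch.randint(0, 2, (1000,)))
# AdaptiveDataLoader lets the scheduler tune the per-replica batch size.
dataloader = adaptdl.torch.AdaptiveDataLoader(dataset, batch_size=32)
dataloader.autoscale_batch_size(1024)      # upper bound on the total batch size

# remaining_epochs_until(...) resumes at the right epoch after a rescale.
for epoch in adaptdl.torch.remaining_epochs_until(30):
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        loss.backward()
        optimizer.step()
```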

FULL PAPER: pdf