PARALLEL DATA LAB 

PDL Abstract

Machine Learning on Volatile Instances

IEEE Intl. Conf. on Computer Communications (INFOCOM). Held virtually (Toronto, Canada), July 6-9, 2020.

Xiaoxi Zhang, Jianyu Wang, Gauri Joshi, Carlee Joe-Wong

Carnegie Mellon University

http://www.pdl.cmu.edu/

Abstract—Due to the massive size of the neural network models and training datasets used in machine learning today, it is imperative to distribute stochastic gradient descent (SGD) by splitting up tasks such as gradient evaluation across multiple worker nodes. However, running distributed SGD can be prohibitively expensive because it may require specialized computing resources such as GPUs for extended periods of time. We propose cost-effective strategies that exploit volatile cloud instances, which are cheaper than standard instances but may be interrupted by higher-priority workloads. To the best of our knowledge, this work is the first to quantify how variations in the number of active worker nodes (as a result of preemption) affect SGD convergence and the time to train the model. By understanding these tradeoffs among instance preemption probability, model accuracy, and training time, we derive practical strategies for configuring distributed SGD jobs on volatile instances such as Amazon EC2 spot instances and other preemptible cloud instances. Experimental results show that our strategies achieve good training performance at substantially lower cost.
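To illustrate the setting the abstract describes, the sketch below simulates synchronous distributed SGD on a toy least-squares problem in which each volatile worker may be preempted in any given iteration, so the number of gradients averaged per step varies over time. This is only a minimal illustration of the preemption effect, not the paper's algorithm or strategies; the worker count, preemption probability, step size, and toy objective are all illustrative assumptions.

```python
# Minimal sketch (not the paper's method): synchronous distributed SGD where
# each of K volatile workers is independently preempted in an iteration with
# probability p_preempt, so the number of active workers whose gradients are
# averaged changes from step to step. All hyperparameters are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear regression with loss f(w) = (1/2n) * ||Xw - y||^2
n, d = 2000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

K = 8             # number of volatile worker nodes
p_preempt = 0.3   # per-iteration preemption probability of each worker
batch = 32        # local mini-batch size per active worker
lr = 0.05
w = np.zeros(d)

def local_gradient(w):
    """Stochastic gradient computed by one worker on a random mini-batch."""
    idx = rng.choice(n, size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch

for t in range(500):
    # Each worker survives this round with probability 1 - p_preempt.
    active = rng.random(K) >= p_preempt
    m = int(active.sum())
    if m == 0:
        continue  # all workers preempted: no update this round
    # The parameter server averages gradients from the m active workers only;
    # fewer active workers means a noisier aggregate gradient.
    grads = [local_gradient(w) for _ in range(m)]
    w -= lr * np.mean(grads, axis=0)

loss = 0.5 * np.mean((X @ w - y) ** 2)
print(f"final loss: {loss:.4f}, distance to w*: {np.linalg.norm(w - w_true):.4f}")
```

Running the sketch with larger values of p_preempt shows the qualitative tradeoff the paper quantifies: more frequent preemptions reduce the effective number of gradients averaged per step, slowing convergence even though the per-hour cost of the instances is lower.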

FULL PAPER: pdf