PARALLEL DATA LAB 

PDL Abstract

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling

NeurIPS Workshop on Federated Learning for Data Privacy and Confidentiality, Dec 13, 2019. Vancouver, BC, Canada.
Distinguished Student Paper Award.

Jianyu Wang†, Anit Sahu‡, Zhouyi Yang†, Gauri Joshi†, Soummya Kar†

†Carnegie Mellon University
‡Bosch Center for Artificial Intelligence

http://www.pdl.cmu.edu/

This paper studies the error-runtime trade-off typically encountered in decentralized training based on stochastic gradient descent (SGD) over a given network. While a denser (sparser) network topology results in faster (slower) error convergence in terms of iterations, it incurs more (less) communication time/delay per iteration. We propose MATCHA, an algorithm that achieves a win-win in this error-runtime trade-off for any arbitrary network topology. The main idea of MATCHA is to parallelize inter-node communication by decomposing the topology into matchings. To preserve fast error convergence, it identifies critical links and communicates over them more frequently, while saving communication time by using other links less often. Experiments on a suite of datasets and deep neural networks validate the theoretical analyses and demonstrate that MATCHA takes up to 5× less time than vanilla decentralized SGD to reach the same training loss.
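The core idea in the abstract, decomposing the base topology into matchings and activating each matching with some probability per iteration, can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the greedy edge-coloring decomposition, the toy graph, and the hard-coded activation probabilities are all assumptions made here for illustration (the full paper chooses the probabilities more carefully subject to a communication budget).

```python
import random
import networkx as nx

def matching_decomposition(graph):
    """Decompose a graph's edge set into matchings via greedy edge coloring.

    Each color class of a proper edge coloring is a matching (no two edges
    share a node), so the color classes partition the edges into matchings.
    """
    # Edge coloring of G = vertex coloring of its line graph.
    coloring = nx.greedy_color(nx.line_graph(graph))
    matchings = {}
    for edge, color in coloring.items():
        matchings.setdefault(color, []).append(edge)
    return list(matchings.values())

def sample_topology(matchings, probabilities):
    """Sample the communication topology for one iteration.

    Each matching is activated independently with its own probability, so the
    expected communication time per iteration scales with the sum of the
    probabilities rather than with the full topology.
    """
    active_edges = []
    for matching, p in zip(matchings, probabilities):
        if random.random() < p:
            active_edges.extend(matching)
    return active_edges

# Toy example: a 5-node ring plus one chord (hypothetical topology).
G = nx.cycle_graph(5)
G.add_edge(0, 2)
matchings = matching_decomposition(G)
# Illustrative activation probabilities; MATCHA instead optimizes them
# under a communication-time budget.
probs = [0.9] + [0.5] * (len(matchings) - 1)
for step in range(3):
    print(f"iteration {step}: active edges = {sample_topology(matchings, probs)}")
```

In this sketch, workers would average model parameters only over the edges returned by `sample_topology` at each iteration, which is how sparser per-iteration communication can coexist with the connectivity of the full topology in expectation.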

FULL REPORT: pdf