PARALLEL DATA LAB 

PDL Abstract

MATCHA: A Matching-Based Link Scheduling Strategy to Speed up Distributed Optimization

IEEE Transactions on Signal Processing, Oct 2022.

Jianyu Wang, Anit Sahu*, Gauri Joshi

Carnegie Mellon University
*Amazon Alexa AI

http://www.pdl.cmu.edu/

In this paper, we study the problem of distributed optimization using an arbitrary network of lightweight computing nodes, where each node can only send/receive information to/from its direct neighbors. Decentralized stochastic gradient descent (SGD) has been shown to be an effective method to train machine learning models in this setting. Although decentralized SGD has been extensively studied, most prior works focus on the error-versus-iterations convergence, without taking into account how the topology affects the communication delay per iteration. For example, a denser (sparser) network topology results in faster (slower) error convergence in terms of iterations, but it incurs more (less) communication time per iteration. We propose MATCHA, an algorithm that can achieve a win-win in this error-runtime trade-off for any arbitrary network topology. The main idea of MATCHA is to communicate more frequently over connectivity-critical links in order to ensure fast convergence, and at the same time minimize the communication delay per iteration by using other links less frequently. It strikes this balance by decomposing the topology into matchings and then optimizing the set of matchings that are activated in each iteration. Experiments on a suite of datasets and deep neural networks validate the theoretical analyses and demonstrate that MATCHA takes up to 5x less time than vanilla decentralized SGD to reach the same training loss. The idea of MATCHA can be applied to any decentralized algorithm that involves a communication step with neighbors in a graph.
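To make the matching-based scheduling idea concrete, the following is a minimal, illustrative Python sketch (not the authors' implementation): the helper names `matching_decomposition`, `sample_active_edges`, and `gossip_average` are hypothetical, the activation probabilities are set to a uniform placeholder rather than obtained from the optimization step mentioned in the abstract, and each node's local SGD update is elided.

```python
import random
import numpy as np

def matching_decomposition(edges):
    """Greedily split an edge list into matchings (sets of node-disjoint edges)."""
    matchings, remaining = [], list(edges)
    while remaining:
        used_nodes, matching, leftover = set(), [], []
        for (u, v) in remaining:
            if u not in used_nodes and v not in used_nodes:
                matching.append((u, v))
                used_nodes.update((u, v))
            else:
                leftover.append((u, v))
        matchings.append(matching)
        remaining = leftover
    return matchings

def sample_active_edges(matchings, probs, rng):
    """Independently activate each matching with its assigned probability."""
    active = []
    for matching, p in zip(matchings, probs):
        if rng.random() < p:
            active.extend(matching)
    return active

def gossip_average(models, active_edges):
    """One pairwise mixing step: the two endpoints of each activated edge average their models."""
    for (u, v) in active_edges:
        avg = (models[u] + models[v]) / 2.0
        models[u], models[v] = avg.copy(), avg.copy()
    return models

# Toy example: an 8-node ring topology; probabilities here are a uniform
# placeholder, whereas MATCHA optimizes them per matching under a communication budget.
rng = random.Random(0)
n = 8
edges = [(i, (i + 1) % n) for i in range(n)]
matchings = matching_decomposition(edges)
probs = [0.5] * len(matchings)
models = {i: np.full(4, float(i)) for i in range(n)}  # dummy node "models"

for step in range(100):
    # (each node's local SGD update on its own data would go here)
    active = sample_active_edges(matchings, probs, rng)
    gossip_average(models, active)
```

In this sketch, activating fewer matchings per iteration reduces the expected communication time, while assigning higher probabilities to connectivity-critical matchings preserves the mixing needed for fast error convergence, which is the trade-off MATCHA navigates.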

FULL TR: pdf