PARALLEL DATA LAB 

PDL Abstract

Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-Vector Multiplication

arXiv:1804.10331v2 [cs.DC] 30 Apr 2018.

Ankur Mallick, Malhar Chaudhari, Gauri Joshi

Carnegie Mellon University

http://www.pdl.cmu.edu/

Large-scale machine learning and data mining applications require computer systems to perform massive computations that need to be parallelized across multiple nodes, for example, massive matrix-vector and matrix-matrix multiplication. The presence of straggling nodes (computing nodes that unpredictably slow down or fail) is a major bottleneck in such distributed computations. We propose a rateless fountain coding strategy to alleviate the problem of stragglers in distributed matrix-vector multiplication. Our algorithm creates a stream of linear combinations of the m rows of the matrix and assigns them to different worker nodes, which then perform row-vector products with the encoded rows. The original matrix-vector product can be decoded as soon as slightly more than m row-vector products are collectively finished by the nodes. This strategy enables fast nodes to steal work from slow nodes, without requiring the master to perform any dynamic load balancing. Compared to recently proposed fixed-rate erasure coding strategies, which ignore partial work done by straggling nodes, rateless coding achieves significantly lower overall delay, as well as small computational and decoding overhead.
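To make the idea concrete, below is a minimal Python sketch of fountain-coded matrix-vector multiplication, not the authors' implementation. It assumes a standard Robust Soliton degree distribution and a peeling decoder; the function names (encode, peel_decode), the parameters c = 0.03 and delta = 0.5, and the 10% overhead factor are illustrative choices, not values taken from the paper.

```python
import numpy as np

def robust_soliton(m, c=0.03, delta=0.5):
    """Robust Soliton degree distribution over degrees 1..m (illustrative parameters)."""
    R = c * np.log(m / delta) * np.sqrt(m)
    rho = np.zeros(m + 1)
    rho[1] = 1.0 / m
    for d in range(2, m + 1):
        rho[d] = 1.0 / (d * (d - 1))
    tau = np.zeros(m + 1)
    K = int(round(m / R))
    for d in range(1, K):
        tau[d] = R / (d * m)
    tau[K] = R * np.log(R / delta) / m
    mu = rho + tau
    return mu[1:] / mu[1:].sum()

def encode(A, num_coded, rng):
    """Each coded row is the sum of a random subset of rows of A."""
    m = A.shape[0]
    probs = robust_soliton(m)
    rows, neighbors = [], []
    for _ in range(num_coded):
        d = rng.choice(np.arange(1, m + 1), p=probs)   # sample a degree
        idx = rng.choice(m, size=d, replace=False)     # pick d distinct rows
        neighbors.append(set(idx))
        rows.append(A[idx].sum(axis=0))
    return np.array(rows), neighbors

def peel_decode(coded_products, neighbors, m):
    """Recover entries of b = A x by iteratively peeling degree-1 coded symbols."""
    b = np.full(m, np.nan)
    vals = list(coded_products)
    nbrs = [set(s) for s in neighbors]
    progress = True
    while progress:
        progress = False
        for s in list(nbrs):
            if len(s) == 1:                      # degree-1 symbol decodes one entry
                i = next(iter(s))
                j = nbrs.index(s)
                b[i] = vals[j]
                for k, t in enumerate(nbrs):     # subtract it from all other symbols
                    if i in t:
                        vals[k] -= b[i]
                        t.discard(i)
                progress = True
    return b

rng = np.random.default_rng(0)
m, n = 200, 50
A, x = rng.standard_normal((m, n)), rng.standard_normal(n)

# Workers would each compute row-vector products with their assigned coded rows;
# here we simulate collecting roughly 10% more than m finished products.
coded_rows, nbrs = encode(A, int(1.1 * m), rng)
coded_products = coded_rows @ x
b_hat = peel_decode(coded_products, nbrs, m)

recovered = ~np.isnan(b_hat)
print(f"recovered {recovered.sum()}/{m} entries of Ax")
assert np.allclose(b_hat[recovered], (A @ x)[recovered])
```

Because the code is rateless, the master simply stops once enough coded row-vector products have arrived, regardless of which workers produced them; any entries not yet recovered by peeling would be obtained by waiting for a few more coded products.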

FULL TR: pdf