PARALLEL DATA LAB 

PDL Abstract

Exploiting Bounded Staleness to Speed Up Big Data Analytics

Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-14-101. February 2014.

Henggang Cui, James Cipar, Qirong Ho, Jin Kyu Kim, Seunghak Lee, Abhimanu Kumar,
Gregory R. Ganger, Phillip B. Gibbons*, Garth A. Gibson, Eric Xing

Carnegie Mellon University

*Intel Labs

http://www.pdl.cmu.edu/

Many modern machine learning (ML) algorithms are iterative, converging on a final solution via many iterations over the input data. This paper explores approaches to exploiting these algorithms' convergent nature to improve performance, by allowing parallel and distributed threads to use loose consistency models for shared algorithm state. Specifically, we focus on bounded staleness, in which each thread can see a view of the current intermediate solution that may be a limited number of iterations out-of-date. Allowing staleness reduces communication costs (batched updates and cached reads) and synchronization delays (less waiting for locks or straggling threads). One approach is to increase the number of iterations between barriers in the oft-used Bulk Synchronous Parallel (BSP) model of parallel execution, which mitigates these costs when all threads proceed at the same speed. A more flexible approach, called Stale Synchronous Parallel (SSP), avoids barriers and allows threads to be a bounded number of iterations ahead of the current slowest thread. Extensive experiments with ML algorithms for topic modeling, collaborative filtering, and PageRank show that both approaches significantly increase convergence speeds, behaving similarly when there are no stragglers, but SSP outperforms BSP in the presence of stragglers.
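The SSP progress rule described above can be illustrated with a small sketch. This is not the paper's implementation; it is a hypothetical Python example, assuming a shared per-worker iteration clock, in which a worker may begin iteration t only after the slowest worker has finished iteration t minus a slack bound (slack = 0 reduces to BSP barriers).

# Minimal sketch (hypothetical, not the paper's system) of the SSP condition:
# a worker blocks if it is more than `slack` iterations ahead of the slowest
# worker. Setting slack = 0 recovers BSP-style barrier behavior.

import threading

class SSPClock:
    def __init__(self, num_workers, slack):
        self.clocks = [0] * num_workers   # iterations completed per worker
        self.slack = slack
        self.cond = threading.Condition()

    def advance(self, worker_id):
        """Mark one iteration done, then wait until within the staleness bound."""
        with self.cond:
            self.clocks[worker_id] += 1
            self.cond.notify_all()
            # Block while this worker is more than `slack` iterations ahead
            # of the current slowest worker.
            while self.clocks[worker_id] > min(self.clocks) + self.slack:
                self.cond.wait()

def worker(clock, worker_id, num_iterations, update_model):
    for it in range(num_iterations):
        update_model(worker_id, it)   # compute on a possibly stale shared view
        clock.advance(worker_id)      # batched updates would be flushed here

if __name__ == "__main__":
    clock = SSPClock(num_workers=4, slack=2)
    threads = [threading.Thread(target=worker,
                                args=(clock, i, 10, lambda w, t: None))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()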

KEYWORDS: Big data infrastructure
