PARALLEL DATA LAB 

PDL Abstract

Cavs: An Efficient Runtime System for Dynamic Neural Networks

2018 USENIX Annual Technical Conference (USENIX ATC ’18). July 11–13, 2018 • Boston, MA.

Shizhen Xu*^, Hao Zhang*†, Graham Neubig*†, Wei Dai*†, Jin Kyu Kim*, Zhijie Deng‡, Qirong Ho†, Guangwen Yang‡, Eric P. Xing

* Carnegie Mellon University
^ Tsinghua University
† Petuum, Inc.
‡ Tsinghua University

http://www.pdl.cmu.edu/

Recent deep learning (DL) models are moving more and more to dynamic neural network (NN) architectures, where the NN structure changes for every data sample. However, existing DL programming models are inefficient in handling dynamic network architectures because of: (1) substantial overhead caused by repeatedly constructing and processing the dataflow graph for every example; (2) difficulties in batched execution of multiple samples; (3) inability to incorporate graph optimization techniques such as those used in static graphs. In this paper, we present “Cavs”, a runtime system that overcomes these bottlenecks and achieves efficient training and inference of dynamic NNs. Cavs represents a dynamic NN as a static vertex function F and a dynamic instance-specific graph G. It avoids the overhead of repeated graph construction by declaring and constructing F only once, and allows the use of static graph optimization techniques on the pre-defined operations in F. Cavs performs training and inference by scheduling the execution of F following the dependencies in G, hence naturally exposing batched execution opportunities across different samples. Experiments comparing Cavs to state-of-the-art frameworks for dynamic NNs (TensorFlow Fold, PyTorch, and DyNet) demonstrate the efficacy of our approach: Cavs achieves a near one order of magnitude speedup on training of dynamic NN architectures, and ablations verify the effectiveness of our proposed design and optimizations.
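
To make the vertex-function idea concrete, the sketch below illustrates (in plain NumPy, not the actual Cavs API) how a single vertex function can be declared once and then scheduled over per-sample dependency graphs, with all ready vertices across samples evaluated in one batched call. The function name `vertex_fn`, the toy cell, and the scheduler loop are illustrative assumptions, not code from the paper.

```python
import numpy as np

# Hypothetical vertex function F: combines a vertex's input feature with the
# summed states of its children (a Tree-LSTM-like cell reduced to one matmul).
D = 4
W = np.random.randn(2 * D, D)

def vertex_fn(x, child_sum):
    return np.tanh(np.concatenate([x, child_sum], axis=-1) @ W)

# One instance-specific graph G per sample: children[v] lists the children of
# vertex v; leaves have no children. Two small example trees:
graphs = [
    {0: [], 1: [], 2: [0, 1]},   # sample A: vertices 0 and 1 feed vertex 2
    {0: [], 1: [0]},             # sample B: a chain of two vertices
]
features = [np.random.randn(len(g), D) for g in graphs]

# Scheduler: repeatedly collect vertices whose children are all evaluated,
# then run F once on that "ready" set batched across samples.
states = [dict() for _ in graphs]
while any(len(s) < len(g) for s, g in zip(states, graphs)):
    batch_x, batch_c, slots = [], [], []
    for i, g in enumerate(graphs):
        for v, kids in g.items():
            if v not in states[i] and all(k in states[i] for k in kids):
                child_sum = (sum(states[i][k] for k in kids)
                             if kids else np.zeros(D))
                batch_x.append(features[i][v])
                batch_c.append(child_sum)
                slots.append((i, v))
    out = vertex_fn(np.stack(batch_x), np.stack(batch_c))  # one batched call
    for (i, v), h in zip(slots, out):
        states[i][v] = h

print([states[i][max(g)] for i, g in enumerate(graphs)])  # root states
```

Because F is fixed, its operations can be optimized ahead of time, and only the lightweight instance graphs change per sample; the dynamic batching shown in the loop is what exposes cross-sample parallelism without rebuilding a dataflow graph for each input.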

FULL PAPER: pdf