PDL ABSTRACT

Dynamic Stem-Sharing for Multi-Tenant Video Processing

SysML '18, February 15–16, 2018, Stanford, CA.

Angela Jiang, Christopher Canel, Daniel Wong, Michael Kaminsky*, Michael A. Kozuch*, Padmanabhan Pillai*, David G. Andersen, Gregory R. Ganger

Carnegie Mellon University
Pittsburgh, PA

*Intel Labs

http://www.pdl.cmu.edu/

Video cameras are ubiquitous, and their outputs are increasingly analyzed by sophisticated, online DNN inference-based applications. The ever-growing capabilities of video and image analysis techniques create new possibilities for what may be gleaned from any given video stream. Consequently, most raw video streams will be processed by multiple analysis pipelines. For example, a parking lot camera might be used by three different applications: reporting open parking spots, tracking each car’s parking duration for billing, and recording any fender benders.

In this paper, we focus on shared processing on edge devices; processing video near the camera addresses issues such as bandwidth, intermittent connectivity (e.g., in drones), and real-time requirements, but leads to resource limitations. Thus, optimal video application performance requires tuning to the resources available [13, 2, 14, 4, 7]. However, application developers may be unable to easily predict what resources will be available when the application is deployed, particularly in “multi-tenant” environments where the set of concurrently deployed applications may vary. Instead, individual application developers typically develop their models in isolation, assuming either infinite resources or a predetermined set of static resources. When a number of such individually-tailored models are run concurrently, resource competition forces the video stream to be analyzed at a lower frame rate, leading to unsatisfactory results for the running applications, as frames are dropped and events in those frames are missed.

The Mainstream video processing system enables efficient execution of multiple independently-developed and incrementally-deployed video analysis applications on a given video stream. Mainstream shares execution of concurrent DNNs, yet does not require the applications' DNNs to be trained collectively. Therefore, Mainstream provides collaborative execution even when development and training data are not centralized in one organization.
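The core idea of stem-sharing can be illustrated with a small sketch (assumptions, not Mainstream's actual implementation): several applications' DNNs begin from the same pre-trained base layers (the "stem"), so each frame's stem features can be computed once and reused by every application's task-specific layers. The names and NumPy stand-ins for the layers below are hypothetical.

```python
import numpy as np

# Hypothetical sketch of DNN stem-sharing. A shared "stem" (common
# pre-trained base layers, here a stand-in matmul + ReLU) runs ONCE per
# frame; each application's independently trained "head" then runs on
# the cached stem output. All weights/names here are illustrative.

rng = np.random.default_rng(0)
STEM_W = rng.standard_normal((64, 32))  # stand-in for frozen base layers

def shared_stem(frame):
    """Run the shared base layers once per frame (stand-in computation)."""
    return np.maximum(frame @ STEM_W, 0.0)  # ReLU feature vector

def make_head(num_classes):
    """Per-application specialized layers, trained in isolation."""
    head_w = rng.standard_normal((32, num_classes))
    return lambda features: features @ head_w

# Three independently developed applications watch one camera stream,
# e.g. the parking-lot example from the abstract.
apps = {
    "open_spots": make_head(2),
    "billing": make_head(10),
    "accidents": make_head(2),
}

frame = rng.standard_normal(64)   # stand-in for one video frame
features = shared_stem(frame)     # computed once, not once per app
results = {name: head(features) for name, head in apps.items()}
```

Because the stem dominates inference cost in typical classification DNNs, evaluating it once instead of once per application is what frees resources and lets the stream be analyzed at a higher frame rate.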

FULL PAPER: pdf

© 2019. Legal Info.
Last updated 17 September, 2018