PARALLEL DATA LAB 

PDL Abstract

Caching with Delayed Hits

SIGCOMM ’20, August 10–14, 2020, Virtual Event, NY, USA.

Nirav Atre, Justine Sherry, Weina Wang, Daniel S. Berger*

Carnegie Mellon University
*Microsoft Research

Caches are at the heart of latency-sensitive systems. In this paper, we identify a growing challenge for the design of latency-minimizing caches called delayed hits. Delayed hits occur at high throughput, when multiple requests to the same object queue up before an outstanding cache miss is resolved. This effect increases latencies beyond the predictions of traditional caching models and simulations; in fact, caching algorithms are designed as if delayed hits simply didn't exist. We show that traditional caching strategies, even so-called 'optimal' algorithms, can fail to minimize latency in the presence of delayed hits. We design a new, latency-optimal offline caching algorithm called BELATEDLY which reduces average latencies by up to 45% compared to the traditional, hit-rate-optimal Belady's algorithm. Using BELATEDLY as our guide, we show that incorporating an object's 'aggregate delay' into online caching heuristics can improve latencies for practical caching systems by up to 40%. We implement a prototype, Minimum-AggregateDelay (MAD), within a CDN caching node. Using a CDN production trace and backends deployed in different geographic locations, we show that MAD can reduce latencies by 12-18% depending on the backend RTTs.
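To make the delayed-hit effect concrete, below is a minimal Python sketch (our own illustration for this abstract page, not the paper's implementation). It models a single initially uncached object with a fixed miss latency, and shows that requests arriving while the first miss is still outstanding wait for the remainder of the fetch rather than experiencing zero-latency hits. The function names, the single-object setup, and the example numbers are assumptions made purely for illustration; the summed quantity is one plausible reading of an object's 'aggregate delay'.

```python
# Toy illustration of delayed hits (a sketch, not the paper's code).
# Assumptions: one object, a fixed miss latency, and arrival times
# expressed in the same time units as that latency.

def request_latencies(arrival_times, miss_latency):
    """Latency of each request to one object that starts uncached.

    Classic caching models count every request after the first as a
    zero-latency hit. With delayed hits, requests that arrive while the
    first miss is still outstanding must wait for the fetch to finish.
    """
    latencies = []
    fetch_done = None  # time at which the outstanding fetch completes
    for t in arrival_times:
        if fetch_done is None:            # first request: a true miss
            fetch_done = t + miss_latency
            latencies.append(float(miss_latency))
        elif t < fetch_done:              # delayed hit: wait for the fetch
            latencies.append(fetch_done - t)
        else:                             # true hit: object is now cached
            latencies.append(0.0)
    return latencies


def aggregate_delay(arrival_times, miss_latency):
    """Total latency these requests suffer when the object starts uncached.

    An aggregate-delay-aware heuristic in the spirit of MAD (details
    differ in the paper) would rank objects by a quantity like this
    rather than by hit count alone.
    """
    return sum(request_latencies(arrival_times, miss_latency))


if __name__ == "__main__":
    # Four requests arrive within one 100-unit fetch, one arrives after it.
    arrivals = [0, 10, 25, 60, 150]
    print(request_latencies(arrivals, 100))  # [100.0, 90.0, 75.0, 40.0, 0.0]
    print(aggregate_delay(arrivals, 100))    # 305.0 -> far more than one miss
```

In this toy example, a hit-rate model would charge the object a single 100-unit miss, while the delayed-hit accounting charges it 305 units, which is why ignoring delayed hits can mislead a latency-minimizing eviction policy.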

FULL PAPER: pdf / talk video