PARALLEL DATA LAB 

PDL Abstract

Saving Cash by Using Less Cache

4th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 2012). June 12-13, 2012, Boston, MA.

Timothy Zhu, Anshul Gandhi, Mor Harchol-Balter, Michael A. Kozuch*

Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213

* Intel Labs

http://www.pdl.cmu.edu/

Everyone loves a large caching tier in their multi-tier cloud-based web service because it both alleviates database load and provides lower request latencies. Even when load drops severely, administrators are reluctant to scale down their caching tier. This paper makes the case that (i) scaling down the caching tier is viable with respect to performance, and (ii) the savings are potentially huge; e.g., a 4x drop in load can result in a 90% savings in caching tier size.
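To get a feel for why the savings can outpace the load drop, the following back-of-the-envelope Python sketch (not taken from the paper) assumes Zipf-distributed item popularity, a cache holding the most popular items, and a database that meets its latency target as long as cache-miss throughput stays below a fixed capacity; the constants (N_ITEMS, DB_CAPACITY, the 10,000 req/s peak load) are purely illustrative assumptions. Because the hit-rate curve is concave, the cache needed at one quarter of the load comes out to a small fraction of the cache needed at peak.

# Hypothetical back-of-the-envelope sketch (not the paper's model):
# assume Zipf(1) item popularity, a perfect-LFU cache holding the c most
# popular of N items, and a database that meets its latency target as long
# as the cache-miss throughput stays below DB_CAPACITY requests/sec.

import math

N_ITEMS     = 1_000_000   # assumed catalog size
DB_CAPACITY = 1_000       # assumed sustainable miss rate (req/s) at the database

def hit_rate(cache_items: int) -> float:
    """Hit rate of a perfect-LFU cache under Zipf(1) popularity:
    H(c) / H(N), where H is the harmonic number."""
    harmonic = lambda n: math.log(n) + 0.5772156649  # Euler-Mascheroni approximation
    return harmonic(cache_items) / harmonic(N_ITEMS)

def required_cache(load_rps: float) -> int:
    """Smallest cache (in items) keeping miss throughput <= DB_CAPACITY."""
    lo, hi = 1, N_ITEMS
    while lo < hi:
        mid = (lo + hi) // 2
        if load_rps * (1 - hit_rate(mid)) <= DB_CAPACITY:
            hi = mid          # mid is big enough; try smaller
        else:
            lo = mid + 1      # too many misses; need a bigger cache
    return lo

full = required_cache(10_000)   # assumed peak load: 10,000 req/s
low  = required_cache(2_500)    # the same workload after a 4x load drop
print(f"cache at peak load : {full:>8} items")
print(f"cache at 1/4 load  : {low:>8} items ({100 * (1 - low / full):.0f}% smaller)")

Under these assumed numbers the required cache shrinks by well over 90%, far more than the 4x load drop alone would suggest; the actual model and experiments are in the paper.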

FULL PAPER: pdf