in Proceedings of USENIX Annual Technical Conference (USENIX 2002), pp.
161-175, 10-15 June 2002, Monterey, CA. Supersedes CMU SCS Tech. Report
CMU-CS-02-186, which supersedes CMU-CS-00-157, originally published in
Theodore M. Wong, John Wilkes*
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
*Hewlett-Packard Laboratories, Palo Alto, CA.
Modern high-end disk arrays often have several gigabytes of cache RAM. Unfortunately, most array caches use management policies that duplicate the same data blocks at both the client and array levels of the cache hierarchy: they are inclusive. Thus, the aggregate cache behaves as if it were only as big as the larger of the client and array caches, instead of as large as the sum of the two. Inclusiveness is wasteful: cache RAM is expensive.
We explore the benefits of a simple scheme to achieve exclusive caching, in which a data block is cached at either a client or the disk array, but not both. Exclusiveness helps to create the effect of a single, large unified cache. We introduce a DEMOTE operation to transfer data ejected from the client to the array, and explore its effectiveness with simulation studies. We quantify the benefits and overheads of demotions across both synthetic and real-life workloads. The results show that we can obtain useful, sometimes substantial, speedups.
During our investigation, we also developed some new cache-insertion algorithms that show promise for multi-client systems, and we report on some of their properties.
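The DEMOTE idea described above can be illustrated with a minimal simulation. The following sketch (our own illustration, not the paper's simulator; the class and function names are hypothetical) models a client LRU cache and an array LRU cache: on a client miss, the array's copy of the block is dropped, and the block the client ejects to make room is demoted into the array, so each block lives at exactly one level.

```python
from collections import OrderedDict

class LRUCache:
    """Simple LRU cache of fixed capacity, holding block IDs."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # oldest first

    def __contains__(self, block):
        return block in self.blocks

    def touch(self, block):
        self.blocks.move_to_end(block)  # mark as most recently used

    def insert(self, block):
        """Insert a block; return the LRU victim if the cache overflows."""
        self.blocks[block] = True
        self.blocks.move_to_end(block)
        if len(self.blocks) > self.capacity:
            victim, _ = self.blocks.popitem(last=False)
            return victim
        return None

    def discard(self, block):
        self.blocks.pop(block, None)

def client_read(client, array, block):
    """Handle one client read under an exclusive (DEMOTE) policy."""
    if block in client:
        client.touch(block)              # client hit
        return "client-hit"
    # Client miss: fetch from the array (or disk) and drop the array's
    # copy, so the block is cached at exactly one level.
    hit = block in array
    array.discard(block)
    victim = client.insert(block)
    if victim is not None:
        array.insert(victim)             # DEMOTE the ejected block
    return "array-hit" if hit else "miss"
```

For example, with two-block caches and the reference string 1, 2, 3, 1, 4, the re-read of block 1 is served from the array (it was demoted when the client ejected it), and at every step the client and array hold disjoint sets of blocks, so the pair behaves like one unified four-block cache.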
KEYWORDS: hierarchical storage cache, exclusive caching