Zoned Storage for Future Data Centers

Zoned storage devices represent a paradigm shift in how we view storage today, for both hard disk and solid state drives. A zoned storage device divides the LBA space into zones (i.e., large, contiguous address ranges) that can only be written sequentially. This breaks long-standing assumptions associated with the traditional block interface, which has been the undisputed storage interface since the early days of UNIX. Zoned storage devices are governed by a new "zone interface": Zoned Namespaces (ZNS) in NVMe SSDs and ZBC/ZAC in Shingled Magnetic Recording HDDs. This interface defines a novel division of responsibilities between storage software and device firmware. We are exploring methods for adapting and redesigning storage software, such as key-value stores, file systems, and distributed storage systems, to make them compatible with zoned devices with the least amount of effort.
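The sequential-write constraint at the heart of the zone interface can be illustrated with a small sketch (hypothetical names, not any real device API): each zone tracks a write pointer, accepts writes only at that pointer, and can only be reclaimed wholesale by a reset.

```python
class Zone:
    """Minimal model of a zone: a contiguous LBA range that must be
    written sequentially and reclaimed (reset) as a unit."""

    def __init__(self, start_lba: int, capacity: int):
        self.start_lba = start_lba
        self.capacity = capacity
        self.write_pointer = start_lba  # next LBA that may be written

    def write(self, lba: int, nblocks: int) -> None:
        # The zone interface rejects any write that does not start at
        # the current write pointer: no in-place updates, no gaps.
        if lba != self.write_pointer:
            raise ValueError("write must start at the zone's write pointer")
        if lba + nblocks > self.start_lba + self.capacity:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += nblocks

    def reset(self) -> None:
        # A reset invalidates all data in the zone and rewinds the
        # write pointer, making the whole zone writable again.
        self.write_pointer = self.start_lba


zone = Zone(start_lba=0, capacity=1024)
zone.write(0, 128)        # sequential write at the write pointer: accepted
try:
    zone.write(0, 1)      # in-place overwrite: rejected by the interface
except ValueError:
    pass
zone.reset()              # the only way to make LBA 0 writable again
```

Note how the model has no per-block remapping table: because writes are sequential within a zone, the device needs to track only one write pointer per zone, which is what lets ZNS SSDs and SMR HDDs share this interface.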

Despite significant technological differences, emerging data center solid state drives (SSDs) and hard disk drives (HDDs) both fit the zone interface. Through ZNS, SSDs can significantly narrow the scope of the Flash Translation Layer (FTL), which has long been the bane of SSDs' existence, introducing performance unpredictability in the form of long tail latencies and significant write amplification. ZNS helps eliminate page-based remapping and garbage collection from the FTL by matching zones to groups of underlying NAND erase blocks. Virtually eliminating device-side garbage collection greatly reduces write amplification and the need for capacity overprovisioning, which in turn reduces device cost. HDDs, on the other hand, now boost their capacity using Shingled Magnetic Recording (SMR), which divides the drive surface into groups of tracks that must be written sequentially. ZBC/ZAC exposes these groups of tracks to the host as individual zones, avoiding the need for a translation layer akin to the SSD FTL. Because of the general applicability of the zone interface, we believe that many fundamental aspects of software for zoned storage will work across both types of devices.
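With device-side garbage collection gone, relocating live data becomes the host's responsibility. A rough sketch of host-managed cleaning (hypothetical structures, not any particular system's code): copy the still-live blocks out of a victim zone into the currently open zone, then reclaim the victim with a single zone reset.

```python
def clean_zone(victim: list, live: set, open_zone: list) -> None:
    """Host-managed cleaning: relocate live blocks from a victim zone,
    then reclaim the victim with a single zone reset.

    victim    -- blocks currently stored in the zone to be reclaimed
    live      -- block IDs still referenced by the host
    open_zone -- the zone currently accepting sequential appends
    """
    for block in victim:
        if block in live:
            open_zone.append(block)  # sequential append to the open zone
    victim.clear()                   # zone reset: all data invalidated


# Blocks b0..b3 were written to the victim zone; only b1 and b3 survive.
victim, open_zone = ["b0", "b1", "b2", "b3"], []
clean_zone(victim, live={"b1", "b3"}, open_zone=open_zone)
# open_zone now holds the live blocks; the victim zone is empty and reusable.
```

Because the host knows which blocks are live (e.g., an LSM-tree knows which SSTables are obsolete), it can clean at moments and granularities of its choosing, rather than suffering unpredictable background activity inside the device.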




George Amvrosiadis, Greg Ganger, Garth Gibson, Abutalib Aghayev, Saurabh Kadekodi

Western Digital


  • File Systems Unfit as Distributed Storage Backends: Lessons from 10 Years of Ceph Evolution. Abutalib Aghayev, Sage Weil, Michael Kuchnik, Mark Nelson, Gregory R. Ganger, George Amvrosiadis. SOSP ’19, October 27–30, 2019, Huntsville, ON, Canada.

  • Reconciling LSM-Trees with Modern Hard Drives using BlueFS. Abutalib Aghayev, Sage Weil, Greg Ganger, George Amvrosiadis. Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-19-102, April 2019.

  • Evolving Ext4 for Shingled Disks. Abutalib Aghayev, Theodore Ts’o, Garth Gibson, Peter Desnoyers. 15th USENIX Conference on File and Storage Technologies (FAST '17), Feb 27–Mar 2, 2017. Santa Clara, CA.

  • Caveat-Scriptor: Write Anywhere Shingled Disks. Saurabh Kadekodi, Swapnil Pimpale, Garth Gibson. Proc. of the Seventh USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage '15), Santa Clara, CA, July 2015. Expanded paper available: Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-15-101.

  • Shingled Magnetic Recording: Areal Density Increase Requires New Data Management. Tim Feldman, Garth Gibson. USENIX ;login:, vol. 38, no. 3, June 2013.

  • Shingled Magnetic Recording for Big Data Applications. Anand Suresh, Garth Gibson, Greg Ganger. Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-12-105. May 2012.

  • Principles of Operation for Shingled Disk Devices. Garth Gibson, Greg Ganger. Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-11-107. April 2011.

  • Directions for Shingled-Write and Two-Dimensional Magnetic Recording System Architectures: Synergies with Solid-State Disks. Garth Gibson, Milo Polte. Carnegie Mellon University Parallel Data Lab Technical Report CMU-PDL-09-104. May 2009.


We thank the members and companies of the PDL Consortium: Amazon, Google, Hitachi Ltd., Honda, Intel Corporation, IBM, Meta, Microsoft Research, Oracle Corporation, Pure Storage, Salesforce, Samsung Semiconductor Inc., Two Sigma, and Western Digital for their interest, insights, feedback, and support.