High-performance Low-cost NVM Storage Redundancy

Contact: Rajat Kateja, Greg Ganger

Non-volatile memory (NVM) changes the way performance-sensitive applications interact with persistent data. NVM storage combines DRAM-like access latencies and granularities with disk-like durability. DIMM form-factor NVM resides on the memory bus; its data is load/store accessible and moves between NVM and CPU caches at cache-line granularity. Applications can access NVM storage via conventional I/O interfaces or via direct access (DAX). Conventional I/O interfaces, such as the file system, allow applications to use NVM without modification. With DAX, applications map NVM data into their address space and access it directly with load and store instructions, without interposed system-software overheads.
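The DAX access model can be illustrated with a memory-mapped file. This is only a sketch: it uses a regular temporary file via Python's `mmap`, whereas real DAX maps a file from a DAX-mounted NVM filesystem (with `MAP_SYNC` on Linux) so that loads and stores reach NVM media directly.

```python
import mmap
import os
import tempfile

# Sketch of the DAX access model using an ordinary temp file as a stand-in
# for a file on a DAX-mounted NVM filesystem.
PAGE = 4096

fd, path = tempfile.mkstemp()
os.ftruncate(fd, PAGE)

with mmap.mmap(fd, PAGE, mmap.MAP_SHARED,
               mmap.PROT_READ | mmap.PROT_WRITE) as m:
    m[0:5] = b"hello"       # a "store": no write() syscall on the data path
    data = bytes(m[0:5])    # a "load" straight from the mapping
    m.flush()               # persistence point (analogous to cache flush + fence)

os.close(fd)
os.remove(path)
```

Note that once the mapping is established, the kernel is not involved in individual data accesses, which is exactly why interposing redundancy maintenance is hard.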

Production storage demands more than just non-volatility and performance. A number of features that bolster data integrity are also expected. Whereas some features, e.g., background scrubbing, extend to NVM storage trivially, conventional redundancy mechanisms like page checksums and cross-page/cross-device parity fit poorly. Production storage systems maintain end-to-end page checksums, over and above on-device error correcting codes (ECCs), to protect data from loss/corruption due to firmware bugs such as lost writes and misdirected reads/writes. They also maintain cross-page or cross-device parity to recover data in case of a loss or corruption.

Maintaining page checksums and cross-page parity efficiently is particularly challenging with the DAX interface. The lack of interposed system software makes it challenging to identify the data reads and updates that should trigger redundancy verification and updates, respectively. The fine access granularity (e.g., 64-byte cache lines) is incongruent with the large page size over which checksums are maintained for space efficiency (e.g., 4KB pages); this incongruence magnifies the overhead of checksum updates and verification.
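The granularity mismatch can be made concrete with a small sketch (the 4KB page, 64-byte line, and CRC32 checksum here are illustrative choices, not the system's actual parameters): a single cache-line-sized store forces the checksum to be recomputed over the entire page.

```python
import zlib

PAGE_SIZE = 4096   # checksum granularity
LINE_SIZE = 64     # CPU store granularity

page = bytearray(PAGE_SIZE)

def store_line(page, line_idx, data):
    """Apply one cache-line-sized store, then refresh the page checksum.

    Although only 64 bytes changed, the checksum must be recomputed over
    all 4096 bytes -- a 64x amplification for this write.
    """
    assert len(data) == LINE_SIZE
    off = line_idx * LINE_SIZE
    page[off:off + LINE_SIZE] = data
    return zlib.crc32(bytes(page))   # touches the whole page

before = zlib.crc32(bytes(page))
after = store_line(page, 3, b"\xab" * LINE_SIZE)
amplification = PAGE_SIZE // LINE_SIZE   # bytes checksummed per byte-range written
```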

We propose a lazy checksum and parity maintenance scheme, ANON, that creates a trade-off between time to coverage and performance. ANON’s periodic background thread updates redundancy for pages that an application wrote to in the last period. ANON repurposes page table entry dirty bits to identify such pages with stale redundancy. By delaying the redundancy updates, ANON amortizes the overhead of updating checksums and parity.
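A toy simulation sketches ANON's lazy scheme under simplifying assumptions: plain Python state stands in for NVM pages and page-table-entry dirty bits, and a single function call stands in for the periodic background thread.

```python
import zlib

PAGE_SIZE = 4096

class LazyRedundancy:
    """Toy model of ANON-style deferred checksum maintenance (illustrative only)."""

    def __init__(self, num_pages):
        self.pages = [bytearray(PAGE_SIZE) for _ in range(num_pages)]
        self.checksums = [zlib.crc32(bytes(p)) for p in self.pages]
        self.dirty = set()   # stands in for PTE dirty bits

    def store(self, page_no, off, data):
        # Foreground write path: no checksum work, just mark the page dirty.
        self.pages[page_no][off:off + len(data)] = data
        self.dirty.add(page_no)

    def background_pass(self):
        # Periodic pass: refresh redundancy only for pages written since the
        # last pass, then clear their dirty bits.
        for page_no in self.dirty:
            self.checksums[page_no] = zlib.crc32(bytes(self.pages[page_no]))
        updated = len(self.dirty)
        self.dirty.clear()
        return updated

r = LazyRedundancy(8)
r.store(2, 0, b"x" * 64)
r.store(2, 64, b"y" * 64)   # a second write to page 2 adds no extra work
r.store(5, 0, b"z" * 64)
updated = r.background_pass()
```

The amortization is visible in the driver code: three foreground stores result in only two checksum computations, and pages untouched in the period cost nothing.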

For applications that demand stronger data-integrity guarantees than ANON provides, such as in-line redundancy maintenance, we propose a hardware controller, Tvarak, interposed in the data path. Tvarak resides with the last-level cache controllers and provides efficient in-line redundancy maintenance. To reduce the impact of redundancy maintenance on NVM bandwidth, Tvarak caches checksums and parity and stores data-diffs (used for incremental checksum and parity updates) in the last-level cache. Tvarak has low dedicated resource requirements (e.g., a small on-Tvarak cache) and benefits from configurable fractions of shared system resources (e.g., the last-level cache).
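The parity half of the diff-based technique follows from XOR algebra: because parity is the XOR of all blocks in a stripe, applying the diff of the old and new data (old XOR new) to the old parity yields the new parity without re-reading the other blocks. A sketch, with block sizes shrunk for illustration:

```python
def xor(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

LINE = 64
# A 4-block parity stripe (sizes shrunk for illustration).
blocks = [bytes([i]) * LINE for i in range(4)]
parity = bytes(LINE)
for b in blocks:
    parity = xor(parity, b)

# Incremental update: when block 1 changes, the data-diff (old XOR new)
# alone updates the parity -- no need to re-read blocks 0, 2, and 3.
old = blocks[1]
new = bytes([0x7F]) * LINE
diff = xor(old, new)
parity = xor(parity, diff)
blocks[1] = new

# Sanity check: the updated parity still reconstructs a lost block, e.g. block 2.
recovered = parity
for i, b in enumerate(blocks):
    if i != 2:
        recovered = xor(recovered, b)
```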

Our ongoing work aims to reduce the cost of redundant NVM storage by storing parity on SSDs. We envision logging parity writes in a small NVM region, to avoid impacting application write performance, and streaming this log to an SSD in large sequential chunks in the background.
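The envisioned log-then-stream scheme can be sketched as follows. This is a hypothetical model: a Python list stands in for the small NVM log region, a list of chunk writes stands in for the SSD, and the chunk threshold is an arbitrary illustrative parameter.

```python
class ParityLog:
    """Hypothetical sketch of the envisioned NVM parity log.

    Parity writes append to a small log (standing in for an NVM region);
    once a large chunk accumulates, it is streamed to the SSD in a single
    sequential write, keeping the SSD access pattern sequential.
    """

    def __init__(self, chunk_entries=128):
        self.log = []            # stands in for the small NVM region
        self.chunk_entries = chunk_entries
        self.ssd_writes = []     # each entry models one large sequential write

    def append(self, parity_update):
        self.log.append(parity_update)   # fast foreground path
        if len(self.log) >= self.chunk_entries:
            self.flush()

    def flush(self):
        # Background path: drain the log to the SSD in one sequential chunk.
        if self.log:
            self.ssd_writes.append(list(self.log))
            self.log.clear()

plog = ParityLog(chunk_entries=4)
for i in range(10):
    plog.append((i, b"\x00" * 64))
plog.flush()   # drain the partially filled final chunk
```

Ten small parity updates reach the SSD as just three sequential writes, which is the point: the NVM log absorbs small random writes and the SSD sees only large streams.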



Greg Ganger
Andy Pavlo
Nathan Beckmann

Rajat Kateja
We thank the members and companies of the PDL Consortium: Alibaba Group, Amazon, Datrium, Facebook, Google, Hewlett Packard Enterprise, Hitachi Ltd., Intel Corporation, IBM, Micron, Microsoft Research, NetApp, Inc., Oracle Corporation, Salesforce, Samsung Semiconductor Inc., Seagate Technology, and Two Sigma for their interest, insights, feedback, and support.




© 2019.
Last updated 10 September, 2019