PDL CONSORTIUM SPEAKER SERIES

A ONE-AFTERNOON SERIES OF SPECIAL SDI TALKS BY
PDL CONSORTIUM VISITORS

DATE: Thursday, May 10, 2012
TIME: 12:00 pm to 5:00 pm
PLACE: CIC 2101



SPEAKERS:
12:00 - 1:30 pm Brian Hirano, Oracle
Nisha Talagala, Fusion-io
1:45 - 3:15 pm James Nunez, Los Alamos National Lab
Wojciech Golab, HP Labs
3:30 - 5:00 pm Tom Ambrose, Emulex
Jiri Schindler, NetApp



SPEAKER: Brian Hirano, Corporate Architecture, Oracle
Advocating Efficient Scheduling Support in Hardware for Software
Recent improvements in storage and memory technologies, interconnect technologies, and systems integration have called into question traditional assumptions about how hardware, operating systems, and applications interact. This talk examines one of these areas: whether there is a need for processor architecture changes to allow fine-grain control of scheduling, what this interface may look like, and where this construct may be useful. Though this is not a new processor architecture idea, the topic is of interest to multiple Oracle development groups, and the talk explores the possible ramifications for both hardware and software. [SLIDES - PDF]
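As a rough illustration of the software side of the question (not an example from the talk), the sketch below does fine-grained scheduling purely in user space with POSIX ucontext: a scheduler hands the CPU to a worker and gets it back on an explicit yield. Hardware scheduling support of the kind the talk weighs would aim to cut the cost of exactly this sort of handoff; the scheduler/worker structure here is a hypothetical minimal case.

    /* User-level context switching with POSIX ucontext: all scheduling
     * decisions and state transfers happen in software. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;

    static void worker(void) {
        printf("worker: running\n");
        swapcontext(&worker_ctx, &main_ctx);   /* yield back to the scheduler */
        printf("worker: resumed\n");
    }

    int main(void) {
        static char stack[64 * 1024];

        getcontext(&worker_ctx);
        worker_ctx.uc_stack.ss_sp = stack;
        worker_ctx.uc_stack.ss_size = sizeof stack;
        worker_ctx.uc_link = &main_ctx;        /* where to go when worker returns */
        makecontext(&worker_ctx, worker, 0);

        swapcontext(&main_ctx, &worker_ctx);   /* dispatch the worker */
        printf("scheduler: worker yielded\n");
        swapcontext(&main_ctx, &worker_ctx);   /* resume the worker */
        printf("scheduler: worker finished\n");
        return 0;
    }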

BIO: Twenty-four years after studying applied mathematics at MIT, Oracle is still my first job. I have spent most of my Oracle life in Oracle RDBMS development, working on the Virtual OS team. I've owned many components of the Oracle RDBMS, including locks (latches), memory management, and exception handling, and have designed or helped design various features up and down the Oracle RDBMS stack. While in RDBMS development, I also worked with hardware and OS vendors on Oracle performance and on specifying future hardware and OS features beneficial to Oracle features and performance. I am currently deciding what to do next at Oracle, but I remain involved in design discussions on hardware and software interactions with developers from the Oracle database product teams, Java development, and Linux development.



SPEAKER: Nisha Talagala, Fusion-io
Implications of Non Volatile Memory on Software Architectures
Flash-based non-volatile memory is revolutionizing data center architectures, improving application performance by bridging the gap between DRAM and disk. Future non-volatile memories promise performance even closer to DRAM. While flash adoption in industry started as a disk replacement, the past several years have seen data center architectures change to take advantage of flash as a new memory tier in both servers and storage.

This talk covers the implications of non-volatile memory on software. We describe the stresses that non-volatile memory places on existing application and OS designs, and illustrate optimizations to exploit flash as a new memory tier. Until the introduction of flash, there had been no compelling reason to change the existing operating system storage stack. We will describe the technologies in the upcoming Fusion-io Software Developer Kit (ioMemory SDK) that allow applications to leverage the native capabilities of non-volatile memory as both an I/O device and a memory device. The technologies described include new I/O-based APIs and libraries that leverage the ioMemory Virtual Storage Layer, as well as features for extending DRAM into flash for cost and power reduction. Finally, we describe Auto-Commit-Memory, a new persistent memory type that will allow applications to combine the benefits of persistence with programming semantics and performance levels normally associated with DRAM. [SLIDES - PDF]
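For a flavor of the programming model (a minimal POSIX sketch, not the ioMemory SDK API the talk introduces), the code below treats flash-backed state as memory: updates are ordinary stores, and an explicit commit makes them durable. An auto-commit memory would make the commit implicit; the file path here is a stand-in for a flash device.

    /* Persistence-as-memory with plain POSIX calls: map a flash-backed
     * file, mutate it with ordinary stores, commit with msync. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/pmem_demo.bin";   /* stand-in for flash */
        const size_t len = 4096;

        int fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, (off_t)len) != 0) { perror("open"); return 1; }

        char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ordinary store: no read()/write() system call on the data path. */
        strcpy(mem, "state that survives a crash once committed");

        /* Explicit commit point: flush the dirty range to the backing store. */
        if (msync(mem, len, MS_SYNC) != 0) { perror("msync"); return 1; }

        munmap(mem, len);
        close(fd);
        return 0;
    }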

BIO: Nisha Talagala is Lead Architect at Fusion-io, where she works on innovation in non-volatile memory technologies and applications. Nisha has more than 10 years of expertise in software development, distributed systems, storage, I/O solutions, and non-volatile memory. She was technology lead for server flash at Intel, where she led server platform non-volatile memory technology development and partnerships. Prior to Intel, Nisha was the CTO of Gear6, where she developed clustered computing caches for high-performance I/O environments. Nisha also served at Sun Microsystems, where she developed storage and I/O solutions and worked on file systems. Nisha earned her PhD at UC Berkeley, where she did research on clusters and distributed storage. Nisha holds more than 30 patents in distributed systems, networking, storage, performance, and non-volatile memory.



SPEAKER: James Nunez, Los Alamos National Lab
Introducing The Multi-Dimensional Hashed Indexed Metadata Middleware System
The Multi-Dimensional Hashed Indexed Metadata/Middleware (MDHIM) System is a research prototype infrastructure capable of managing massive amounts of index information representing even larger amounts of scientific data, enabling data exploration at enormous scale. MDHIM is a scalable parallel multi-dimensional key/value store designed to run on, and exploit the characteristics of, the world's largest supercomputers. This talk explores MDHIM's architecture and how it leverages existing data stores. [SLIDES - PDF]
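The abstract leaves MDHIM's interface unspecified, but the core routing idea of a parallel, range-partitioned key/value store is easy to sketch: each server owns a contiguous slice of a linearized key space, so any client can compute a key's owner locally, with no central directory. The bit-interleaving linearization and all names below are illustrative assumptions, not MDHIM's actual API.

    /* Client-side routing in a hypothetical range-partitioned KV store. */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SERVERS      16
    #define KEYS_PER_SERVER  (UINT64_MAX / NUM_SERVERS)

    /* Linearize a 2-D key (say, timestep and particle id) by interleaving
     * bits, so nearby multi-dimensional keys land on nearby servers. */
    static uint64_t interleave2(uint32_t x, uint32_t y) {
        uint64_t z = 0;
        for (int i = 0; i < 32; i++) {
            z |= ((uint64_t)((x >> i) & 1u)) << (2 * i);
            z |= ((uint64_t)((y >> i) & 1u)) << (2 * i + 1);
        }
        return z;
    }

    /* Each server owns one contiguous range of the linearized key space. */
    static int owner_of(uint64_t key) {
        int s = (int)(key / KEYS_PER_SERVER);
        return s < NUM_SERVERS ? s : NUM_SERVERS - 1;
    }

    int main(void) {
        uint64_t k = interleave2(42 /* timestep */, 7 /* particle */);
        printf("key %llu -> range server %d\n",
               (unsigned long long)k, owner_of(k));
        return 0;
    }

Because ownership is computable, puts and gets go directly to the right server, and range queries touch only the servers whose slices overlap the query.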

BIO: James is currently a member of the Ultrascale Systems Research Center at Los Alamos National Lab, exploring I/O, file systems, and storage for exascale systems and beyond. As part of the High Performance Systems Integration Group, James has been involved with the ASC File System Path Forward effort to create a highly scalable global parallel file system based on secure object device technology, organized the High End Computing File Systems and I/O Workshop for several years, and works on evaluating and benchmarking HPC file systems and I/O middleware solutions and on understanding the I/O interface and the application of new storage innovations to real science applications. James was awarded a Master of Science in Computer Science in 1998 and a Master of Science in Mathematics in 1996 from the University of Michigan, Ann Arbor, and a Bachelor of Science in Mathematics in 1994 from the University of California, Irvine.



SPEAKER: Wojciech Golab, HP Labs
Minuet: A Scalable Distributed Multiversion B-Tree
Data management systems have traditionally been designed to support either long-running analytics queries or short-lived transactions, but an increasing number of applications need both. For example, online gaming, socio-mobile apps, and e-commerce sites need to not only maintain operational state, but also analyze that data quickly to make predictions and recommendations that improve user experience. This talk presents Minuet, a distributed, main-memory B-tree that supports both transactions and copy-on-write snapshots for in-situ analytics. Minuet uses main-memory storage to enable low-latency transactional operations as well as analytics queries without compromising transaction performance. In addition to supporting read-only analytics queries on snapshots, Minuet supports writable clones, so that users can create branching versions of the data. This feature can be used to support complex "what-if" queries as well as to facilitate wide-area replication and sharing. Our experiments show that Minuet scales to hundreds of cores and terabytes of memory, outperforms a modern industrial main-memory transactional system in several respects, and processes hundreds of thousands of B-tree operations concurrently with long-running scans. [SLIDES - PDF]
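To make the snapshot-and-clone mechanism concrete, here is a toy copy-on-write version tree in C. A plain binary search tree stands in for Minuet's distributed B-tree, so this illustrates the general technique rather than Minuet's implementation: an update copies only the root-to-leaf path and shares every untouched subtree, which makes a snapshot just a retained root pointer and a writable clone just a new root that diverges from it.

    /* Copy-on-write updates: old roots stay valid, immutable snapshots. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Node {
        int key, val;
        struct Node *left, *right;
    } Node;

    static Node *node(int key, int val, Node *l, Node *r) {
        Node *n = malloc(sizeof *n);
        n->key = key; n->val = val; n->left = l; n->right = r;
        return n;
    }

    /* Insert without mutating the old version: copy the search path,
     * share every subtree the update does not touch. */
    static Node *insert_cow(Node *root, int key, int val) {
        if (!root) return node(key, val, NULL, NULL);
        if (key < root->key)
            return node(root->key, root->val,
                        insert_cow(root->left, key, val), root->right);
        if (key > root->key)
            return node(root->key, root->val,
                        root->left, insert_cow(root->right, key, val));
        return node(key, val, root->left, root->right); /* overwrite */
    }

    static int lookup(const Node *root, int key, int *out) {
        while (root) {
            if (key == root->key) { *out = root->val; return 1; }
            root = key < root->key ? root->left : root->right;
        }
        return 0;
    }

    int main(void) {
        Node *v1 = insert_cow(insert_cow(NULL, 10, 100), 20, 200);
        Node *snapshot = v1;                 /* O(1) snapshot: keep the root */
        Node *v2 = insert_cow(v1, 20, 999);  /* writable clone diverges */

        int a = 0, b = 0;
        lookup(snapshot, 20, &a);
        lookup(v2, 20, &b);
        printf("snapshot sees %d, clone sees %d\n", a, b);  /* 200 vs 999 */
        return 0;
    }

A long-running scan can traverse the snapshot root without ever seeing, or blocking, the transactions updating the clone.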

BIO:



SPEAKER: Tom Ambrose, Emulex
High Performance IO - Can today's applications really take advantage of it?
As IO adapters support faster host interfaces (PCIe Gen 3) and link speeds (10G/40G/100G Ethernet, 8G/16G Fibre Channel), lower latencies, and more offloads, the entire system needs to scale to make use of these increases. Operating systems, hypervisors, applications, file systems, and more can all see increased performance with today's and future network/storage adapters. [SLIDES - PDF]

BIO: Tom Ambrose is the Sr. Director of Engineering, Systems/Technology/Architecture at Emulex Corporation. He has a B.S. ECE from Carnegie Mellon and 15+ years in ASIC design/management. He has been working for 5 years in his current Architecture role.



SPEAKER: Jiri Schindler, NetApp
From Server-side to Host-side: Flash memory for enterprise storage
Flash memory has, arguably, revolutionized the data storage industry. When used and managed appropriately, it can serve most of the I/O for enterprise applications with sub-millisecond access times for a relatively modest increase in system cost. Initially, it was deployed in the form of solid-state disks (SSDs) as a drop-in replacement for disk drives. To overcome the IOPS bottleneck of the shared storage backend, systems then started using PCI-attached cards with flash memory chips. Most recently, PCI-attached SSD designs have been appearing at the host. Many predict future hardware architectures with flash or other storage-class memory embedded in the chipset.

This talk will reflect on the evolution of flash memory in enterprise storage systems from the perspective of an enterprise storage vendor. It will outline the design rationale behind already-released products with SSDs and PCI-attached I/O accelerators in a centrally managed, dedicated storage system. Most of the talk will focus on exploratory work in combining host-side (client) flash deployments with the benefits of a centrally managed storage system. [SLIDES - PDF]
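One way to picture the host-side/central-management combination (a generic sketch with assumed names, not NetApp's design): host flash acts as a look-aside, write-through block cache, so reads are served locally when possible while every write still reaches the central system, which stays authoritative for snapshots, replication, and sharing.

    /* Hypothetical host-side flash cache, write-through to central storage. */
    #include <stdio.h>
    #include <string.h>

    #define CACHE_SLOTS 1024
    #define BLOCK_SIZE  4096

    typedef struct {
        long block;                  /* -1 = empty slot */
        char data[BLOCK_SIZE];
    } Slot;

    static Slot cache[CACHE_SLOTS];  /* stand-in for the host's flash */

    /* Stand-ins for the centrally managed backing store. */
    static void backend_read(long block, char *buf)        { memset(buf, (int)block, BLOCK_SIZE); }
    static void backend_write(long block, const char *buf) { (void)block; (void)buf; }

    static void cache_init(void) {
        for (int i = 0; i < CACHE_SLOTS; i++) cache[i].block = -1;
    }

    /* Read path: serve from local flash on a hit, else fetch and fill. */
    static void cached_read(long block, char *buf) {
        Slot *s = &cache[block % CACHE_SLOTS];
        if (s->block != block) {             /* miss: go to the central store */
            backend_read(block, s->data);
            s->block = block;
        }
        memcpy(buf, s->data, BLOCK_SIZE);
    }

    /* Write path: write-through keeps the central system authoritative. */
    static void cached_write(long block, const char *buf) {
        Slot *s = &cache[block % CACHE_SLOTS];
        memcpy(s->data, buf, BLOCK_SIZE);
        s->block = block;
        backend_write(block, buf);
    }

    int main(void) {
        char buf[BLOCK_SIZE];
        cache_init();
        cached_read(7, buf);    /* miss: fills host flash from central store */
        cached_read(7, buf);    /* hit: served locally */
        cached_write(7, buf);   /* write-through: central copy stays current */
        printf("block 7 cached: %s\n", cache[7].block == 7 ? "yes" : "no");
        return 0;
    }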

BIO: Jiri Schindler is a member of technical staff in NetApp's Advanced Technology Group. For the last three years he has been working on new storage system architectures that combine flash memory and hard disk drives (HDDs); his previous work proposed using flash memory to achieve a data allocation strategy for efficient execution of workloads dominated by small logical updates followed by large serial reads. Jiri is a Carnegie Mellon alum (he was the first PhD student of Prof. Greg Ganger).



SDI / ISTC SEMINAR QUESTIONS?
Karen Lindenfelser, 86716, or visit www.pdl.cmu.edu/SDI/