Seminars

PDL CONSORTIUM SPEAKER SERIES

A ONE-AFTERNOON SERIES OF SPECIAL SDI TALKS BY
PDL CONSORTIUM VISITORS

DATE: Thursday, May 9, 2013
TIME: 12:00 pm to 3:30 pm
PLACE: CIC 2101


SPEAKERS:
12:00 - 12:45 pm Gideon Mann, Google
12:45 - 1:30 pm Nisha Talagala, Fusion-io
1:30 - 1:45 pm break
1:45 - 2:30 pm Fanglu Guo, Symantec
2:30 - 3:15 pm Kevin Gomez, Seagate



SPEAKER: Gideon Mann, Google
Profiling Deployed Distributed Systems
[SLIDES - PDF]
Understanding the sources of latency within a deployed distributed system is difficult. Asynchronous control flow, variable workloads, pushes of new backend servers, and unreliable hardware can all contribute significantly to a job's performance. This talk presents recent work that builds a profiling tool for deployed distributed systems. The method uses distributed traces to estimate the code control flow and to predict and explain observed performance. The talk will sketch how this method has been applied to understand and tune large distributed systems at Google, and how it has been used in a differential-profiling fashion to understand the sources of latency changes.
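As a rough illustration of the general idea (a minimal sketch under assumed inputs, not Google's actual tooling or trace format), latency attribution from distributed traces can be thought of as aggregating per-operation span times across many requests:

```python
from collections import defaultdict

# Hypothetical trace format: each span is (trace_id, operation, start_ms, end_ms).
# The sketch aggregates mean latency per operation across many traces, the kind
# of summary a differential profiler could compare before and after a change.
def profile(spans):
    totals = defaultdict(float)
    counts = defaultdict(int)
    for _trace_id, op, start, end in spans:
        totals[op] += end - start
        counts[op] += 1
    return {op: totals[op] / counts[op] for op in totals}

spans = [
    (1, "rpc.lookup", 0.0, 4.0),
    (1, "rpc.fetch", 4.0, 10.0),
    (2, "rpc.lookup", 0.0, 6.0),
]
print(profile(spans))  # mean latency per operation, e.g. rpc.lookup -> 5.0 ms
```

Comparing two such profiles (one per software release, say) is one simple way to localize a latency regression to an operation.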

BIO: Gideon is a Staff Research Scientist at Google NY. He attended Brown University as an undergraduate, where he hung out in the AI lab and drank too much Mountain Dew. He then attended graduate school at Johns Hopkins University, worked in CLSP, and graduated in 2006 with a Ph.D. He still misses Charm City. He then did a post-doc at UMass Amherst with Andrew McCallum, working on weakly-supervised learning. In 2007, he joined Google.

At Google, his team works on applied machine learning. The Weatherman effort applies statistical methods to data center management. The team is also responsible for an internal machine learning middleware and its external interface, the Prediction API https://developers.google.com/prediction/. Publicly released in 2010, Prediction was an early machine-learning-as-a-service offering and remains an evolving platform for new techniques in machine learning.



SPEAKER: Nisha Talagala, Fusion-io
Flash Based Caches: Challenges and Opportunities
[SLIDES - PDF]
Flash memory is widely used for its fast random I/O performance in a gamut of enterprise storage applications. In this talk, we discuss the opportunities that flash presents as a cache for storage systems, and the unique challenges that arise when using flash as a cache. We analyze the added write pressure that cache workloads place on flash devices and propose optimizations at both the cache and flash management layers to improve endurance while maintaining or increasing cache hit rate. We demonstrate the individual and cumulative contributions of cache admission policy, cache eviction policy, flash garbage collection policy, and flash device configuration on a) hit rate, b) overall writes, and c) erases as seen by the solid-state cache device. We demonstrate memory-efficient admission policies that enable effective caching while reducing the capacity consumption and flash writes caused by low-value content entering the cache. We also outline techniques for enabling effective write caching while maintaining coherent state in back-end storage. Finally, we describe various caching technologies in production today that exploit flash in applications, at the file system level, and at the block device level.
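To make the write-pressure point concrete, here is a minimal sketch (an illustration under assumed policies, not Fusion-io's design) of a flash cache with a simple two-hit admission filter: a block is written to flash only on its second access, so one-touch traffic never consumes flash writes or erases:

```python
from collections import OrderedDict

class TwoHitFlashCache:
    """Illustrative flash cache with a two-hit admission filter.

    A block is admitted (written to flash) only on its second access,
    filtering out one-touch traffic and reducing flash write pressure;
    eviction is plain LRU.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # block -> cached flag, LRU order
        self.seen_once = set()      # candidates awaiting a second access
        self.flash_writes = 0       # writes the flash device absorbed

    def access(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)       # refresh LRU position
            return True                         # hit
        if block in self.seen_once:
            self.seen_once.discard(block)       # second access: admit
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict LRU block
            self.cache[block] = True
            self.flash_writes += 1
        else:
            self.seen_once.add(block)           # first access: remember only
        return False                            # miss
```

A stricter admission policy trades some hit rate on rarely re-referenced blocks for fewer flash writes and erases, which is the endurance/hit-rate balance the abstract describes.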

BIO: Nisha Talagala is Lead Architect at Fusion-io, where she works on innovation in non-volatile memory technologies and applications. Nisha has more than 10 years of expertise in software development, distributed systems, storage and I/O solutions, and non-volatile memory. She worked as technology lead for server flash at Intel, where she led server platform non-volatile memory technology development, storage-memory convergence, and partnerships. Prior to Intel, Nisha was the CTO of Gear6, where she designed and built clustered computing caches for high-performance I/O environments. Nisha also served at Sun Microsystems, where she developed storage and I/O solutions and worked on file systems. Nisha earned her PhD at UC Berkeley, where she did research on clusters and distributed storage. Nisha holds more than 30 patents in distributed systems, networking, storage, performance, and non-volatile memory.



SPEAKER: Fanglu Guo, Symantec
Building a High Performance Deduplication System
[SLIDES - PDF]
Modern deduplication has become quite effective at eliminating duplicates in data, thus multiplying the effective capacity of disk-based backup systems and enabling them as realistic tape replacements. Despite these improvements, single-node raw capacity is still mostly limited to tens or a few hundreds of terabytes, forcing users to resort to complex and costly multi-node systems, which usually only allow them to scale to single-digit petabytes. As the opportunities for deduplication efficiency optimizations become scarce, we are challenged with the task of designing deduplication systems that will effectively address the capacity, throughput, management, and energy requirements of the petascale age.

In this talk we will present our high-performance deduplication prototype, designed from the ground up to optimize overall single-node performance by making the best possible use of a node's resources, and to achieve three important goals: scaling to large capacity, providing good deduplication efficiency, and delivering near-raw-disk throughput. We improve single-node scalability by introducing progressive sampled indexing and grouped mark-and-sweep, and we optimize throughput by utilizing an event-driven, multi-threaded client-server interaction model. Our prototype implementation is able to scale to billions of stored objects with high throughput and little or no degradation of deduplication efficiency.
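A minimal sketch of the sampled-indexing idea (an illustration under assumptions, not the prototype's actual design): keep only a fraction of chunk fingerprints in memory, and when a sampled fingerprint hits, load the matching container's full fingerprint list, relying on duplicate locality within containers:

```python
import hashlib

def fp(chunk):
    """Content fingerprint of a chunk (SHA-256 here, for illustration)."""
    return hashlib.sha256(chunk).digest()

class SampledIndex:
    """Illustrative sampled fingerprint index.

    Only roughly 1-in-`rate` fingerprints are kept in the in-memory index,
    bounding memory per unit of capacity. A sampled hit prefetches the
    matching container's full fingerprint list, so runs of duplicates from
    the same container are then detected without further index misses.
    """
    def __init__(self, rate=2):
        self.rate = rate
        self.sampled = {}     # sampled fingerprint -> container id
        self.containers = {}  # container id -> full fingerprint set (on "disk")
        self.loaded = {}      # container id -> prefetched fingerprint set

    def _is_sampled(self, f):
        return f[0] % self.rate == 0  # sample by first fingerprint byte

    def store(self, container_id, chunks):
        fps = {fp(c) for c in chunks}
        self.containers[container_id] = fps
        for f in fps:
            if self._is_sampled(f):
                self.sampled[f] = container_id

    def is_duplicate(self, chunk):
        f = fp(chunk)
        for fps in self.loaded.values():   # check prefetched containers first
            if f in fps:
                return True
        cid = self.sampled.get(f)
        if cid is not None:                # sampled hit: prefetch container
            self.loaded[cid] = self.containers[cid]
            return True
        return False
```

The trade-off is that unsampled duplicates can be missed until some neighbor from the same container hits the sampled index; good chunk locality keeps that loss small while shrinking the memory footprint by the sampling rate.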

BIO: Fanglu Guo is a Technical Director at Symantec Research Labs. His experience includes storage, computer security, network security, and computer networks. He has published over twenty academic papers in refereed conferences and journals. He received Best Paper Awards from the 2005 Annual Computer Security Applications Conference (ACSAC) and the 2011 USENIX Annual Technical Conference. Additionally, he has filed several dozen patent applications. He holds a B.E. in EE from Xian Jiaotong University, an M.E. in EE from the Chinese Academy of Sciences, and a Ph.D. in CS from Stony Brook University.



SPEAKER: Kevin Gomez, Seagate
Quantum Tunneling through the Memory Wall - the case for computing in flash in the age of Big Data
[SLIDES - PDF]
This is an exciting time for computer architecture, which must completely reinvent itself to sustain performance growth after slamming into the power and memory walls. This talk spans device physics, semiconductor roadmaps and trends, market forces, intellectual-property secrecy, and the emergence of a new computing paradigm enabled by the Open Compute movement to explain why NAND flash, rather than any other non-volatile memory technology, may be the ideal point of convergence for storage and compute in the coming decade. The talk would not be complete without transaction-level architectural simulations of performance and energy use.

BIO: Kevin Gomez is a system architect in the SSD System Architecture and Advanced Development team at Seagate MN. He has been with Seagate for 24 years, including 8 years at Seagate Research in Pittsburgh, where he was Director of Research - Servomechanics as well as a researcher on storage-compute architectures. Kevin has strong ties to CMU: he collaborated on the Non-Linear Stochastic Processing project with Prof. Tom Mitchell and Prof. Dave Touretzky, designed the hardware prototype for the First Person Vision project with Prof. Takeo Kanade at the Quality of Life Technology Center, and, perhaps most excitingly, built the backup gyro-stabilized LIDAR gimbal used on Redteam's Sandstorm in the first DARPA Grand Challenge. Kevin was Director of Advanced Concepts at Seagate R&D in Singapore and is an EEE graduate of the National University of Singapore and Nanyang Technological University.



SDI / ISTC SEMINAR QUESTIONS?
Karen Lindenfelser, 86716, or visit www.pdl.cmu.edu/SDI/