Wednesday, October 10, 2007 -- NOTE SPECIAL DAY
12:00 pm - 1:00 pm
PLACE: CIC 1301
John T. Daly
Los Alamos National Lab
Performance Challenges for Extreme Scale Computing
For scientific application users of large HPC production systems, the total time to solution is the most relevant metric for evaluating system performance. Unfortunately, the traditional metrics of application performance and parallel scaling only tell part of the story for very large systems where reliability becomes a factor in determining the overall performance of large and long-running applications. Additionally, the types of reliability metrics used by the HPC community for evaluating a platform may be overly simplistic or even misleading for determining the application throughput of the system.
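To illustrate how reliability feeds into total time to solution, the sketch below uses the first-order optimal checkpoint interval model common in the HPC resilience literature (Young's approximation, later refined by Daly). The function names and the example numbers (a 10-minute checkpoint dump, a 6-hour system MTBF) are illustrative assumptions, not figures from the talk.

```python
import math

def optimal_checkpoint_interval(dump_time, mtbf):
    """First-order estimate of the optimal compute time between checkpoints:
    tau ~= sqrt(2 * delta * M), where delta is the checkpoint dump time
    and M is the system mean time between failures (same time units)."""
    return math.sqrt(2.0 * dump_time * mtbf)

def effective_utilization(dump_time, mtbf):
    """Approximate fraction of wall-clock time spent on useful computation
    when checkpointing at the optimal interval (restart cost ignored)."""
    tau = optimal_checkpoint_interval(dump_time, mtbf)
    # Overhead per compute cycle: one checkpoint dump, plus the expected
    # rework of half a cycle lost to a failure (failure prob. ~ tau / M).
    overhead = dump_time / tau + tau / (2.0 * mtbf)
    return 1.0 - overhead

# Example (assumed numbers): 10-minute dumps on a system with 6-hour MTBF.
delta = 10 * 60.0   # checkpoint dump time, seconds
mtbf = 6 * 3600.0   # mean time between failures, seconds
tau = optimal_checkpoint_interval(delta, mtbf)
print(f"checkpoint every {tau / 60:.0f} min, "
      f"utilization ~ {effective_utilization(delta, mtbf):.0%}")
```

Under these assumed numbers the model predicts roughly three-quarters of the machine's raw throughput reaching the application, which is why peak performance and parallel-scaling metrics alone can overstate what a long-running job actually achieves.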
John Daly is a technical staff member in the High Performance Computing (HPC) division at the Los Alamos National Laboratory. He has distinguished himself as one of the largest single users of compute cycles in the country through his work running large-scale scientific calculations on the Red Storm, Purple, and BG/L platforms in tandem, consuming in excess of half a million processor hours a day. The remainder of his time is spent developing metrics and methodologies for measuring, modeling, and optimizing the performance and resilience of extreme scale HPC systems from the application's perspective. He holds degrees from Caltech and Princeton University, where he studied computational fluid dynamics under Antony Jameson.
Seminar Hosts: Garth Gibson, Bianca Schroeder
Visitor Coordinator: Angela Miller
or visit http://www.pdl.cmu.edu/SDI/