The Data Center Observatory (DCO) is a centerpiece of Carnegie Mellon's attack on ever-growing data center operational costs. As a data center, it will provide a computation and storage utility to resource-hungry research activities such as data mining, design simulation, network intrusion detection, and visualization. As an observatory, it will provide invaluable real data to systems researchers seeking to understand the sources of operational costs and to evaluate novel solutions. Combining the two builds on Carnegie Mellon's tradition of actively using and showcasing new computing approaches, even as we invent them, allowing us to push the frontiers and stay at the forefront of technology.
The DCO enables us to target and attack the real challenges of data centers. First, it allows us to tackle one of the largest IT challenges of our time: data center operational costs. It provides a live environment to analyze, exposing time losses and resource costs. The DCO has been designed with detailed monitoring instrumentation at every level: on power/cooling, on the software systems, and on human administration time. Researchers will be able to create detailed breakdowns both over the long term and at particular points in time (e.g., during failure events), as well as drive automated problem detection and diagnosis research. Second, the DCO provides a real environment in which to test reasonably mature new technologies and measure how well they work. Without the DCO, researchers are left with little evidence to corroborate new ideas beyond argument, leading many to work on other problems.
The DCO enables a broad range of research activities, beyond “simply” measuring and understanding operational costs. A few initial thrusts include:
(1) Adaptive power management. Carnegie Mellon teamed with APC to take advantage of their novel hot-air containment approach, achieving energy savings from the beginning. We are also working with APC to explore new approaches to dynamically controlling which computers are powered on, based on application demand, saving energy at every level of the system.
(2) Automated storage management. Carnegie Mellon’s Parallel Data Lab (PDL) has been developing new architectures and tools for mitigating storage administration costs, and the Self-* Storage system that they are building will provide the DCO’s storage.
(3) Finger pointing in large-scale systems. Diagnosing problems is notoriously difficult in data center environments, and we are combining the DCO's detailed instrumentation with on-line and off-line machine learning tools to automate the process of identifying the root causes of problems that arise.
PDL Director
PDL Executive Director
PDL Business Administrator
Carnegie Mellon University
5000 Forbes Avenue
CIC Building, Second Floor
Pittsburgh, PA 15213-3891