PDL Abstract

μC-States: Fine-grained GPU Datapath Power Management

Proceedings of the 25th International Conference on Parallel Architectures and Compilation Techniques (PACT 2016), Haifa, Israel, September 2016.

Onur Kayıran1, Adwait Jog2, Ashutosh Pattnaik3, Rachata Ausavarungnirun4, Xulong Tang3,
Mahmut T. Kandemir3, Gabriel H. Loh1, Onur Mutlu5,4, Chita R. Das3

1 Advanced Micro Devices, Inc.
2 College of William and Mary
3 Pennsylvania State University
4 Carnegie Mellon University
5 ETH Zürich

To improve the performance of Graphics Processing Units (GPUs) beyond simply increasing core count, architects have recently been adopting a scale-up approach: the peak throughput and individual capabilities of the GPU cores are increasing rapidly. This big-core trend in GPUs leads to various challenges, including higher static power consumption and lower and imbalanced utilization of the datapath components of a big core. As we show in this paper, two key problems ensue: (1) the lower and imbalanced datapath utilization can waste power, as an application does not always utilize all portions of the big core datapath, and (2) the use of big cores can lead to application performance degradation in some cases, due to the higher memory system contention caused by the larger number of memory requests generated by each big core.

This paper introduces a new analysis of datapath component utilization in big-core GPUs based on queuing theory principles. Building on this analysis, we introduce a fine-grained dynamic power- and clock-gating mechanism for the entire datapath, called μC-States, which aims to minimize power consumption by turning off or tuning down datapath components that are not bottlenecks for the performance of the running application. Our experimental evaluation demonstrates that μC-States significantly reduces both static and dynamic power consumption in a big-core GPU, while also significantly improving the performance of applications affected by high memory system contention. We also show that our analysis of datapath component utilization can guide scheduling and design decisions in a GPU architecture that contains heterogeneous cores.
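The gating idea described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the component names, thresholds, and the `update_gating` helper are all hypothetical, chosen only to show the high-level policy: sample each datapath component's queue occupancy, power-gate components whose queues stay nearly empty (i.e., that are not performance bottlenecks), and wake gated components whose demand returns.

```python
# Hypothetical sketch of occupancy-driven gating (assumed thresholds and
# component names; not the mechanism's real parameters or interface).

GATE_THRESHOLD = 0.10   # assumed: gate if average queue occupancy falls below 10%
WAKE_THRESHOLD = 0.50   # assumed: wake if occupancy of a gated unit exceeds 50%

def update_gating(components):
    """components: dict mapping component name -> (avg_queue_occupancy, is_gated).

    Returns a dict of per-component decisions: "gate", "wake", or "keep".
    """
    decisions = {}
    for name, (occupancy, is_gated) in components.items():
        if not is_gated and occupancy < GATE_THRESHOLD:
            decisions[name] = "gate"   # underutilized: turn off to save static power
        elif is_gated and occupancy > WAKE_THRESHOLD:
            decisions[name] = "wake"   # becoming a bottleneck: turn back on
        else:
            decisions[name] = "keep"   # no change needed this sampling interval
    return decisions

# Example sampling interval with three hypothetical datapath components:
print(update_gating({
    "int_alu":   (0.65, False),  # busy and on -> keep
    "fp64_unit": (0.02, False),  # idle and on -> gate
    "sfu":       (0.70, True),   # gated but now in demand -> wake
}))
```

A real design would of course make these decisions in hardware over fixed sampling windows, with hysteresis to avoid oscillating between gated and active states; the thresholds here merely stand in for that policy.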