
Sustainable Supercomputing for AI: GPU Power Capping at HPC Scale

Reference Type: Conference Paper

Zhao, Dan, Siddharth Samsi, Joseph McDonald, Baolin Li, David Bestor, Michael Jones, Devesh Tiwari, and Vijay Gadepally. 2023. “Sustainable Supercomputing for AI: GPU Power Capping at HPC Scale.” In Proceedings of the 2023 ACM Symposium on Cloud Computing, 588–96. Santa Cruz, CA, USA: ACM. https://doi.org/10.1145/3620678.3624793.

As research and deployment of AI grow, the computational burden to support and sustain its progress inevitably grows too. Training or fine-tuning state-of-the-art models in NLP, computer vision, and other domains virtually requires some form of AI hardware acceleration. Recent large language models require considerable resources to train and deploy, resulting in significant energy usage, potential carbon emissions, and massive demand for GPUs and other hardware accelerators. This surge carries large implications for energy sustainability at the HPC/datacenter level. In this paper, we study the effects of power-capping GPUs at a research supercomputing center on GPU temperature and power draw; we show significant decreases in both, reducing power consumption and potentially improving hardware lifespan, with minimal impact on job performance. To our knowledge, our work is the first to conduct and make available a detailed analysis of the effects of GPU power-capping at the supercomputing scale. We hope our work will inspire HPC centers and datacenters to further explore, evaluate, and communicate the impact of power-capping AI hardware accelerators for more sustainable AI.
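The study's core intervention is setting a per-GPU power limit below the board's default and observing the resulting power draw and temperature under real workloads. As a minimal, illustrative sketch of how such a cap could be applied on a single node (this is not the authors' deployment tooling; the 250 W value and helper names are placeholders, and setting limits via nvidia-smi typically requires root privileges), NVIDIA GPUs expose power limits through the nvidia-smi utility:

#!/usr/bin/env python3
# Illustrative sketch only: apply a uniform power cap to all GPUs on a node
# with nvidia-smi, then report power draw, power limit, and temperature.
# CAP_WATTS is a placeholder, not a value taken from the paper.
import subprocess

CAP_WATTS = 250  # hypothetical cap; the right value depends on the GPU model

def gpu_count() -> int:
    # List GPU indices to find how many devices are present.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index", "--format=csv,noheader"],
        check=True, capture_output=True, text=True,
    )
    return len(out.stdout.split())

def set_power_cap(index: int, watts: int) -> None:
    # Set the power management limit (watts) for one GPU; requires privileges.
    subprocess.run(["nvidia-smi", "-i", str(index), "-pl", str(watts)], check=True)

def report_power_and_temp() -> str:
    # Query per-GPU power draw, power limit, and temperature for monitoring.
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,power.draw,power.limit,temperature.gpu",
         "--format=csv"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout

if __name__ == "__main__":
    for i in range(gpu_count()):
        set_power_cap(i, CAP_WATTS)
    print(report_power_and_temp())

At supercomputing scale, a cap like this would more plausibly be rolled out through the cluster's management stack or job-scheduler node prologues rather than an ad hoc per-node script; the sketch only shows the underlying per-device mechanism.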

