A method to estimate the energy consumption of deep neural networks

Reference Type: Conference Paper

Yang, Tien-Ju, Yu-Hsin Chen, Joel Emer, and Vivienne Sze. 2017. “A Method to Estimate the Energy Consumption of Deep Neural Networks.” In 2017 51st Asilomar Conference on Signals, Systems, and Computers, 1916–20. https://doi.org/10.1109/ACSSC.2017.8335698.

Deep Neural Networks (DNNs) have enabled state-of-the-art accuracy on many challenging artificial intelligence tasks. While most of the computation currently resides in the cloud, it is desirable to embed DNN processing locally near the sensor due to privacy, security, and latency concerns or limitations in communication bandwidth. Accordingly, there has been increasing interest in the research community to design energy-efficient DNNs. However, estimating energy consumption from the DNN model is much more difficult than other metrics such as storage cost (model size) and throughput (number of operations). This is due to the fact that a significant portion of the energy is consumed by data movement, which is difficult to extract directly from the DNN model. This work proposes an energy estimation methodology that can estimate the energy consumption of a DNN based on its architecture, sparsity, and bitwidth. This methodology can be used to evaluate the various DNN architectures and energy-efficient techniques that are currently being proposed in the field and guide the design of energy-efficient DNNs. We have released an online version of the energy estimation tool at energyestimation.mit.edu. We believe that this method will play a critical role in bridging the gap between algorithm and hardware design and provide useful insights for the development of energy-efficient DNNs.
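To make the idea concrete, here is a minimal sketch of such an estimate: total energy is modeled as computation energy plus data-movement energy, scaled by sparsity and bitwidth. This is an illustration in the spirit of the abstract, not the paper's calibrated model; the function name and the per-operation energy costs below are placeholder assumptions (a real model would use hardware-measured costs, where a DRAM access is far more expensive than a MAC).

```python
def estimate_layer_energy(num_macs, dram_accesses, sram_accesses,
                          sparsity=0.0, bitwidth=16,
                          e_mac=1.0, e_sram=6.0, e_dram=200.0):
    """Return a relative energy estimate for one DNN layer.

    Hypothetical illustration, not the paper's actual model.
    sparsity: fraction of zero operands whose work can be skipped.
    bitwidth: operand bitwidth; energy assumed to scale linearly vs. 16-bit.
    e_mac/e_sram/e_dram: placeholder per-operation energy costs, chosen only
        to reflect the qualitative ordering DRAM >> SRAM >> MAC.
    """
    active = 1.0 - sparsity          # fraction of operations not skipped
    scale = bitwidth / 16.0          # crude linear bitwidth scaling
    compute = num_macs * e_mac * active * scale
    movement = (dram_accesses * e_dram + sram_accesses * e_sram) * active * scale
    return compute + movement

# Example: even with far fewer accesses than MACs, data movement dominates.
dense = estimate_layer_energy(1_000_000, 50_000, 500_000)
pruned = estimate_layer_energy(1_000_000, 50_000, 500_000,
                               sparsity=0.5, bitwidth=8)
```

Note how the number of operations (MACs) alone would rank these two configurations identically on a throughput metric, while the energy estimate separates them; that gap is the abstract's central point.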
