Understanding the Limitations of Existing Energy-Efficient Design Approaches for Deep Neural Networks
Reference Type:
Conference Paper
Chen, Yu-Hsin, Tien-Ju Yang, Joel Emer, and Vivienne Sze. 2018. “Understanding the Limitations of Existing Energy-Efficient Design Approaches for Deep Neural Networks.” In Proceedings of SysML Conference (SYSML’18). ACM, New York, NY, USA, 3 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, this comes at the cost of high computational complexity. Accordingly, there has been a significant amount of research on the topic of energy-efficient processing of DNNs, from the design of efficient DNN algorithms to the design of efficient DNN processors. However, in surveying these techniques, we found that there were certain limitations to the approaches used in this large body of work that need to be addressed. First, the number of weights and MACs is not sufficient for evaluating the energy consumption of DNNs; rather than focusing on weights and MACs, designers of efficient DNN algorithms should more directly target energy and incorporate it into their design. Second, the wide range of techniques used for efficient DNN algorithm design has resulted in a more diverse set of DNNs, and the DNN hardware used to process these DNNs should be sufficiently flexible to support these techniques efficiently. Many of the existing DNN processors rely on certain properties of the DNN that cannot be guaranteed (e.g., fixed weight sparsity, large number of channels, large batch size). In this work, we highlight recent and ongoing work that aims to address these limitations, namely energy-aware pruning, and a flexible accelerator (Eyeriss v2) that is computationally efficient across a wide range of diverse DNNs.
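The abstract's first point, that weight and MAC counts alone do not determine energy, can be illustrated with a toy layer-wise energy estimate that also charges for data movement. The sketch below is illustrative only: the per-access energy constants (E_MAC, E_DRAM_ACCESS, E_SRAM_ACCESS) and the layer sizes are made-up assumptions, not the energy model or numbers from the paper.

```python
# Illustrative sketch (not the paper's energy model): a toy per-layer energy
# estimate combining MAC count with data-movement cost, showing how a layer
# with fewer MACs can still dominate energy consumption.
from dataclasses import dataclass

# Hypothetical per-access energy costs in arbitrary units; real values depend
# on the memory hierarchy and dataflow of the target hardware.
E_MAC = 1.0            # energy per multiply-accumulate
E_DRAM_ACCESS = 200.0  # energy per off-chip (DRAM) access
E_SRAM_ACCESS = 6.0    # energy per on-chip buffer access

@dataclass
class Layer:
    name: str
    macs: int         # number of multiply-accumulates
    weights: int      # number of weights fetched
    activations: int  # number of activations moved

def layer_energy(layer: Layer) -> float:
    """Toy estimate: compute energy plus weight/activation movement energy."""
    compute = layer.macs * E_MAC
    # Assume weights stream from DRAM once; activations stay in on-chip buffers.
    data_movement = (layer.weights * E_DRAM_ACCESS
                     + layer.activations * E_SRAM_ACCESS)
    return compute + data_movement

# Made-up layer shapes, loosely in the spirit of a conv layer vs. a fully
# connected layer: the FC layer has far fewer MACs but many more weights.
layers = [
    Layer("conv1", macs=105_000_000, weights=35_000, activations=600_000),
    Layer("fc6",   macs=38_000_000,  weights=38_000_000, activations=10_000),
]

for l in sorted(layers, key=layer_energy, reverse=True):
    print(f"{l.name}: ~{layer_energy(l):,.0f} energy units "
          f"({l.macs:,} MACs, {l.weights:,} weights)")
```

Under these assumed costs, fc6 consumes far more energy than conv1 despite having fewer MACs, because every weight is fetched from DRAM; this is the kind of gap that an energy-aware pruning criterion targets directly instead of pruning by weight or MAC counts alone.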