Trading-Off Accuracy and Energy of Deep Inference on Embedded Systems: A Co-Design Approach

Deep neural networks have seen tremendous success for different modalities of data including images, videos, and speech. This success has led to their deployment in mobile and embedded systems for real-time applications. However, making repeated inferences using deep networks on embedded systems poses significant challenges due to constrained resources (e.g., energy and computing power). To address these challenges, we develop a principled co-design approach. Building on prior work, we develop a formalism referred to as coarse-to-fine networks (C2F Nets) that allows us to employ classifiers of varying complexity to make predictions. We propose a principled optimization algorithm to automatically configure C2F Nets for a specified tradeoff between accuracy and energy consumption for inference. The key idea is to select a classifier on-the-fly whose complexity is proportional to the hardness of the input example: simple classifiers for easy inputs and complex classifiers for hard inputs. We perform a comprehensive experimental evaluation using four different C2F Net architectures on multiple real-world image classification tasks. Our results show that optimized C2F Nets can reduce the energy-delay product by 27% to 60% with no loss in accuracy when compared to the baseline solution, where all predictions are made using the most complex classifier in the C2F Net.
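The input-adaptive idea in the abstract can be sketched as a cascade that tries classifiers from cheapest to most complex and stops at the first confident one. This is a minimal illustrative sketch, not the paper's actual C2F Net architecture or selection policy; the confidence-threshold rule, the toy classifiers, and the `c2f_predict` name are all assumptions for illustration.

```python
# Illustrative sketch of input-adaptive inference in a coarse-to-fine
# cascade: evaluate classifiers from cheapest to most complex and stop
# as soon as one is confident enough. The threshold rule and the toy
# classifiers below are hypothetical stand-ins, not the paper's method.

def c2f_predict(x, classifiers, threshold=0.9):
    """Return (label, stage_index) from the first confident classifier.

    classifiers: callables ordered coarse (cheap) to fine (expensive);
                 each returns (label, confidence in [0, 1]).
    threshold:   minimum confidence to accept an early prediction.
    """
    for stage, clf in enumerate(classifiers):
        label, confidence = clf(x)
        # Easy inputs exit at an early, low-energy stage ...
        if confidence >= threshold:
            return label, stage
    # ... hard inputs fall through to the most complex classifier.
    return label, len(classifiers) - 1

# Toy classifiers: the coarse one is only confident on "easy" inputs.
coarse = lambda x: ("cat", 0.95) if x == "easy" else ("cat", 0.55)
fine = lambda x: ("dog", 0.99)

print(c2f_predict("easy", [coarse, fine]))  # exits at stage 0
print(c2f_predict("hard", [coarse, fine]))  # falls through to stage 1
```

Because easy inputs never reach the expensive stages, average energy per inference drops while hard inputs still receive the full-complexity prediction.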

Jayakodi, Nitthilan Kannappan, Anwesha Chatterjee, Wonje Choi, Janardhan Rao Doppa, and Partha Pratim Pande. 2018. “Trading-Off Accuracy and Energy of Deep Inference on Embedded Systems: A Co-Design Approach.” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 37 (11): 2881–93.
