Toward Runtime-Throttleable Neural Networks

30 May 2019 · Jesse Hostetler

As deep neural network (NN) methods have matured, there has been increasing interest in deploying NN solutions to "edge computing" platforms such as mobile phones or embedded controllers. These platforms are often resource-constrained, especially in energy storage and power, but state-of-the-art NN architectures are designed with little regard for resource use. Existing techniques for reducing the resource footprint of NN models produce static models that occupy a single point in the trade-space between performance and resource use. This paper presents an approach to creating runtime-throttleable NNs that can adaptively balance performance and resource use in response to a control signal. Throttleable networks allow intelligent resource management, for example by allocating fewer resources in "easy" conditions or when battery power is low. We describe a generic formulation of throttling via block-level gating, apply it to create throttleable versions of several standard CNN architectures, and demonstrate that our approach allows smooth performance throttling over a wide range of operating points in image classification and object detection tasks, with only a small loss in peak accuracy.
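The abstract gives no implementation details, but a minimal sketch of the block-level gating idea it mentions could look like the following. This is an illustrative assumption, not the authors' code: the class name `ThrottleableConvBlock`, the choice of 8 parallel blocks, and the simple "run the first k blocks" gating rule are all hypothetical, stand-in choices for whatever gating scheme the paper actually uses.

```python
# Illustrative sketch (not the paper's implementation): a module whose compute
# scales with a throttle signal u in [0, 1] by gating parallel sub-blocks.
import math
import torch
import torch.nn as nn


class ThrottleableConvBlock(nn.Module):
    """n parallel conv blocks; a control input u selects how many are active."""

    def __init__(self, in_channels, out_channels, n_blocks=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
                    nn.BatchNorm2d(out_channels),
                    nn.ReLU(inplace=True),
                )
                for _ in range(n_blocks)
            ]
        )

    def forward(self, x, u):
        # Number of active blocks is proportional to the throttle setting u.
        k = max(1, math.ceil(u * len(self.blocks)))
        # Inactive blocks are skipped entirely, so compute scales roughly
        # linearly with u; active outputs are summed.
        out = self.blocks[0](x)
        for block in self.blocks[1:k]:
            out = out + block(x)
        return out


if __name__ == "__main__":
    layer = ThrottleableConvBlock(3, 16, n_blocks=8)
    x = torch.randn(1, 3, 32, 32)
    full = layer(x, u=1.0)        # all 8 blocks active
    throttled = layer(x, u=0.25)  # only 2 blocks active
    print(full.shape, throttled.shape)
```

In this toy version the control signal `u` is supplied at inference time, which matches the abstract's notion of adaptively trading accuracy for resource use in response to an external signal (e.g., low battery); how the network is trained to degrade gracefully under throttling is described in the paper itself.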
