2 code implementations • 16 Apr 2021 • Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, Paulius Micikevicius
We present the design and behavior of Sparse Tensor Cores, which exploit a 2:4 (50%) sparsity pattern that leads to twice the math throughput of dense matrix units.
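As a rough illustration of that 2:4 pattern (a sketch only, not the paper's Sparse Tensor Core implementation), the NumPy snippet below keeps the two largest-magnitude weights in every contiguous group of four, zeroing the rest:

```python
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Keep the 2 largest-magnitude values in every contiguous group of 4.

    Illustrative sketch of 2:4 (50%) structured sparsity; assumes the
    total number of weights is divisible by 4.
    """
    flat = weights.reshape(-1, 4)                   # groups of 4 along the last axis
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]  # 2 smallest-magnitude entries per group
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)    # zero out the 2 smallest
    return (flat * mask).reshape(weights.shape)

# Example: a 2x8 weight matrix pruned so each group of 4 has exactly 2 zeros
w = np.random.randn(2, 8)
print(prune_2_4(w))
```

A matrix pruned this way has a fixed 50% sparsity structure, which is what lets the hardware skip half of the multiplications and double the effective math throughput.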
no code implementations • 12 Jun 2018 • Philip Colangelo, Nasibeh Nasiri, Asit Mishra, Eriko Nurvitadhi, Martin Margala, Kevin Nealis
This yields a trade-off between throughput and accuracy that can be tailored to different networks through various combinations of activation and weight data widths.
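To make the knob concrete, here is a minimal sketch of simulated uniform quantization at configurable weight and activation bit widths; the symmetric scheme and function name are assumptions for illustration, not the paper's FPGA implementation:

```python
import numpy as np

def quantize_uniform(x: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantize/de-quantize of x at the given bit width (sketch)."""
    levels = 2 ** (bits - 1) - 1                    # e.g. 127 representable steps at 8 bits
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / levels if max_abs > 0 else 1.0
    return np.round(x / scale) * scale              # quantize, then de-quantize to simulate

# Different (weight-bits, activation-bits) pairs trade accuracy for throughput,
# e.g. (8, 8), (4, 8), (2, 4).
weights = np.random.randn(64, 64)
activations = np.random.rand(64)
w_q = quantize_uniform(weights, bits=4)
a_q = quantize_uniform(activations, bits=8)
out = a_q @ w_q                                     # reduced-precision matrix-vector product
```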
Distributed, Parallel, and Cluster Computing • Hardware Architecture
no code implementations • 1 Mar 2018 • Asit Mishra, Debbie Marr
Today's high-performance deep learning architectures involve large models with numerous parameters.
no code implementations • ICLR 2018 • Asit Mishra, Debbie Marr
Low-precision numerics and model compression using knowledge distillation are popular techniques for lowering both the compute requirements and the memory footprint of deployed models.
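For reference, a minimal sketch of a standard knowledge-distillation loss (soft-target KL from teacher to student blended with hard-label cross-entropy); the temperature and weighting below are generic illustrative choices, not the paper's settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target KL (teacher -> student) blended with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),   # student log-probabilities at temperature T
        F.softmax(teacher_logits / T, dim=1),       # teacher probabilities at temperature T
        reduction="batchmean",
    ) * (T * T)                                     # rescale gradients for the softened targets
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example: batch of 8, 10 classes; the student would typically be the low-precision model
s = torch.randn(8, 10)                  # low-precision student logits
t = torch.randn(8, 10)                  # full-precision teacher logits
y = torch.randint(0, 10, (8,))
loss = distillation_loss(s, t, y)
```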
no code implementations • 20 Oct 2017 • Supriya Kapur, Asit Mishra, Debbie Marr
Similar to convolutional neural networks, recurrent neural networks (RNNs) typically suffer from over-parameterization.
no code implementations • ICLR 2018 • Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, Debbie Marr
We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, finding that this scheme matches or surpasses the accuracy of the baseline full-precision network.
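A toy sketch of the widening idea: scale a layer's filter-map count by a width multiplier while its weights and activations are held at reduced precision (the multiplier, layer shapes, and omission of the quantization step are illustrative assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

def widen(channels: int, k: float) -> int:
    """Scale a layer's filter-map count by the width multiplier k (illustrative)."""
    return int(round(channels * k))

# Baseline conv layer vs. a 2x-wide variant; in the wide reduced-precision setting,
# the extra filter maps compensate for the accuracy lost to low-bit weights/activations
# (the quantization itself is omitted here).
base = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1)
wide = nn.Conv2d(in_channels=widen(64, 2.0), out_channels=widen(128, 2.0),
                 kernel_size=3, padding=1)

x = torch.randn(1, 128, 32, 32)   # feature map with the widened channel count
y = wide(x)                       # the widened layer expects 128 input channels
```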
no code implementations • 10 Apr 2017 • Asit Mishra, Jeffrey J Cook, Eriko Nurvitadhi, Debbie Marr
For computer vision applications, prior works have shown that reducing the numeric precision of model parameters (network weights) in deep neural networks is effective, but also that reducing the precision of activations hurts model accuracy much more than reducing the precision of the parameters.