no code implementations • 18 Mar 2023 • Vithursan Thangarasa, Abhay Gupta, William Marshall, Tianda Li, Kevin Leong, Dennis Decoste, Sean Lie, Shreyas Saxena
In this work, we show the benefits of using unstructured weight sparsity to train only a subset of weights during pre-training (Sparse Pre-training) and then recover the representational capacity by allowing the zeroed weights to learn (Dense Fine-tuning).
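As a rough illustration of the sparse-to-dense idea, here is a minimal PyTorch sketch: a fixed random binary mask keeps most weights at zero during a sparse pre-training phase, and the mask is simply dropped for dense fine-tuning. The 90% sparsity level, layer size, and toy training loop are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Sparse pre-training then dense fine-tuning on a single linear layer.
# Sparsity level and sizes are illustrative, not the paper's setup.
torch.manual_seed(0)
layer = nn.Linear(256, 256)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

sparsity = 0.9
mask = (torch.rand_like(layer.weight) > sparsity).float()  # fixed random mask

def step(x, y, sparse: bool):
    opt.zero_grad()
    loss = nn.functional.mse_loss(layer(x), y)
    loss.backward()
    if sparse:
        # Sparse phase: zero the gradients of masked weights so they stay at zero.
        layer.weight.grad.mul_(mask)
    opt.step()
    return loss.item()

x, y = torch.randn(32, 256), torch.randn(32, 256)
with torch.no_grad():
    layer.weight.mul_(mask)      # start pre-training from a sparse weight matrix

for _ in range(100):             # sparse pre-training phase
    step(x, y, sparse=True)

for _ in range(100):             # dense fine-tuning: zeroed weights are now free to learn
    step(x, y, sparse=False)
```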
1 code implementation • 28 Jun 2022 • Vitaliy Chiley, Vithursan Thangarasa, Abhay Gupta, Anshul Samar, Joel Hestness, Dennis Decoste
However, training them requires substantial accelerator memory for saving large, multi-resolution activations.
Ranked #314 on Image Classification (using extra training data)
no code implementations • 27 May 2021 • Shreyas Saxena, Nidhi Vyas, Dennis Decoste
This setting is widely adopted under the assumption that loss functions for each instance are similar in nature, and hence, a common learning rate can be used.
no code implementations • 19 Apr 2021 • Mihir Pendse, Vithursan Thangarasa, Vitaliy Chiley, Ryan Holmdahl, Joel Hestness, Dennis Decoste
The inverted residual bottleneck block reduces computation with lightweight depthwise separable convolutions, which factor a standard convolution into a depthwise convolution and a pointwise convolution.
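For context, here is a minimal PyTorch sketch of an inverted residual bottleneck block in this style: a pointwise (1x1) expansion, a depthwise 3x3 convolution, and a pointwise projection, wrapped in a residual connection. The expansion factor and channel counts are illustrative choices, not the block sizes used in the paper.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual bottleneck (illustrative sizes)."""
    def __init__(self, channels: int, expansion: int = 6):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1, bias=False),   # pointwise expand
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden, bias=False),                     # depthwise conv
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),   # pointwise project
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)   # residual connection around the bottleneck

x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```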
no code implementations • ICLR 2020 • Vipul Gupta, Santiago Akle Serrano, Dennis Decoste
We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training.
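As a loose illustration of the parallel weight-averaging idea suggested by the name, the sketch below has several replicas take independent stochastic training steps from a common starting point and then averages their weights into one model. The replica count, data, and schedule are illustrative assumptions; this is not the paper's exact algorithm.

```python
import copy
import torch
import torch.nn as nn

base = nn.Linear(10, 1)
replicas = [copy.deepcopy(base) for _ in range(4)]

for replica in replicas:                                # in practice these run in parallel
    opt = torch.optim.SGD(replica.parameters(), lr=0.05)
    x, y = torch.randn(64, 10), torch.randn(64, 1)      # each replica sees different data
    for _ in range(50):
        opt.zero_grad()
        nn.functional.mse_loss(replica(x), y).backward()
        opt.step()

# Average the replicas' weights into the final model.
with torch.no_grad():
    for name, param in base.named_parameters():
        param.copy_(torch.stack(
            [dict(r.named_parameters())[name] for r in replicas]).mean(dim=0))
```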
1 code implementation • NeurIPS 2019 • Shreyas Saxena, Oncel Tuzel, Dennis Decoste
To the best of our knowledge, our work is the first curriculum learning method to show gains on large-scale image classification and detection tasks.
no code implementations • ICCV 2015 • Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis Decoste, Wei Di, Yizhou Yu
In this paper, we introduce hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category hierarchy.
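A minimal sketch of the hierarchical idea: a coarse-category head weights the outputs of per-category fine heads, and the final prediction is the coarse-probability-weighted average of the fine predictions. The two-level hierarchy, shared input features, and layer sizes below are illustrative assumptions, not the HD-CNN architecture itself.

```python
import torch
import torch.nn as nn

class HierarchicalClassifierSketch(nn.Module):
    def __init__(self, feat_dim=128, n_coarse=5, n_fine=100):
        super().__init__()
        self.coarse = nn.Linear(feat_dim, n_coarse)               # coarse-category head
        self.fine = nn.ModuleList(
            nn.Linear(feat_dim, n_fine) for _ in range(n_coarse)  # one fine head per coarse category
        )

    def forward(self, feats):
        p_coarse = self.coarse(feats).softmax(dim=-1)             # (B, n_coarse)
        p_fine = torch.stack(
            [head(feats).softmax(dim=-1) for head in self.fine], dim=1
        )                                                         # (B, n_coarse, n_fine)
        # Final prediction: fine predictions weighted by coarse-category probabilities.
        return (p_coarse.unsqueeze(-1) * p_fine).sum(dim=1)

feats = torch.randn(4, 128)
print(HierarchicalClassifierSketch()(feats).shape)  # torch.Size([4, 100])
```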
no code implementations • 20 Dec 2014 • Kevin Bache, Dennis Decoste, Padhraic Smyth
We describe a general framework for online adaptation of optimization hyperparameters by 'hot swapping' their values during learning.
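A hedged sketch of one way such hot swapping could look in practice: at each segment boundary, every candidate learning rate is tried from a common snapshot and the best-performing candidate is kept. The candidate set, segment length, and selection rule are illustrative assumptions, not the paper's exact framework.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
candidates = [0.3, 0.1, 0.03, 0.01]                    # illustrative learning-rate candidates
x, y = torch.randn(64, 10), torch.randn(64, 1)

def run_segment(steps=20):
    losses = []
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return sum(losses[-5:]) / 5                        # recent average loss for this segment

for segment in range(5):
    best_lr, best_loss = None, float("inf")
    snapshot = {k: v.clone() for k, v in model.state_dict().items()}
    for lr in candidates:
        model.load_state_dict(snapshot)                # restart the segment from the snapshot
        for g in opt.param_groups:
            g["lr"] = lr                               # hot swap the learning rate
        seg_loss = run_segment()
        if seg_loss < best_loss:
            best_lr, best_loss = lr, seg_loss
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)                  # continue from the best candidate
    for g in opt.param_groups:
        g["lr"] = best_lr
    print(f"segment {segment}: kept lr={best_lr}, loss={best_loss:.4f}")
```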
4 code implementations • 3 Oct 2014 • Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis Decoste, Wei Di, Yizhou Yu
In this paper, we introduce hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category hierarchy.
Ranked #174 on Image Classification on CIFAR-100
no code implementations • 22 Apr 2014 • Raffay Hamid, Atish Das Sarma, Dennis Decoste, Neel Sundaresan
We identify a novel instance of the background subtraction problem that focuses on extracting near-field foreground objects captured using handheld cameras.
no code implementations • 17 Dec 2013 • Raffay Hamid, Ying Xiao, Alex Gittens, Dennis Decoste
Kernel approximation using randomized feature maps has recently gained a lot of interest.
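For background, the sketch below shows the standard random Fourier feature construction (Rahimi and Recht) on which such randomized feature maps are based: an explicit feature map whose inner products approximate an RBF kernel. The dimensions and bandwidth are illustrative, and this is the generic construction rather than the specific method proposed in the paper.

```python
import numpy as np

# Random Fourier features approximating k(x, z) = exp(-||x - z||^2 / (2 * sigma^2)).
rng = np.random.default_rng(0)
d, D, sigma = 20, 2000, 1.0            # input dim, number of random features, bandwidth

W = rng.normal(scale=1.0 / sigma, size=(d, D))   # random projection directions
b = rng.uniform(0, 2 * np.pi, size=D)            # random phase offsets

def feature_map(X):
    # z(x) such that z(x) @ z(z).T approximates k(x, z)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(5, d))
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
K_approx = feature_map(X) @ feature_map(X).T
print(np.abs(K_exact - K_approx).max())   # small approximation error
```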
no code implementations • CVPR 2013 • Raffay Hamid, Dennis Decoste, Chih-Jen Lin
We present a robust and efficient technique for matching dense sets of points undergoing non-rigid spatial transformations.