no code implementations • 26 Feb 2024 • Leonid Boytsov, Ameya Joshi, Filipe Condessa
By training them with a small learning rate for about one epoch, we obtained models that retained the accuracy of the backbone classifier while being unusually resistant to gradient attacks, including the APGD and FAB-T attacks from the AutoAttack package; we attributed this resistance to gradient masking.
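A minimal sketch of the recipe described above, assuming a pretrained backbone `model` and a standard labeled loader; the optimizer and learning-rate value are illustrative assumptions, not the authors' exact settings:

```python
import torch
import torch.nn as nn

def finetune_one_epoch(model, loader, lr=1e-5, device="cuda"):
    """Fine-tune a pretrained backbone for roughly one epoch at a small
    learning rate (hyperparameters here are placeholder assumptions)."""
    model.train().to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        opt.zero_grad()
        criterion(model(images), labels).backward()
        opt.step()
    return model
```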
1 code implementation • 6 Oct 2023 • Naren Dhyani, Jianqiao Mo, Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde
The Vision Transformer (ViT) architecture has emerged as the backbone of choice for state-of-the-art deep models for computer vision applications.
1 code implementation • 7 Aug 2023 • Benjamin Feuer, Ameya Joshi, Minh Pham, Chinmay Hegde
To our knowledge, this is the first result showing (near) state-of-the-art distributional robustness on limited data budgets.
1 code implementation • 17 Jul 2023 • Sudipta Banerjee, Govind Mittal, Ameya Joshi, Chinmay Hegde, Nasir Memon
The performance of automated face recognition systems is inevitably impacted by the facial aging process.
1 code implementation • 16 Jun 2023 • Md Zahid Hasan, Jiajing Chen, Jiyang Wang, Mohammed Shaiqur Rahman, Ameya Joshi, Senem Velipasalar, Chinmay Hegde, Anuj Sharma, Soumik Sarkar
Our results show that this framework offers state-of-the-art performance on zero-shot transfer and on video-based CLIP prediction of the driver's state on two public datasets.
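For illustration, a minimal zero-shot classification sketch using the Hugging Face CLIP API; the driver-state prompts below are hypothetical placeholders, not the paper's label set:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical driver-state prompts; the actual label set may differ.
STATES = ["a photo of an attentive driver",
          "a photo of a driver texting on a phone",
          "a photo of a drowsy driver"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def classify(frame: Image.Image) -> str:
    """Zero-shot: pick the text prompt most similar to the video frame."""
    inputs = processor(text=STATES, images=frame,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # image-text similarities
    return STATES[logits.softmax(dim=-1).argmax().item()]
```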
1 code implementation • 14 Jun 2023 • Kelly O. Marshall, Minh Pham, Ameya Joshi, Anushrut Jignasu, Aditya Balu, Adarsh Krishnamurthy, Chinmay Hegde
Current state-of-the-art methods for text-to-shape generation either require supervised training using a labeled dataset of pre-defined 3D shapes, or perform expensive inference-time optimization of implicit neural representations.
1 code implementation • 13 Oct 2022 • Benjamin Feuer, Ameya Joshi, Chinmay Hegde
Vision language (VL) models like CLIP are robust to natural distribution shifts, in part because CLIP learns from unstructured data using a technique called caption supervision; the model interprets image-linked texts as ground-truth labels.
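A minimal sketch of the CLIP-style contrastive objective underlying caption supervision, in which each image's linked caption acts as its ground-truth label; the encoder outputs are assumed given:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of (image, caption) pairs:
    matched pairs sit on the diagonal of the similarity matrix."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # (N, N) similarities
    targets = torch.arange(len(logits), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```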
no code implementations • 17 Jun 2022 • Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde
We first show that even with a highly accurate teacher, self-distillation allows a student to surpass the teacher in all cases.
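One standard form of the self-distillation loss referenced above, blending hard-label cross-entropy with a softened KL term from a frozen teacher of the same architecture; this is a generic sketch, not necessarily the paper's exact variant:

```python
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels,
                           alpha=0.5, T=4.0):
    """Hard-label cross-entropy plus temperature-softened KL divergence
    to a frozen teacher (alpha and T are illustrative defaults)."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    return alpha * hard + (1 - alpha) * soft
```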
no code implementations • 15 Jun 2022 • Benjamin Feuer, Ameya Joshi, Chinmay Hegde
State-of-the-art image classifiers trained on massive datasets (such as ImageNet) have been shown to be vulnerable to a range of both intentional and incidental distribution shifts.
no code implementations • 12 May 2022 • Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde
Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
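A minimal sketch of the standard randomized-smoothing prediction rule (majority vote under Gaussian input noise); the noise level and sample count are illustrative:

```python
import torch
import torch.nn.functional as F

def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Monte-Carlo estimate of the smoothed classifier: return the class
    most frequently predicted under isotropic Gaussian input noise."""
    counts = None
    with torch.no_grad():
        for _ in range(n_samples):
            logits = model(x + sigma * torch.randn_like(x))
            votes = F.one_hot(logits.argmax(dim=-1), logits.shape[-1])
            counts = votes if counts is None else counts + votes
    return counts.argmax(dim=-1)  # majority-vote class per input
```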
1 code implementation • 4 Feb 2022 • Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde
To reduce private inference (PI) latency, we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
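One way to make the ReLU-vs-linear choice differentiable is a learnable gate per activation, as sketched below; this is an assumed parameterization for illustration, and the paper's exact formulation may differ:

```python
import torch
import torch.nn as nn

class GatedReLU(nn.Module):
    """ReLU with a learnable gate: gate -> 1 keeps the ReLU, gate -> 0
    linearizes it to the identity (cheap under private inference)."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1))  # pre-sigmoid gate logit

    def forward(self, x):
        g = torch.sigmoid(self.alpha)
        return g * torch.relu(x) + (1 - g) * x
```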
no code implementations • 8 Oct 2021 • Ameya Joshi, Gauri Jagatap, Chinmay Hegde
Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks.
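A minimal sketch of that mechanism: an image is split into patch tokens, and self-attention is applied over the token sequence. Dimensions below are illustrative:

```python
import torch
import torch.nn as nn

class PatchAttention(nn.Module):
    """Minimal ViT-style block: patch embedding followed by
    self-attention over the resulting token sequence."""
    def __init__(self, patch=16, dim=192, heads=3):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                  # x: (B, 3, H, W)
        tokens = self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        out, _ = self.attn(tokens, tokens, tokens)
        return out
```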
no code implementations • NeurIPS 2021 • Minsu Cho, Aditya Balu, Ameya Joshi, Anjana Deva Prasad, Biswajit Khara, Soumik Sarkar, Baskar Ganapathysubramanian, Adarsh Krishnamurthy, Chinmay Hegde
Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.
no code implementations • 4 Oct 2021 • Biswajit Khara, Aditya Balu, Ameya Joshi, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian
We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs).
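A rough sketch of one such setup, assuming a fixed mesh and a supervised loss against solver data; the paper may instead minimize a discretized PDE residual, and all names here are placeholders:

```python
import torch
import torch.nn as nn

class MeshFieldNet(nn.Module):
    """Map PDE parameters to field values at the nodes of a fixed mesh
    (a supervised sketch; residual-based training is also possible)."""
    def __init__(self, n_params, n_nodes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, n_nodes))   # one field value per mesh node

    def forward(self, params):        # params: (B, n_params)
        return self.net(params)       # (B, n_nodes) nodal field prediction
```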
no code implementations • NeurIPS Workshop LMCA 2020 • Minsu Cho, Ameya Joshi, Xian Yeow Lee, Aditya Balu, Adarsh Krishnamurthy, Baskar Ganapathysubramanian, Soumik Sarkar, Chinmay Hegde
The paradigm of differentiable programming has considerably enhanced the scope of machine learning via the judicious use of gradient-based optimization.
no code implementations • ICML Workshop AML 2021 • Gauri Jagatap, Ameya Joshi, Animesh Basak Chowdhury, Siddharth Garg, Chinmay Hegde
In this paper we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks.
no code implementations • 24 Jul 2020 • Sergio Botelho, Ameya Joshi, Biswajit Khara, Soumik Sarkar, Chinmay Hegde, Santi Adavani, Baskar Ganapathysubramanian
Here we report on a software framework for data-parallel distributed deep learning that resolves the twin challenges of training these large SciML models: completing training in reasonable time and distributing their storage requirements.
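For context, a minimal data-parallel training sketch using PyTorch's standard DistributedDataParallel (not the authors' framework), assuming a `torchrun`-style launch that sets the process-group environment variables:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def train_ddp(model, dataset, epochs=1, lr=1e-3):
    """Each process holds a model replica and a data shard; DDP
    all-reduces gradients across replicas during backward()."""
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    model = DDP(model.to(rank), device_ids=[rank])
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(
                model(x.to(rank)), y.to(rank))
            loss.backward()       # triggers gradient all-reduce
            opt.step()
```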
1 code implementation • 28 Jun 2020 • Minsu Cho, Ameya Joshi, Chinmay Hegde
Deep neural networks are often highly overparameterized, prohibiting their use in compute-limited systems.
no code implementations • 4 Jun 2019 • Viraj Shah, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde
Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions.
1 code implementation • ICCV 2019 • Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde
We propose a novel approach to generate such 'semantic' adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model.
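A minimal sketch of that idea: optimize the generator's latent code rather than raw pixels, so the attack stays on the generator's output manifold. The `generator(z, cond)` signature and the plain cross-entropy loss are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def semantic_attack(generator, classifier, z, cond, label,
                    steps=50, lr=0.05):
    """Search the generator's range space for an image that fools the
    classifier by maximizing its loss with respect to the latent code."""
    z = z.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        image = generator(z, cond)       # stays on the generative manifold
        loss = -F.cross_entropy(classifier(image), label)  # push toward error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z, cond).detach()
```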