no code implementations • 23 Apr 2024 • Clément Christophe, Praveen K Kanithi, Prateek Munjal, Tathagata Raha, Nasir Hayat, Ronnie Rajan, Ahmed Al-Mahrooqi, Avani Gupta, Muhammad Umar Salman, Gurpreet Gosal, Bhargav Kanakiya, Charles Chen, Natalia Vassilieva, Boulbaba Ben Amor, Marco AF Pimentel, Shadab Khan
This study presents a comprehensive analysis and comparison of two predominant fine-tuning methodologies - full-parameter fine-tuning and parameter-efficient tuning - within the context of medical Large Language Models (LLMs).
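To make the contrast between the two methodologies concrete (this is an illustrative sketch, not the study's training setup), the snippet below shows full-parameter fine-tuning next to LoRA-based parameter-efficient tuning with Hugging Face's peft library; the base model name and LoRA hyperparameters are placeholder assumptions.

```python
# Minimal sketch: full-parameter vs. parameter-efficient (LoRA) fine-tuning.
# Base model and hyperparameters are illustrative assumptions, not the paper's setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

# Full-parameter fine-tuning: every weight receives gradients.
for p in base.parameters():
    p.requires_grad = True

# Parameter-efficient tuning: freeze the base model and train small LoRA adapters.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
peft_model = get_peft_model(base, lora_cfg)
peft_model.print_trainable_parameters()  # only a small fraction of weights are trainable
```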
no code implementations • 4 Apr 2020 • Ahmed H. Shahin, Prateek Munjal, Ling Shao, Shadab Khan
We propose a novel approach for encoding user input from extreme points and corrective clicks in a scalable manner, allowing the network to work with a variable number of clicks, including corrective clicks for output refinement.
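A common way to handle a variable number of clicks is to rasterize them into fixed-size heatmap channels concatenated with the image, so the network input shape does not depend on the click count. The sketch below illustrates that general idea under assumed choices (Gaussian blobs, separate channels for extreme points and corrective clicks); it is not necessarily the paper's exact encoding.

```python
import numpy as np

def clicks_to_heatmap(clicks, height, width, sigma=10.0):
    """Rasterize an arbitrary number of (row, col) clicks into one Gaussian heatmap."""
    heatmap = np.zeros((height, width), dtype=np.float32)
    rows, cols = np.mgrid[0:height, 0:width]
    for r, c in clicks:
        blob = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, blob)
    return heatmap

# Assumed encoding: one channel for extreme points, one for corrective clicks,
# stacked with the image so the input size is independent of how many clicks exist.
extreme_points = [(20, 30), (80, 30), (50, 5), (50, 60)]
corrective_clicks = [(45, 40)]
image = np.random.rand(100, 100, 3).astype(np.float32)  # placeholder image
net_input = np.dstack([
    image,
    clicks_to_heatmap(extreme_points, 100, 100),
    clicks_to_heatmap(corrective_clicks, 100, 100),
])  # shape: (100, 100, 5)
```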
2 code implementations • CVPR 2022 • Prateek Munjal, Nasir Hayat, Munawar Hayat, Jamshid Sourati, Shadab Khan
Finally, we conclude with a set of recommendations on how to assess results obtained with a new AL algorithm, to ensure they are reproducible and robust under changes in experimental conditions.
Ranked #6 on Active Learning on CIFAR10 (10,000)
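The spirit of such recommendations can be illustrated with a small evaluation harness that runs an AL method over several random seeds and reports mean and standard deviation; the callable and settings below are assumptions for illustration, not the paper's released toolkit.

```python
import numpy as np

def evaluate_al_method(run_al_experiment, seeds=(0, 1, 2, 3, 4)):
    """Run an active-learning experiment across several seeds and summarize accuracy.

    `run_al_experiment(seed)` is a hypothetical callable that trains with the AL
    strategy under a fixed labeling budget and returns the final test accuracy.
    """
    accuracies = [run_al_experiment(seed) for seed in seeds]
    return float(np.mean(accuracies)), float(np.std(accuracies))

# Usage with a placeholder experiment function:
# mean_acc, std_acc = evaluate_al_method(my_al_run)
# print(f"accuracy: {mean_acc:.2f} +/- {std_acc:.2f}")
```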
no code implementations • 28 Sep 2019 • Prateek Munjal, Akanksha Paul, Narayanan C. Krishnan
In this work we introduce Implicit Discriminator in Variational Autoencoder (IDVAE), a novel hybrid architecture that combines a VAE and a GAN without requiring an explicit discriminator network.
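As a rough, assumed illustration of how a VAE and a GAN can be fused without a separate discriminator network, the sketch below shares an encoder trunk between the variational heads and a real/fake head; this is one plausible reading of the design, not necessarily the paper's exact architecture.

```python
import torch
import torch.nn as nn

class IDVAESketch(nn.Module):
    """Illustrative VAE/GAN hybrid: the encoder trunk doubles as the discriminator's
    feature extractor, so no separate discriminator network is needed (assumption)."""

    def __init__(self, in_dim=784, hidden=256, latent=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.real_fake = nn.Linear(hidden, 1)  # discriminator head on shared features
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.trunk(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(z)
        adv_logit = self.real_fake(h)  # implicit discriminator output
        return recon, mu, logvar, adv_logit
```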
no code implementations • CVPR 2019 • Akanksha Paul, Narayanan C. Krishnan, Prateek Munjal
It overcomes the hubness problem by learning a latent space that preserves the semantic relationships between labels while encoding discriminative information about the classes.
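To make the idea of preserving label semantics while remaining discriminative concrete, a hedged sketch of a combined objective is given below: one term aligns latent codes with class semantic vectors, another is a standard classification loss. The weighting and tensor shapes are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def semantic_latent_loss(latent, semantic_vecs, logits, labels, alpha=0.5):
    """Combine semantic preservation with discriminative classification (illustrative).

    latent:        (B, D) latent codes produced by the encoder
    semantic_vecs: (C, D) class semantic embeddings (e.g., attribute vectors)
    logits:        (B, C) class scores; labels: (B,) ground-truth class indices
    """
    # Pull each latent code toward the semantic embedding of its class.
    semantic_loss = F.mse_loss(latent, semantic_vecs[labels])
    # Keep the latent space discriminative for the seen classes.
    cls_loss = F.cross_entropy(logits, labels)
    return alpha * semantic_loss + (1 - alpha) * cls_loss
```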