no code implementations • 7 Jun 2023 • Tan H. Nguyen, Dinkar Juyal, Jin Li, Aaditya Prakash, Shima Nofallah, Chintan Shah, Sai Chowdary Gullapally, Limin Yu, Michael Griffin, Anand Sampat, John Abel, Justin Lee, Amaro Taylor-Weiner
Second, dependence on domain labels prevents pathology images that lack such labels from being used to improve model performance.
no code implementations • 3 May 2023 • Sai Chowdary Gullapally, Yibo Zhang, Nitin Kumar Mittal, Deeksha Kartik, Sandhya Srinivasan, Kevin Rose, Daniel Shenker, Dinkar Juyal, Harshith Padigela, Raymond Biju, Victor Minden, Chirag Maheshwari, Marc Thibault, Zvi Goldstein, Luke Novak, Nidhi Chandra, Justin Lee, Aaditya Prakash, Chintan Shah, John Abel, Darren Fahy, Amaro Taylor-Weiner, Anand Sampat
Machine learning algorithms have the potential to improve patient outcomes in digital pathology.
no code implementations • 3 Jun 2022 • Syed Ashar Javed, Dinkar Juyal, Harshith Padigela, Amaro Taylor-Weiner, Limin Yu, Aaditya Prakash
Our Additive MIL models enable spatial credit assignment such that the contribution of each region in the image can be exactly computed and visualized.
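The additive structure described above can be sketched in a few lines: if the bag (slide) logit is defined as the sum of per-patch logits, each patch's contribution to the prediction is exact by construction. This is an illustrative numpy sketch, not the authors' implementation; the linear scorer, feature dimensions, and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_mil_logits(patch_feats, W, b):
    """Additive-MIL-style scoring (illustrative sketch).

    Each patch gets its own logit; the bag (slide) logit is their sum,
    so each patch's exact contribution is simply its own term.
    """
    patch_logits = patch_feats @ W + b      # (n_patches, n_classes)
    bag_logits = patch_logits.sum(axis=0)   # (n_classes,)
    return bag_logits, patch_logits

# toy bag: 5 patches, 8-dim features, 2 classes
feats = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 2))
b = np.zeros(2)

bag, per_patch = additive_mil_logits(feats, W, b)
# the spatial credit assignment is exact: patch contributions sum to the bag logit
assert np.allclose(per_patch.sum(axis=0), bag)
```

In practice the per-patch logits can be rendered as a heatmap over the slide, which is what makes the contribution of each region visualizable.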
no code implementations • 11 Apr 2022 • Syed Ashar Javed, Dinkar Juyal, Zahil Shanis, Shreya Chakraborty, Harsha Pokkalla, Aaditya Prakash
Machine Learning has been applied to pathology images in research and clinical practice with promising outcomes.
1 code implementation • CVPR 2019 • Aaditya Prakash, James Storer, Dinei Florencio, Cha Zhang
We show that by temporarily pruning and then restoring a subset of the model's filters, and repeating this process cyclically, overlap in the learned features is reduced, producing improved generalization.
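One cycle of the prune-and-restore scheme can be sketched as follows. This is a simplified stand-in: here filters are ranked by absolute cosine overlap with the other filters and the restored filters are re-initialized randomly, whereas the paper's actual criterion and re-initialization differ; the function name and fraction are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_and_restore(filters, frac=0.3):
    """One prune/restore cycle in the spirit of cyclic filter pruning (sketch).

    The filters that overlap most with the others (highest total absolute
    cosine similarity) are temporarily zeroed; after further training of the
    pruned network, they are restored, here with a random re-initialization
    as a stand-in for a smarter one.
    """
    n = filters.shape[0]
    unit = filters / np.linalg.norm(filters, axis=1, keepdims=True)
    sim = np.abs(unit @ unit.T)
    np.fill_diagonal(sim, 0.0)
    overlap = sim.sum(axis=1)               # how redundant each filter is
    k = int(frac * n)
    pruned_idx = np.argsort(overlap)[-k:]   # most-overlapping filters

    pruned = filters.copy()
    pruned[pruned_idx] = 0.0                # ... train the pruned net here ...

    restored = pruned.copy()
    restored[pruned_idx] = rng.normal(scale=0.1, size=(k, filters.shape[1]))
    return pruned, restored, pruned_idx

filters = rng.normal(size=(16, 9))          # 16 filters, flattened 3x3 kernels
pruned, restored, idx = prune_and_restore(filters)
assert np.all(pruned[idx] == 0.0)
```

Repeating this cycle is what reduces overlap in the learned features over the course of training.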
no code implementations • 2 Mar 2018 • Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, James Storer
As deep neural networks (DNNs) have been integrated into critical systems, several methods to attack these systems have been developed.
no code implementations • NAACL 2018 • Reza Ghaeini, Sadid A. Hasan, Vivek Datla, Joey Liu, Kathy Lee, Ashequl Qadir, Yuan Ling, Aaditya Prakash, Xiaoli Z. Fern, Oladimeji Farri
Instead, we propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to efficiently model the relationship between a premise and a hypothesis during encoding and inference.
Ranked #16 on Natural Language Inference on SNLI
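The dependent-reading idea can be illustrated with a toy recurrent reader: encode the hypothesis first, then read the premise starting from the hypothesis summary rather than from a zero state (and symmetrically in the full model). This sketch substitutes a plain tanh RNN for the paper's BiLSTM, and all dimensions and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_read(seq, h0, Wx, Wh):
    """Minimal tanh-RNN reader (a stand-in for the paper's BiLSTM)."""
    h = h0
    for x in seq:
        h = np.tanh(x @ Wx + h @ Wh)
    return h

d = 6
Wx, Wh = rng.normal(scale=0.5, size=(d, d)), rng.normal(scale=0.5, size=(d, d))
premise = rng.normal(size=(5, d))           # toy token embeddings
hypothesis = rng.normal(size=(4, d))

# dependent reading: the premise encoding is conditioned on the hypothesis
h_hyp = rnn_read(hypothesis, np.zeros(d), Wx, Wh)
h_prem_dep = rnn_read(premise, h_hyp, Wx, Wh)

# independent reading, shown for contrast
h_prem_ind = rnn_read(premise, np.zeros(d), Wx, Wh)
```

The point of the conditioning is that each sentence's representation already reflects the other sentence before the inference stage.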
3 code implementations • CVPR 2018 • Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, James Storer
Despite their robustness to natural variations, image pixel values can be manipulated, via small, carefully crafted, imperceptible perturbations, to cause a model to misclassify images.
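A simple input transformation in the spirit of this defense is to locally redistribute pixels: repeatedly replace a randomly chosen pixel with another pixel sampled from a small window around it, which disrupts carefully crafted perturbations while leaving natural images largely intact. This is a simplified sketch (the paper also applies a denoising step, omitted here); the function name and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def deflect_pixels(img, n_deflections=200, window=5):
    """Replace random pixels with nearby pixels (simplified sketch).

    Local pixel swaps act like noise to an adversary's pixel-precise
    perturbation but barely change the image's semantic content.
    """
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_deflections):
        y, x = rng.integers(h), rng.integers(w)
        dy, dx = rng.integers(-window, window + 1, size=2)
        ny = np.clip(y + dy, 0, h - 1)
        nx = np.clip(x + dx, 0, w - 1)
        out[y, x] = out[ny, nx]
    return out

img = rng.random((32, 32, 3))               # toy image in [0, 1)
defended = deflect_pixels(img)
assert defended.shape == img.shape
```

Because every written value is copied from the image itself, the output stays in the valid pixel range by construction.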
3 code implementations • 27 Dec 2016 • Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, James Storer
Here, we present a powerful CNN tailored to the specific task of semantic image understanding to achieve higher visual quality in lossy compression.
no code implementations • 6 Dec 2016 • Aaditya Prakash, Siyuan Zhao, Sadid A. Hasan, Vivek Datla, Kathy Lee, Ashequl Qadir, Joey Liu, Oladimeji Farri
We introduce condensed memory neural networks (C-MemNNs), a novel model with iterative condensation of memory representations that preserves the hierarchy of features in the memory.
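One plausible reading of "iterative condensation" can be sketched as follows: at each memory hop, the previous condensed state is projected down to half its size and concatenated with the current memory readout, so earlier hops survive in progressively compressed form and the state stays bounded. This is an illustrative sketch, not the paper's exact equations; the projection, nonlinearity, and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def condense(c_prev, readout, W_down):
    """One condensation hop (illustrative sketch).

    The old condensed state is halved in size and prepended-to by the
    fresh readout, forming a coarse hierarchy of memory features.
    """
    compressed = np.tanh(c_prev @ W_down)       # project to half the size
    return np.concatenate([readout, compressed])

d = 8
c = rng.normal(size=d)                          # initial readout as the state
for hop in range(3):
    readout = rng.normal(size=d)                # stand-in for attention over memory
    W_down = rng.normal(size=(c.size, c.size // 2))
    c = condense(c, readout, W_down)

# the condensed state never exceeds twice the readout size: 8 -> 12 -> 14 -> 15
assert c.size < 2 * d
```

The geometric shrinking is what keeps the representation compact while still preserving a trace of every earlier hop.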
no code implementations • WS 2016 • Sadid A. Hasan, Bo Liu, Joey Liu, Ashequl Qadir, Kathy Lee, Vivek Datla, Aaditya Prakash, Oladimeji Farri
Paraphrase generation is important in various applications such as search, summarization, and question answering due to its ability to generate textual alternatives while keeping the overall meaning intact.
1 code implementation • COLING 2016 • Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, Oladimeji Farri
To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation.