no code implementations • EMNLP 2021 • Ayush Jain, Shashank Srivastava
Online forums such as ChangeMyView have been explored to study aspects of persuasion and argumentative quality in language.
no code implementations • SIGDIAL (ACL) 2020 • Ayush Jain, Maria Leonor Pacheco, Steven Lancette, Mahak Goindani, Dan Goldwasser
In this work, we study collaborative online conversations.
no code implementations • 9 Feb 2024 • Brian Yang, Huangyuan Su, Nikolaos Gkanatsios, Tsung-Wei Ke, Ayush Jain, Jeff Schneider, Katerina Fragkiadaki
Diffusion-ES samples trajectories during evolutionary search from a diffusion model and scores them using a black-box reward function.
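The sample-then-score loop described above can be sketched in a few lines. This is a minimal illustration of evolutionary search against a black-box reward, not the paper's implementation: `sample_trajectories` stands in for the diffusion model's sampler, and the perturbation of the elite is a crude proxy for the truncate-and-denoise mutation; the reward function here is a made-up example.

```python
import random

def sample_trajectories(num, horizon, elite=None, noise=0.2):
    """Stand-in for the diffusion model's sampler: draw candidate
    trajectories, either from scratch or by perturbing the current
    elite (a crude proxy for truncate-and-denoise mutation)."""
    if elite is None:
        return [[random.uniform(-1.0, 1.0) for _ in range(horizon)]
                for _ in range(num)]
    return [[a + random.gauss(0.0, noise) for a in elite]
            for _ in range(num)]

def reward(trajectory):
    """Hypothetical black-box reward: only scalar scores are used,
    never gradients (here, closeness of each action to 0.5)."""
    return -sum((a - 0.5) ** 2 for a in trajectory)

def diffusion_es(iterations=30, pop=64, horizon=8):
    """Evolutionary search: sample, score with the black-box
    reward, keep the best, and resample around it."""
    elite = None
    for _ in range(iterations):
        candidates = sample_trajectories(pop, horizon, elite)
        if elite is not None:
            candidates.append(elite)  # keep the search monotone
        elite = max(candidates, key=reward)
    return elite
```

Because only scalar rewards are consumed, the same loop works for non-differentiable objectives such as traffic-rule compliance scores.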
no code implementations • 6 Feb 2024 • Ayush Jain, Andrea Montanari, Eren Sasoglu
Collecting large quantities of high-quality data is often prohibitively expensive or impractical, and is a crucial bottleneck in machine learning.
no code implementations • 1 Feb 2024 • Ayush Jain, Ehsan Haghighat, Sai Nelaturi
This study introduces a two-scale Graph Neural Operator (GNO), namely, LatticeGraphNet (LGN), designed as a surrogate model for costly nonlinear finite-element simulations of three-dimensional latticed parts and structures.
1 code implementation • 4 Jan 2024 • Ayush Jain, Pushkal Katara, Nikolaos Gkanatsios, Adam W. Harley, Gabriel Sarch, Kriti Aggarwal, Vishrav Chaudhary, Katerina Fragkiadaki
The gap in performance between methods that consume posed images versus post-processed 3D point clouds has fueled the belief that 2D and 3D perception require distinct model architectures.
Ranked #1 on 3D Instance Segmentation on ScanNet200
no code implementations • 16 Nov 2023 • Ayush Jain, Marie-Laure Charpignon, Irene Y. Chen, Anthony Philippakis, Ahmed Alaa
Cosine similarity values are computed between (1) all biological pathways starting at the considered drug and ending at the disease of interest and (2) all biological pathways starting at drugs currently prescribed against that disease and ending at the disease of interest.
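The cosine-similarity comparison above reduces to a standard vector computation. A minimal sketch, assuming each pathway has already been embedded as a numeric vector (the vectors below are invented for illustration):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical pathway embeddings: a pathway from the candidate drug
# to the disease vs. one from a drug already prescribed for it.
candidate_pathway = [0.9, 0.1, 0.4]
prescribed_pathway = [0.8, 0.2, 0.5]
score = cosine_similarity(candidate_pathway, prescribed_pathway)
```

A score near 1 indicates the candidate drug's pathway closely resembles those of drugs already used against the disease.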
no code implementations • 5 Sep 2023 • Ayush Jain, Rajat Sen, Weihao Kong, Abhimanyu Das, Alon Orlitsky
A common approach assumes that the sources fall in one of several unknown subgroups, each with an unknown input distribution and input-output relationship.
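The subgroup model can be illustrated with a toy two-component mixture of linear regressions, fit by generic alternating minimization (assign each point to its better-fitting slope, then refit each slope). This is a textbook sketch under invented data, not the estimator studied in the paper:

```python
import random

def fit_two_line_mixture(xs, ys, iters=10):
    """Alternating minimization for y ~ w_k * x with two unknown
    slopes: assign each point to the better-fitting slope, then
    refit each slope by least squares through the origin."""
    w = [0.5, 2.5]  # crude initial slopes
    for _ in range(iters):
        groups = [[], []]
        for x, y in zip(xs, ys):
            k = min((0, 1), key=lambda j: (y - w[j] * x) ** 2)
            groups[k].append((x, y))
        for k, pts in enumerate(groups):
            sxx = sum(x * x for x, _ in pts)
            if sxx > 0:  # least-squares slope for this subgroup
                w[k] = sum(x * y for x, y in pts) / sxx
    return sorted(w)

# Synthetic sources from two subgroups with true slopes 1 and 3.
random.seed(1)
xs = [random.uniform(-1.0, 1.0) for _ in range(200)]
ys = [(1.0 if i % 2 else 3.0) * x + random.gauss(0.0, 0.05)
      for i, x in enumerate(xs)]
w_low, w_high = fit_two_line_mixture(xs, ys)
```

The hard part the paper addresses, which this sketch ignores, is that each subgroup may also have a different input distribution and the assignments are never observed.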
no code implementations • 3 Aug 2023 • Ayush Jain, Manikandan Padmanaban, Jagabondhu Hazra, Shantanu Godbole, Kommy Weldemariam
Large enterprises face a crucial imperative to achieve the Sustainable Development Goals (SDGs), especially goal 13, which focuses on combating climate change and its impacts.
no code implementations • 27 Apr 2023 • Nikolaos Gkanatsios, Ayush Jain, Zhou Xian, Yunchu Zhang, Christopher Atkeson, Katerina Fragkiadaki
Language is compositional; an instruction can express multiple relational constraints that must hold among the objects in a scene a robot is tasked to rearrange.
no code implementations • 1 Feb 2023 • Grace Zhang, Ayush Jain, Injune Hwang, Shao-Hua Sun, Joseph J. Lim
The ability to leverage shared behaviors between tasks is critical for sample-efficient multi-task reinforcement learning (MTRL).
no code implementations • 23 Nov 2022 • Abhimanyu Das, Ayush Jain, Weihao Kong, Rajat Sen
We begin the study of list-decodable linear regression using batches.
2 code implementations • NAACL 2022 • Abhinav Joshi, Ashwani Bhat, Ayush Jain, Atin Vikram Singh, Ashutosh Modi
Emotions are an inherent part of human interactions, and consequently, it is imperative to develop AI systems that understand and recognize human emotions.
Ranked #2 on Multimodal Emotion Recognition on IEMOCAP
no code implementations • 22 Mar 2022 • Mathieu Laurière, Sarah Perrin, Sertan Girgin, Paul Muller, Ayush Jain, Theophile Cabannes, Georgios Piliouras, Julien Pérolat, Romuald Élie, Olivier Pietquin, Matthieu Geist
One factor limiting further scale-up with RL is that existing algorithms for solving MFGs require the mixing of approximated quantities such as strategies or $q$-values.
no code implementations • 17 Mar 2022 • Ravneet Singh Arora, Sreejith Menon, Ayush Jain, Nehil Jain
Instant Search is a paradigm in which a search system retrieves answers on the fly as the user types.
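The on-the-fly retrieval idea can be illustrated with a minimal prefix index. This is a generic sketch, not the system studied in the paper: production instant-search systems layer ranking, typo tolerance, and inverted indexes on top of the basic prefix lookup shown here.

```python
from bisect import bisect_left

class PrefixIndex:
    """Minimal instant-search index: keep documents sorted, then
    answer each keystroke with a binary search over the prefix."""

    def __init__(self, titles):
        self.titles = sorted(t.lower() for t in titles)

    def search(self, prefix, limit=5):
        """Return up to `limit` titles starting with `prefix`."""
        prefix = prefix.lower()
        i = bisect_left(self.titles, prefix)
        out = []
        while i < len(self.titles) and self.titles[i].startswith(prefix):
            out.append(self.titles[i])
            if len(out) == limit:
                break
            i += 1
        return out

index = PrefixIndex(["Neural Search", "Network", "Neural Network"])
# Each keystroke triggers a fresh query: "n", "ne", "neu", ...
```

Because every keystroke issues a query, latency per lookup (here O(log n) plus the result scan) matters far more than in classical one-shot search.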
no code implementations • 15 Feb 2022 • Yi Hao, Ayush Jain, Alon Orlitsky, Vaishakh Ravindrakumar
We derive a near-linear-time and essentially sample-optimal estimator that establishes $c_{t, d}=2$ for all $(t, d)\ne(1, 0)$.
no code implementations • 11 Feb 2022 • Ayush Jain, Alon Orlitsky, Vaishakh Ravindrakumar
However, the vast majority of them approach optimal accuracy only when given a tight upper bound on the fraction of corrupt data.
1 code implementation • 16 Dec 2021 • Ayush Jain, Nikolaos Gkanatsios, Ishita Mediratta, Katerina Fragkiadaki
We propose a language grounding model that attends on the referential utterance and on the object proposal pool computed from a pre-trained detector to decode referenced objects with a detection head, without selecting them from the pool.
no code implementations • 9 Nov 2021 • Jayadev Acharya, Ayush Jain, Gautam Kamath, Ananda Theertha Suresh, Huanyu Zhang
We study the problem of robustly estimating the parameter $p$ of an Erdős–Rényi random graph on $n$ nodes, where a $\gamma$ fraction of nodes may be adversarially corrupted.
1 code implementation • ICLR 2022 • Ayush Jain, Norio Kosaka, Kyung-Min Kim, Joseph J Lim
Intelligent agents can solve tasks in a variety of ways depending on the action set at their disposal.
no code implementations • 29 Sep 2021 • Ayush Jain, Nikolaos Gkanatsios, Ishita Mediratta, Katerina Fragkiadaki
Object detectors are typically trained on a fixed vocabulary of objects and attributes that is often too restrictive for open-domain language grounding, where the language utterance may refer to visual entities in various levels of abstraction, such as a cat, the leg of a cat, or the stain on the front leg of the chair.
1 code implementation • 17 Jul 2021 • Ayush Jain, P. K. Srijith, Mohammad Emtiyaz Khan
Deep Gaussian Processes (DGPs) are multi-layer, flexible extensions of Gaussian processes but their training remains challenging.
no code implementations • 25 Jun 2021 • Clément L. Canonne, Ayush Jain, Gautam Kamath, Jerry Li
Specifically, we show the sample complexity to be \[\tilde \Theta\left(\frac{\sqrt{n}}{\varepsilon_2^{2}} + \frac{n}{\log n} \cdot \max \left\{\frac{\varepsilon_1}{\varepsilon_2^2},\left(\frac{\varepsilon_1}{\varepsilon_2^2}\right)^{\!\! 2}\right\}\right),\] providing a smooth tradeoff between the two previously known cases.
1 code implementation • 3 Feb 2021 • Arushi Jain, Gandharv Patil, Ayush Jain, Khimya Khetarpal, Doina Precup
Reinforcement learning algorithms are typically geared towards optimizing the expected return of an agent.
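The expected return mentioned above is just the average of discounted per-episode returns. A minimal sketch, with invented reward sequences; risk-aware methods of the kind this line of work studies look at the spread of these per-episode returns rather than only their mean:

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted return G = sum_t gamma^t * r_t for one episode,
    accumulated backwards for numerical convenience."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

def expected_return(episodes, gamma=0.99):
    """Monte Carlo estimate of the expected return over sampled
    episodes (lists of per-step rewards)."""
    returns = [discounted_return(ep, gamma) for ep in episodes]
    return sum(returns) / len(returns)
```

Two policies can share the same expected return while one has far riskier per-episode outcomes, which is exactly what an expectation-only objective fails to distinguish.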
1 code implementation • 28 Jan 2021 • Rohitash Chandra, Ayush Jain, Divyanshu Singh Chauhan
Deep learning models such as recurrent neural networks are well suited for modelling spatiotemporal sequences.
1 code implementation • 30 Nov 2020 • Zhaoyuan Fang, Ayush Jain, Gabriel Sarch, Adam W. Harley, Katerina Fragkiadaki
Experiments on both indoor and outdoor datasets show that (1) our method obtains high-quality 2D and 3D pseudo-labels from multi-view RGB-D data; (2) fine-tuning with these pseudo-labels improves the 2D detector significantly in the test environment; (3) training a 3D detector with our pseudo-labels outperforms a prior self-supervised method by a large margin; (4) given weak supervision, our method can generate better pseudo-labels for novel objects.
2 code implementations • ICML 2020 • Ayush Jain, Andrew Szot, Joseph J. Lim
A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances, such as making decisions from new action choices.
no code implementations • NeurIPS 2020 • Ayush Jain, Alon Orlitsky
Many latent-variable applications, including community detection, collaborative filtering, genomic analysis, and NLP, model data as generated by low-rank matrices.
1 code implementation • 30 Mar 2020 • Ayush Jain, Dr. N. M. Meenachi, Dr. B. Venkatraman
This paper contributes to research on understanding nuclear domain knowledge, which is evaluated on the Nuclear Question Answering Dataset (NQuAD) created by nuclear domain experts as part of this research.
no code implementations • NeurIPS 2020 • Ayush Jain, Alon Orlitsky
In many applications, data is collected in batches, some of which are corrupt or even adversarial.
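A simple baseline for this setting is the median of per-batch means: because the median is taken across batches, a minority of adversarial batches cannot drag the estimate far. This is a generic illustration of why batch structure helps, not the estimator developed in the paper:

```python
import statistics

def robust_mean_from_batches(batches):
    """Estimate a mean when some batches may be corrupt: average
    within each batch, then take the median across batches."""
    batch_means = [sum(b) / len(b) for b in batches]
    return statistics.median(batch_means)

clean = [[1.0, 1.2], [0.9, 1.1], [1.0, 1.0]]
corrupt = [[100.0, 100.0]]  # one adversarial batch
estimate = robust_mean_from_batches(clean + corrupt)
```

With three clean batches and one corrupt one, the median sits among the clean batch means, so the estimate stays near 1 rather than being pulled toward 100.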
no code implementations • NeurIPS 2020 • Yi Hao, Ayush Jain, Alon Orlitsky, Vaishakh Ravindrakumar
Sample- and computationally-efficient distribution estimation is a fundamental problem in statistics and machine learning.
no code implementations • ICML 2020 • Ayush Jain, Alon Orlitsky
Previous estimators for this setting ran in exponential time, and for some regimes required a suboptimal number of batches.
no code implementations • 11 Sep 2019 • Yuan Liu, Ayush Jain, Clara Eng, David H. Way, Kang Lee, Peggy Bui, Kimberly Kanada, Guilherme de Oliveira Marinho, Jessica Gallegos, Sara Gabriele, Vishakha Gupta, Nalini Singh, Vivek Natarajan, Rainer Hofmann-Wellenhof, Greg S. Corrado, Lily H. Peng, Dale R. Webster, Dennis Ai, Susan Huang, Yun Liu, R. Carter Dunn, David Coz
In this paper, we developed a deep learning system (DLS) to provide a differential diagnosis of skin conditions for clinical cases (skin photographs and associated medical histories).
no code implementations • WS 2018 • Ayush Jain, Vishal Singh, Sidharth Ranjan, Rajakrishnan Rajkumar, Sumeet Agarwal
According to the Uniform Information Density (UID) hypothesis (Levy and Jaeger, 2007; Jaeger, 2010), speakers tend to distribute information density uniformly across the signal while producing language.
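One common way to operationalize UID is the variance of per-word surprisal: a sentence distributes information more uniformly when that variance is low. The sketch below uses a unigram model for simplicity; actual UID studies, including this line of work, estimate surprisal from contextual (e.g. n-gram or neural) language models.

```python
import math

def surprisal_per_word(sentence, counts, total):
    """Per-word surprisal -log2 p(w) under a unigram model, where
    `counts` maps words to corpus frequencies and `total` is the
    corpus size (a deliberately simplistic stand-in)."""
    return [-math.log2(counts[w] / total) for w in sentence.split()]

def uid_score(sentence, counts, total):
    """Variance of per-word surprisal: lower variance means
    information is spread more uniformly across the signal."""
    s = surprisal_per_word(sentence, counts, total)
    mean = sum(s) / len(s)
    return sum((x - mean) ** 2 for x in s) / len(s)
```

Under this measure, syntactic alternations (such as optional word-order choices) can be compared by asking which variant yields the lower surprisal variance.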
no code implementations • ICML 2018 • Moein Falahatgar, Ayush Jain, Alon Orlitsky, Venkatadheeraj Pichapati, Vaishakh Ravindrakumar
We present a comprehensive understanding of three important problems in PAC preference learning: maximum selection (maxing), ranking, and estimating all pairwise preference probabilities, in the adaptive setting.