no code implementations • 17 Apr 2024 • Ameesh Shah, Cameron Voloshin, Chenxi Yang, Abhinav Verma, Swarat Chaudhuri, Sanjit A. Seshia
In this work, we present Cycle Experience Replay (CyclER), a reward-shaping approach to this problem that supports continuous state and action spaces and the use of function approximation.
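CyclER's own shaping mechanism is not spelled out in this snippet; as background, the generic idea of reward shaping can be illustrated with classic potential-based shaping (Ng et al., 1999), which adds a shaping term without changing the optimal policy. The `potential` function below is a hypothetical heuristic over continuous states, not anything from the paper.

```python
# A minimal sketch of potential-based reward shaping -- NOT CyclER
# itself, just the standard idea of reshaping rewards while
# preserving optimal policies under the usual assumptions.

def potential(state):
    # Hypothetical potential: negative Euclidean distance to a goal
    # at the origin (higher potential closer to the goal).
    return -sum(x * x for x in state) ** 0.5

def shaped_reward(reward, state, next_state, gamma=0.99):
    # F(s, s') = gamma * phi(s') - phi(s) is added to the raw reward.
    return reward + gamma * potential(next_state) - potential(state)

# Moving from (3, 4) to the goal at the origin yields a shaping bonus
# of 5.0 on top of the raw reward of 0.0.
r = shaped_reward(0.0, state=(3.0, 4.0), next_state=(0.0, 0.0))
```

Because the shaping term telescopes along trajectories, it can densify sparse rewards in continuous spaces without altering which policies are optimal.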
no code implementations • 19 Jul 2023 • Ameesh Shah, Marcell Vazquez-Chanlatte, Sebastian Junges, Sanjit A. Seshia
Active learning is a well-studied approach to learning formal specifications, such as automata.
no code implementations • 15 Jun 2023 • Niklas Lauffer, Ameesh Shah, Micah Carroll, Michael Dennis, Stuart Russell
We apply this algorithm to analyze the strategically relevant information for tasks in both a standard and a partially observable version of the Overcooked environment.
no code implementations • 29 Mar 2023 • Ameesh Shah, Jonathan DeCastro, John Gideon, Beyazit Yalcinkaya, Guy Rosman, Sanjit A. Seshia
Advancements in simulation and formal methods-guided environment sampling have enabled the rigorous evaluation of machine learning models in a number of safety-critical scenarios, such as autonomous driving.
1 code implementation • 20 Dec 2021 • Marcell Vazquez-Chanlatte, Ameesh Shah, Gil Lederman, Sanjit A. Seshia
This paper considers the problem of learning temporal task specifications, e.g., automata and temporal logic, from expert demonstrations.
1 code implementation • NeurIPS 2020 • Ameesh Shah, Eric Zhan, Jennifer J. Sun, Abhinav Verma, Yisong Yue, Swarat Chaudhuri
This relaxed program is differentiable and can be trained end-to-end, and the resulting training loss is an approximately admissible heuristic that can guide the combinatorial search.
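The core move of relaxing a discrete program so it becomes differentiable can be illustrated on a single branch: replace a hard threshold test with a sigmoid blend so that gradients flow through the branching parameter. This toy is illustrative only, not the paper's actual relaxation or heuristic construction.

```python
# A minimal sketch of smoothing one discrete program construct
# (if x > threshold) into a differentiable surrogate.
import math

def hard_branch(x, threshold):
    # Discrete program: a step function, not differentiable
    # in `threshold`, so it cannot be trained by gradient descent.
    return 1.0 if x > threshold else 0.0

def soft_branch(x, threshold, temperature=0.1):
    # Sigmoid relaxation: smooth in both `x` and `threshold`.
    # As `temperature` -> 0, this recovers the hard branch.
    return 1.0 / (1.0 + math.exp(-(x - threshold) / temperature))

# Far from the threshold the relaxation agrees with the hard program,
# and exactly at the threshold it sits at 0.5.
lo, mid, hi = soft_branch(-2.0, 0.0), soft_branch(0.0, 0.0), soft_branch(2.0, 0.0)
```

Composing such relaxations over a whole program yields an end-to-end differentiable surrogate whose training loss can then serve as a (approximately admissible) heuristic for the discrete search, as the abstract describes.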
no code implementations • ICLR 2019 • Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Swarat Chaudhuri, Ankit B. Patel
We study the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.
no code implementations • 27 Feb 2019 • Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, Ankit B. Patel
We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language.