no code implementations • 21 Jan 2024 • Man Luo, Xin Xu, Yue Liu, Panupong Pasupat, Mehran Kazemi
Language models, especially pre-trained large language models, have showcased remarkable few-shot in-context learning (ICL) abilities, adapting to new tasks with just a few demonstrations in the input context.
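As a rough illustration of the in-context learning setup (the task, demonstrations, and `call_lm` function below are hypothetical stand-ins, not from the paper):

```python
# Few-shot ICL: the model adapts to the task purely from demonstrations
# placed in its input context; no parameters are updated.
demonstrations = [
    ("The movie was a delight.", "positive"),
    ("I want my money back.", "negative"),
]

def build_prompt(demos, query):
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_prompt(demonstrations, "A stunning, heartfelt film.")
# answer = call_lm(prompt)  # `call_lm` stands in for any LLM API
```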
no code implementations • 3 Oct 2023 • Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi, Denny Zhou
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks, but typically needs labeled exemplars of the reasoning process.
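One way to sidestep hand-labeled exemplars, sketched here assuming a generic `call_lm` interface (the prompt wording is illustrative, not the paper's), is to have the model generate its own worked examples before answering:

```python
# Ask the model to self-generate related worked examples, then solve;
# this avoids curating labeled chain-of-thought exemplars by hand.
problem = "A train travels 60 miles in 1.5 hours. What is its speed?"

prompt = (
    f"Problem: {problem}\n\n"
    "First, recall three related problems and solve each one, showing\n"
    "your reasoning. Then solve the original problem step by step,\n"
    "ending with 'The answer is <value>'."
)
# solution = call_lm(prompt)  # `call_lm` is a hypothetical LLM API
```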
1 code implementation • NeurIPS 2023 • Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, Kristina Toutanova
Much of the previous work towards digital agents for graphical user interfaces (GUIs) has relied on text-based representations (derived from HTML or other structured data sources), which are not always readily available.
no code implementations • 24 May 2023 • Anthony Chen, Panupong Pasupat, Sameer Singh, Hongrae Lee, Kelvin Guu
These bottlenecks motivate the training of compact editors, which is challenging due to the scarcity of training data for this purpose.
no code implementations • 23 May 2023 • Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbrasaite, Vincent Y Zhao
In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs.
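A minimal sketch of retrieval-based demonstration selection, assuming a generic sentence encoder `embed` (text to vector); everything here is illustrative rather than the paper's implementation:

```python
import numpy as np

def select_demonstrations(query, pool, embed, k=4):
    """Pick the k training examples most similar to the query
    by cosine similarity, to serve as in-context demonstrations."""
    q = embed(query)
    scores = []
    for text, _label in pool:
        v = embed(text)
        scores.append(float(np.dot(q, v) /
                            (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)))
    top = np.argsort(scores)[::-1][:k]
    return [pool[i] for i in top]
```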
no code implementations • 5 Dec 2022 • Kevin Clark, Kelvin Guu, Ming-Wei Chang, Panupong Pasupat, Geoffrey Hinton, Mohammad Norouzi
Dynamic evaluation of language models (LMs) adapts model parameters at test time using gradient information from previous tokens and substantially improves LM performance.
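A minimal sketch of the dynamic-evaluation loop, assuming `model` is any PyTorch autoregressive LM wrapper that returns the cross-entropy loss on a chunk of tokens (details are illustrative):

```python
import torch

def dynamic_eval(model, token_chunks, lr=1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    total_loss = 0.0
    for chunk in token_chunks:
        loss = model(chunk)      # score the chunk first (evaluation)...
        total_loss += loss.item()
        opt.zero_grad()
        loss.backward()          # ...then adapt parameters on it
        opt.step()
    return total_loss / len(token_chunks)
```

The key property is the ordering: each chunk is scored before the model updates on it, so the reported loss remains a fair test-time measurement.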
1 code implementation • 17 Oct 2022 • Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, Kelvin Guu
Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog.
no code implementations • COLING 2022 • Yury Zemlyanskiy, Michiel de Jong, Joshua Ainslie, Panupong Pasupat, Peter Shaw, Linlu Qiu, Sumit Sanghai, Fei Sha
It then retrieves exemplars whose outputs are similar to the preliminary prediction and uses them to generate a final prediction.
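Sketched as runnable pseudocode, assuming hypothetical `parse` (utterance plus in-context exemplars to output), `embed` (output to vector), and a `train_set` of examples with an `output` field:

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def generate_and_retrieve(utterance, train_set, parse, embed, k=4):
    # 1) preliminary prediction with no exemplars in context
    draft = parse(utterance, exemplars=[])
    # 2) retrieve exemplars whose *outputs* resemble the draft
    ranked = sorted(train_set,
                    key=lambda ex: -cos(embed(draft), embed(ex.output)))
    # 3) final prediction conditioned on the retrieved exemplars
    return parse(utterance, exemplars=ranked[:k])
```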
no code implementations • 24 May 2022 • Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, Kristina Toutanova
Meanwhile, recent work has shown considerable improvements on many NLP tasks from model scaling.
2 code implementations • NAACL 2022 • Linlu Qiu, Peter Shaw, Panupong Pasupat, Paweł Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova
Generic unstructured neural networks have been shown to struggle on out-of-distribution compositional generalization.
1 code implementation • EMNLP 2021 • Panupong Pasupat, Yuan Zhang, Kelvin Guu
In practical applications of semantic parsing, we often want to rapidly change the behavior of the parser, such as enabling it to handle queries in a new domain, or changing its predictions on certain targeted queries.
no code implementations • Findings (EMNLP) 2021 • Jeremy R. Cole, Nanjiang Jiang, Panupong Pasupat, Luheng He, Peter Shaw
The dominant paradigm for semantic parsing in recent years is to formulate parsing as a sequence-to-sequence task, generating predictions with auto-regressive sequence decoders.
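For readers unfamiliar with the paradigm, this is the autoregressive decoding loop in miniature (the `step` function, mapping a prefix to a distribution over next tokens, is a stand-in):

```python
def greedy_decode(utterance, step, max_len=64, eos="</s>"):
    """Emit one output token at a time, each conditioned on the
    utterance and the previously generated prefix."""
    prefix = []
    for _ in range(max_len):
        dist = step(utterance, prefix)            # dict: token -> prob
        token = max(dist.items(), key=lambda kv: kv[1])[0]
        if token == eos:
            break
        prefix.append(token)
    return prefix
```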
no code implementations • ACL 2021 • Xinya Du, Luheng He, Qi Li, Dian Yu, Panupong Pasupat, Yuan Zhang
To address this problem, we introduce QA-driven slot filling (QASF), which extracts slot-filler spans from utterances with a span-based QA model.
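A small sketch of the idea using an off-the-shelf extractive QA model from Hugging Face (the slot questions and utterance are invented for illustration; the paper's models and question templates differ):

```python
from transformers import pipeline

qa = pipeline("question-answering")  # any extractive (span-based) QA checkpoint

utterance = "Set an alarm for 7 am tomorrow called morning run"
slot_questions = {
    "datetime": "When should the alarm go off?",
    "name": "What is the alarm called?",
}
# Phrase each slot as a natural question; the QA model returns a span.
slots = {slot: qa(question=q, context=utterance)["answer"]
         for slot, q in slot_questions.items()}
# e.g. {"datetime": "7 am tomorrow", "name": "morning run"}
```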
2 code implementations • 15 Apr 2021 • Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, Yuan Zhang
Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but have been found to struggle at out-of-distribution compositional generalization.
Ranked #3 on Semantic Parsing on CFQ
no code implementations • NAACL 2021 • Dian Yu, Luheng He, Yuan Zhang, Xinya Du, Panupong Pasupat, Qi Li
Few-shot learning arises in important practical scenarios, such as when a natural language understanding system needs to learn new semantic labels for an emerging, resource-scarce domain.
1 code implementation • ACL 2021 • Peter Shaw, Ming-Wei Chang, Panupong Pasupat, Kristina Toutanova
This has motivated new specialized architectures with stronger compositional biases, but most of these approaches have only been evaluated on synthetically-generated datasets, which are not representative of natural language variation.
1 code implementation • 12 Jul 2020 • Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang
Model-based reinforcement learning (RL) is appealing because (i) it enables planning and thus more strategic exploration, and (ii) by decoupling dynamics from rewards, it enables fast transfer to new reward functions.
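Point (ii) is easy to see in a toy tabular setting: once a transition model is in hand, re-planning for a new reward function is just another planning pass, with no further environment interaction. A minimal sketch (not the paper's algorithm):

```python
import numpy as np

def value_iteration(P, reward, gamma=0.95, iters=200):
    """P: [S, A, S] transition probabilities; reward: [S] vector
    over next states. Returns the state-value function."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = P @ (reward + gamma * V)   # expected return, shape [S, A]
        V = Q.max(axis=1)
    return V

# Same dynamics P, two reward functions -> two plans, no new samples:
# V1 = value_iteration(P, reward_task1)
# V2 = value_iteration(P, reward_task2)
```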
6 code implementations • ICML 2020 • Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.
Ranked #9 on Question Answering on WebQuestions
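The retrieve-then-read structure at the heart of REALM can be sketched as follows, with `embed_query`, `embed_doc`, and `read` (the reader LM) as stand-ins; note that in the paper the retriever is trained jointly with the masked-language-modeling objective, which this sketch omits:

```python
import numpy as np

def retrieve_then_read(query, docs, embed_query, embed_doc, read, k=5):
    q = embed_query(query)                        # [d]
    D = np.stack([embed_doc(d) for d in docs])    # [n, d]
    top = np.argsort(D @ q)[::-1][:k]             # top-k by inner product
    return read(query, [docs[i] for i in top])    # answer given retrievals
```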
no code implementations • IJCNLP 2019 • Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer
We propose a semantic parser that maps compositional utterances into Task Oriented Parse (TOP), a tree representation with intents and slots as labels of nested tree nodes.
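For concreteness, here is a TOP-style parse in the bracket notation of Gupta et al. (2018), with IN: intents and SL: slots labeling nested nodes (the specific intent and slot names are illustrative):

```python
utterance = "Driving directions to the Eagles game"
top_parse = (
    "[IN:GET_DIRECTIONS Driving directions to "
    "[SL:DESTINATION [IN:GET_EVENT the [SL:NAME_EVENT Eagles ] game ] ] ]"
)
# Nested intents (a slot filled by another intent) are what make the
# representation compositional.
```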
1 code implementation • NeurIPS 2019 • Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, Percy Liang
Given test cases as a mechanism to validate programs, we search over the space of possible translations of the pseudocode to find a program that passes the validation.
Ranked #2 on Program Synthesis on SPoC TestP
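The search component can be sketched as a simple filter over candidate programs, assuming each candidate defines a `solve` function (a convention invented for this sketch; SPoC itself searches over C++ translations):

```python
def search(candidates, test_cases):
    """Return the first candidate source string that passes all tests."""
    for src in candidates:
        env = {}
        try:
            exec(src, env)                 # define `solve` from the source
            if all(env["solve"](x) == y for x, y in test_cases):
                return src
        except Exception:
            continue                       # failed to compile/run: skip it
    return None
```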
no code implementations • ICLR 2019 • Evan Zheran Liu, Ramtin Keramati, Sudarshan Seshadri, Kelvin Guu, Panupong Pasupat, Emma Brunskill, Percy Liang
In our approach, a manager maintains an abstract MDP over a subset of the abstract states, which grows monotonically through targeted exploration (possible due to the abstract MDP).
no code implementations • 15 Feb 2019 • Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer
Semantic parsing using hierarchical representations has recently been proposed for task-oriented dialog with promising results (Gupta et al., 2018).
2 code implementations • EMNLP 2018 • Panupong Pasupat, Tian-Shun Jiang, Evan Zheran Liu, Kelvin Guu, Percy Liang
The web provides a rich, open-domain environment with textual, structural, and spatial properties.
4 code implementations • ICLR 2018 • Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, Percy Liang
Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates.
2 code implementations • EMNLP 2017 • Yuchen Zhang, Panupong Pasupat, Percy Liang
To learn a semantic parser from denotations, a learning algorithm must search over a combinatorially large space of logical forms for ones consistent with the annotated denotations.
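A toy version of that search, in a deliberately tiny arithmetic domain (real parsers search a vastly larger, structured space):

```python
from itertools import product

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def consistent_forms(constants, target):
    """Enumerate small logical forms, execute each, and keep those
    whose denotation matches the annotated answer."""
    forms = []
    for a, b in product(constants, repeat=2):
        for name, fn in OPS.items():
            if fn(a, b) == target:
                forms.append(f"({name} {a} {b})")
    return forms

# consistent_forms([2, 3, 6], 6) -> ['(* 2 3)', '(* 3 2)', '(+ 3 3)']
```

The combinatorial blow-up is immediate: the space grows exponentially with expression depth, which is why spurious forms (right answer, wrong reason) become a central problem.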
3 code implementations • ACL 2017 • Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, Percy Liang
Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself.
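A standard objective for this setting, stated from general knowledge of the literature rather than quoted from the paper, is maximum marginal likelihood, which marginalizes over all programs whose execution matches the labeled result:

```latex
J(\theta) \;=\; \sum_{(x,\,y)} \log \sum_{z \,:\, \mathrm{exec}(z) = y} p_\theta(z \mid x)
```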
2 code implementations • ACL 2016 • Panupong Pasupat, Percy Liang
A core problem in learning semantic parsers from denotations is picking out consistent logical forms--those that yield the correct denotation--from a combinatorially large space.
1 code implementation • ACL 2016 • Reginald Long, Panupong Pasupat, Percy Liang
With only denotations at training time, we must search over a combinatorially large space of logical forms, which is even larger with context-dependent utterances.
4 code implementations • IJCNLP 2015 • Panupong Pasupat, Percy Liang
Two important aspects of semantic parsing for question answering are the breadth of the knowledge source and the depth of logical compositionality.
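To make "depth of logical compositionality" concrete, here is the kind of multi-step table question the benchmark targets, answered with pandas (the table and question are invented for this sketch):

```python
import pandas as pd

# "Which country won the most medals after 2000?"
table = pd.DataFrame({
    "country": ["Norway", "Norway", "Canada", "Canada"],
    "year":    [1998, 2002, 1998, 2002],
    "medals":  [25, 25, 15, 17],
})
answer = (table[table.year > 2000]               # filter
          .groupby("country")["medals"].sum()    # aggregate
          .idxmax())                             # superlative
# answer == "Norway"
```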