no code implementations • 1 Feb 2023 • Grace Zhang, Ayush Jain, Injune Hwang, Shao-Hua Sun, Joseph J. Lim
The ability to leverage shared behaviors between tasks is critical for sample-efficient multi-task reinforcement learning (MTRL).
no code implementations • ICLR 2022 • Glen Berseth, Zhiwei Zhang, Grace Zhang, Chelsea Finn, Sergey Levine
Beyond simply transferring past experience to new tasks, our goal is to devise continual reinforcement learning algorithms that learn to learn, using their experience on previous tasks to learn new tasks more quickly.
1 code implementation • 1 Jul 2021 • Grace Zhang, Linghan Zhong, Youngwoon Lee, Joseph J. Lim
In this paper, we propose IDAPT, a novel policy transfer method with iterative "environment grounding" that alternates between (1) directly minimizing both visual and dynamics domain gaps by grounding the source environment in the target domains, and (2) training a policy on the grounded source environment.
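The alternation described above can be sketched as a simple control-flow skeleton. This is only an illustration of the loop structure, assuming generic callables: `ground_env` and `train_policy` are hypothetical stand-ins for IDAPT's actual grounding and policy-training components, not the paper's implementation.

```python
def iterative_grounding(source_env, target_env, ground_env, train_policy, iters=3):
    """Skeleton of the IDAPT-style alternation (illustrative, not the paper's code):
    (1) ground the source environment in the target domain,
    (2) train a policy on the grounded source environment.
    `ground_env` and `train_policy` are hypothetical stand-in callables."""
    policy = None
    grounded = source_env
    for _ in range(iters):
        grounded = ground_env(grounded, target_env, policy)  # step (1): shrink domain gaps
        policy = train_policy(grounded, policy)              # step (2): train on grounded env
    return policy
```

With toy stand-ins (an environment represented by a scalar "domain gap" that each grounding step halves), the loop runs as expected and the gap shrinks each iteration.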
5 code implementations • 1 Oct 2019 • Xue Bin Peng, Aviral Kumar, Grace Zhang, Sergey Levine
In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines.
Ranked #1 on Humanoid-v2 (OpenAI Gym)
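The idea of using "standard supervised learning methods as subroutines" can be sketched as advantage-weighted regression: weight each (state, action) pair by its exponentiated advantage, then run ordinary weighted regression toward the observed actions. The linear policy, temperature `beta`, and clipping value below are illustrative assumptions for a minimal sketch, not the paper's exact formulation.

```python
import numpy as np

def awr_weights(returns, values, beta=1.0, w_max=20.0):
    """Exponentiated-advantage weights, clipped for numerical stability
    (beta and w_max are illustrative hyperparameters)."""
    advantages = returns - values
    return np.minimum(np.exp(advantages / beta), w_max)

def weighted_regression_step(states, actions, weights, lr=0.1, theta=None):
    """One gradient step on a weighted squared-error loss for a linear policy
    a ~ states @ theta -- the 'standard supervised learning subroutine'."""
    if theta is None:
        theta = np.zeros((states.shape[1], actions.shape[1]))
    pred = states @ theta
    grad = states.T @ (weights[:, None] * (pred - actions)) / len(states)
    return theta - lr * grad
```

Samples with higher advantage get exponentially more say in the regression, so the policy moves toward actions that outperformed the baseline.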
1 code implementation • NeurIPS 2019 • Xue Bin Peng, Michael Chang, Grace Zhang, Pieter Abbeel, Sergey Levine
In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors.
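For Gaussian primitives, multiplicative composition has a closed form: a weighted product of Gaussians is again Gaussian, with precision equal to the weighted sum of the primitives' precisions. A minimal per-dimension sketch follows; note that here the gating weights are given as inputs, whereas MCP learns them from the state, so this shows only the composition rule itself.

```python
import numpy as np

def compose_gaussians(mus, sigmas, weights):
    """Multiplicatively compose Gaussian primitives per action dimension:
    pi(a) proportional to prod_i pi_i(a) ** w_i.
    mus, sigmas, weights: arrays of shape (num_primitives, action_dim).
    Returns the composite mean and standard deviation."""
    precisions = weights / sigmas ** 2                 # w_i / sigma_i^2
    var = 1.0 / precisions.sum(axis=0)                 # composite variance
    mu = var * (precisions * mus).sum(axis=0)          # precision-weighted mean
    return mu, np.sqrt(var)
```

Composing two equally weighted unit-variance primitives centered at 0 and 2 yields a sharper distribution centered at 1, illustrating how composition can produce behaviors none of the primitives exhibits alone.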