Search Results for author: Junyin Ye

Found 4 papers, 2 papers with code

Any-step Dynamics Model Improves Future Predictions for Online and Offline Reinforcement Learning

no code implementations • 27 May 2024 • Haoxin Lin, Yu-Yan Xu, Yihao Sun, Zhilong Zhang, Yi-Chen Li, Chengxing Jia, Junyin Ye, Jiaji Zhang, Yang Yu

In the online setting, ADMPO-ON demonstrates improved sample efficiency compared to previous state-of-the-art methods.
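
The excerpt reports only the outcome, but the title names the mechanism: an any-step dynamics model that predicts a state an arbitrary number of steps ahead in one shot, instead of composing one-step predictions, which is the usual way to limit compounding model error over long rollouts. Below is a hedged Python sketch of that general idea; the class name, GRU encoder, and tensor shapes are illustrative assumptions, not ADMPO's actual architecture.

```python
# Illustrative sketch of an "any-step" dynamics model: condition on the whole
# action sequence a_t .. a_{t+k-1} and predict s_{t+k} directly, so a single
# network handles any horizon k. Names and shapes are assumptions, not the
# paper's design.
import torch
import torch.nn as nn

class AnyStepDynamicsModel(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.state_proj = nn.Linear(state_dim, hidden_dim)  # init hidden from s_t
        self.encoder = nn.GRU(action_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)        # decode predicted s_{t+k}

    def forward(self, state: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # state:   (batch, state_dim)      -- current state s_t
        # actions: (batch, k, action_dim)  -- action sequence of any length k
        h0 = self.state_proj(state).unsqueeze(0)  # (1, batch, hidden)
        _, h = self.encoder(actions, h0)          # final hidden after k actions
        return self.head(h.squeeze(0))            # predicted s_{t+k}
```

A conventional one-step model is the k=1 special case; training on (s_t, a_{t:t+k}, s_{t+k}) tuples with varying k is what makes such a model "any-step".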

Episodic Return Decomposition by Difference of Implicitly Assigned Sub-Trajectory Reward

1 code implementation • 17 Dec 2023 • Haoxin Lin, Hongqiu Wu, Jiaji Zhang, Yihao Sun, Junyin Ye, Yang Yu

Real-world decision-making problems are usually accompanied by delayed rewards, which affect the sample efficiency of Reinforcement Learning, especially in the extremely delayed case where the only feedback is the episodic reward obtained at the end of an episode (see the sketch below).

Tasks: Decision Making
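
The extremely delayed setting above is what the title's "difference of implicitly assigned sub-trajectory reward" points at: assign a learned scalar to each sub-trajectory (prefix) of the episode and use differences of consecutive prefix values as per-step proxy rewards, which telescope back to the episodic return. Here is a minimal sketch of that recipe, assuming a GRU prefix encoder; the names and this exact parameterization are illustrative, not necessarily the paper's.

```python
# Hedged sketch of episodic return decomposition: the only feedback is the
# episodic return R, and per-step proxy rewards are differences of a learned
# prefix value f, so they telescope: sum_t r_t = f(full trajectory).
# Architecture and names are assumptions for illustration.
import torch
import torch.nn as nn

class PrefixRewardModel(nn.Module):
    def __init__(self, state_dim: int, hidden_dim: int = 128):
        super().__init__()
        # A GRU summarizes the sub-trajectory seen so far; a linear head maps
        # the summary to a scalar "implicitly assigned" reward.
        self.rnn = nn.GRU(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def proxy_rewards(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, T, state_dim) -- one full episode per row
        summaries, _ = self.rnn(states)                 # (batch, T, hidden)
        prefix_vals = self.head(summaries).squeeze(-1)  # f(s_{0:t}) for each t
        # r_0 = f(s_0); r_t = f(s_{0:t}) - f(s_{0:t-1}) for t > 0
        return torch.cat(
            [prefix_vals[:, :1], prefix_vals[:, 1:] - prefix_vals[:, :-1]], dim=1
        )

def decomposition_loss(model: PrefixRewardModel,
                       states: torch.Tensor,
                       episodic_return: torch.Tensor) -> torch.Tensor:
    # Fit the proxy rewards so they sum to the observed episodic return; by
    # telescoping, this regresses the full-trajectory value onto the return.
    predicted = model.proxy_rewards(states).sum(dim=1)
    return ((predicted - episodic_return) ** 2).mean()
```

The redistributed per-step rewards can then stand in for the missing dense signal when training a standard RL agent.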

Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning

2 code implementations • PMLR 2023 • Yihao Sun, Jiaji Zhang, Chengxing Jia, Haoxin Lin, Junyin Ye, Yang Yu

MOBILE quantifies uncertainty through the inconsistency of Bellman estimations under an ensemble of learned dynamics models, which can approximate the true Bellman error more closely, and penalizes the Bellman estimation according to this uncertainty (see the sketch below).

Tasks: D4RL, Offline RL (+3 more)
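
As a concrete reading of the penalty described above, the sketch below computes one Bellman target per dynamics model in the ensemble, takes the standard deviation across models as the inconsistency, and subtracts a scaled penalty from the mean target. The tensor layout and the coefficient name `beta` are assumptions for illustration, not MOBILE's exact implementation.

```python
# Minimal sketch of a Bellman-inconsistency penalty: targets that the ensemble
# of learned dynamics models disagrees on are pushed down, so the value
# function stays pessimistic where the models are uncertain.
import torch

def penalized_bellman_target(rewards: torch.Tensor,
                             next_q_values: torch.Tensor,
                             gamma: float = 0.99,
                             beta: float = 1.0) -> torch.Tensor:
    # rewards:       (n_models, batch) -- reward predicted by each dynamics model
    # next_q_values: (n_models, batch) -- Q-value at each model's predicted next state
    targets = rewards + gamma * next_q_values  # one Bellman target per model
    inconsistency = targets.std(dim=0)         # disagreement across the ensemble
    return targets.mean(dim=0) - beta * inconsistency
```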
