Search Results for author: Junbo Tan

Found 5 papers, 2 papers with code

AlignIQL: Policy Alignment in Implicit Q-Learning through Constrained Optimization

no code implementations • 28 May 2024 • Longxiang He, Li Shen, Junbo Tan, Xueqian Wang

IDQL reinterprets IQL as an actor-critic method and derives the weights of the implicit policy; however, these weights only hold for the optimal value function.
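As background (drawn from the IDQL formulation, not from this listing), the implicit-policy weight for an expectile-based critic has a simple closed form; a minimal sketch, where the expectile parameter `tau` and the toy values are illustrative assumptions:

```python
import numpy as np

def expectile_weight(q, v, tau=0.7):
    """IDQL-style implicit policy weight for an expectile critic:
    w(s, a) = |tau - 1[Q(s, a) < V(s)]|.
    This form is valid only when V is the optimal (converged) value
    function, which is the limitation the AlignIQL abstract points at."""
    q = np.asarray(q, dtype=float)
    v = np.asarray(v, dtype=float)
    indicator = (q < v).astype(float)
    return np.abs(tau - indicator)

# Toy check: an action with Q below V gets weight 1 - tau,
# an action with Q above V gets weight tau.
w = expectile_weight([1.0, 3.0], [2.0, 2.0], tau=0.7)  # -> [0.3, 0.7]
```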

Offline Goal-Conditioned Reinforcement Learning for Safety-Critical Tasks with Recovery Policy

1 code implementation • 4 Mar 2024 • Chenyang Cao, Zichen Yan, Renhao Lu, Junbo Tan, Xueqian Wang

Offline goal-conditioned reinforcement learning (GCRL) aims at solving goal-reaching tasks with sparse rewards from an offline dataset.
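For context, "sparse rewards" in goal-conditioned RL typically means a binary success signal; a minimal illustrative sketch (the Euclidean distance and the threshold `eps` are assumptions, not details from the paper):

```python
import numpy as np

def sparse_goal_reward(state, goal, eps=0.05):
    """Binary goal-reaching reward common in offline GCRL:
    1.0 when the state lies within eps of the goal, else 0.0.
    The sparsity of this signal is what makes offline GCRL hard."""
    dist = np.linalg.norm(np.asarray(state) - np.asarray(goal))
    return float(dist < eps)

r_hit = sparse_goal_reward([0.0, 0.0], [0.01, 0.0])  # within eps -> 1.0
r_miss = sparse_goal_reward([0.0, 0.0], [1.0, 0.0])  # outside eps -> 0.0
```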

DiffCPS: Diffusion Model based Constrained Policy Search for Offline Reinforcement Learning

1 code implementation • 9 Oct 2023 • Longxiang He, Li Shen, Linrui Zhang, Junbo Tan, Xueqian Wang

Constrained policy search (CPS) is a fundamental problem in offline reinforcement learning; it is generally solved by advantage-weighted regression (AWR).

Tasks: D4RL, Offline RL, +1

Data-Driven Robust Control for Discrete Linear Time-Invariant Systems: A Descriptor System Approach

no code implementations • 14 Mar 2022 • Jiabao He, Xuan Zhang, Feng Xu, Junbo Tan, Xueqian Wang

Given the recent surge of interest in data-driven control, this paper proposes a two-step method to study robust data-driven control for a parameter-unknown linear time-invariant (LTI) system affected by energy-bounded noise.
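To illustrate the data-driven setting (a noise-free simplification for intuition only; the paper's robust method handles energy-bounded noise, which this sketch omits): given state and input trajectories, the unknown system matrices can be recovered from stacked data matrices by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])  # dynamics unknown to the method
B = np.array([[0.0], [1.0]])

# Collect a short trajectory x_{k+1} = A x_k + B u_k with exciting inputs.
T = 10
X = np.zeros((2, T + 1))
U = rng.standard_normal((1, T))
X[:, 0] = rng.standard_normal(2)
for k in range(T):
    X[:, k + 1] = A @ X[:, k] + B @ U[:, k]

# Least-squares recovery of [A B]: X_+ = [A B] [X; U], solved via
# the pseudoinverse (exact here because the data is noise-free).
Z = np.vstack([X[:, :-1], U])
AB = X[:, 1:] @ np.linalg.pinv(Z)
```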

Data-Driven Controllability Analysis and Stabilization for Linear Descriptor Systems

no code implementations • 7 Dec 2021 • Jiabao He, Xuan Zhang, Feng Xu, Junbo Tan, Xueqian Wang

For a parameter-unknown linear descriptor system, this paper proposes data-driven methods to verify the system's type and controllability, and then to stabilize it.
