Search Results for author: Fengshuo Bai

Found 5 papers, 0 papers with code

Efficient Preference-based Reinforcement Learning via Aligned Experience Estimation

no code implementations • 29 May 2024 Fengshuo Bai, Rui Zhao, Hongming Zhang, Sijia Cui, Ying Wen, Yaodong Yang, Bo Xu, Lei Han

To boost the learning loop, we propose SEER, an efficient PbRL method that integrates label smoothing and policy regularization techniques.

Reinforcement Learning
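For intuition on how label smoothing can enter a preference-based reward-model objective, here is a minimal sketch. The function name, tensor shapes, Bradley-Terry formulation, and smoothing coefficient are illustrative assumptions for this listing, not the SEER implementation described in the paper.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, seg_a, seg_b, pref, smoothing=0.1):
    """Bradley-Terry style preference loss with label smoothing (illustrative sketch).

    seg_a, seg_b: (batch, horizon, obs_act_dim) trajectory segments
    pref: (batch,) binary labels, 1 if seg_a is preferred over seg_b
    """
    # Sum predicted per-step rewards over each segment.
    r_a = reward_model(seg_a).sum(dim=1).squeeze(-1)
    r_b = reward_model(seg_b).sum(dim=1).squeeze(-1)

    # Smooth the hard preference labels toward 0.5 to soften noisy queries.
    target = pref.float() * (1 - smoothing) + 0.5 * smoothing

    # Probability that segment A is preferred, under the Bradley-Terry model.
    logits = r_a - r_b
    return F.binary_cross_entropy_with_logits(logits, target)
```

Policy regularization, the other component mentioned in the abstract, would typically appear as an additional penalty in the policy update (for example, a divergence term toward a reference policy); it is omitted from this sketch.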

Efficient Model-agnostic Alignment via Bayesian Persuasion

no code implementations • 29 May 2024 Fengshuo Bai, Mingzhi Wang, Zhaowei Zhang, Boyuan Chen, Yinda Xu, Ying Wen, Yaodong Yang

This paper explores an efficient method for aligning black-box large models using smaller models, introducing a model-agnostic and lightweight Bayesian Persuasion Alignment framework.

Code Generation · Mathematical Reasoning

Incentive Compatibility for AI Alignment in Sociotechnical Systems: Positions and Prospects

no code implementations • 20 Feb 2024 Zhaowei Zhang, Fengshuo Bai, Mingzhi Wang, Haoyang Ye, Chengdong Ma, Yaodong Yang

The burgeoning integration of artificial intelligence (AI) into human society brings forth significant implications for societal governance and safety.

Measuring Value Understanding in Language Models through Discriminator-Critique Gap

no code implementations • 30 Sep 2023 Zhaowei Zhang, Fengshuo Bai, Jun Gao, Yaodong Yang

We argue that truly understanding values in LLMs requires considering both "know what" and "know why".

Zero-shot Preference Learning for Offline RL via Optimal Transport

no code implementations • 6 Jun 2023 Runze Liu, Yali Du, Fengshuo Bai, Jiafei Lyu, Xiu Li

In this paper, we propose a novel zero-shot preference-based RL algorithm that leverages labeled preference data from source tasks to infer labels for target tasks, eliminating the requirement for human queries.

Offline RL
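As a rough illustration of transferring preference labels across tasks with an optimal-transport coupling, the sketch below pairs labeled source segments with unlabeled target segments and projects the labels through the transport plan. The feature representation, the entropic Sinkhorn solver, and the barycentric label projection are assumptions made here for illustration, not the algorithm proposed in the paper.

```python
import numpy as np

def sinkhorn_plan(cost, reg=0.05, n_iters=200):
    """Entropic OT plan between uniform marginals via Sinkhorn iterations."""
    n, m = cost.shape
    K = np.exp(-cost / reg)
    a, b = np.ones(n) / n, np.ones(m) / m
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    # Transport plan: diag(u) K diag(v).
    return u[:, None] * K * v[None, :]

def transfer_preference_labels(src_feats, tgt_feats, src_labels, reg=0.05):
    """Infer target-task preference labels from labeled source data (illustrative sketch).

    src_feats: (n_src, d) features of labeled source segment pairs
    src_labels: (n_src,) preference labels in [0, 1]
    tgt_feats: (n_tgt, d) features of unlabeled target segment pairs
    """
    # Pairwise squared-Euclidean cost between source and target features.
    cost = ((src_feats[:, None, :] - tgt_feats[None, :, :]) ** 2).sum(-1)
    plan = sinkhorn_plan(cost, reg)
    # Barycentric projection: weight source labels by the normalized plan columns.
    weights = plan / plan.sum(axis=0, keepdims=True)
    return (weights * src_labels[:, None]).sum(axis=0)
```

The inferred labels could then be used to train a reward model for offline RL on the target task without any new human queries, which is the setting the abstract describes.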
