Dynamic Programming-based Approximate Optimal Control for Model-Based Reinforcement Learning

22 Dec 2023 · Prakash Mallick, Zhiyong Chen

This article proposes an improved trajectory optimization approach for stochastic optimal control of dynamical systems affected by measurement noise, combining optimal control with maximum likelihood techniques to better reduce the cumulative cost-to-go. A modified optimization objective function that incorporates dynamic programming-based controller design is presented to handle noise in both the system and the sensors. Empirical results demonstrate that the approach reduces stochasticity and provides an intermediate optimization-switching step that efficiently balances exploration and exploitation on complex tasks by constraining policy parameters to those obtained from this improved optimization. The study also includes theoretical work on the uniqueness of control parameter estimates and leverages a likelihood function structure with established theoretical guarantees. Furthermore, a theoretical result is explored that bridges the gap between the proposed objective function and existing dualities between information theory (relative entropy) and optimal control.
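The paper's modified objective is not reproduced in this abstract, but the dynamic programming backbone that such controller designs build on can be illustrated with a standard finite-horizon LQR backward pass. The sketch below is a minimal, generic example of computing time-varying feedback gains from a cost-to-go recursion; the dynamics matrices, costs, and horizon are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lqr_backward_pass(A, B, Q, R, Q_T, horizon):
    """Finite-horizon discrete-time LQR via dynamic programming.

    Runs the Riccati recursion backward in time and returns the
    time-varying feedback gains K_t such that u_t = -K_t x_t
    minimizes sum_t (x_t' Q x_t + u_t' R u_t) + x_T' Q_T x_T.
    """
    P = Q_T                      # cost-to-go Hessian at the terminal step
    gains = []
    for _ in range(horizon):
        # Minimize the one-step Bellman backup over u (quadratic in u)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Propagate the cost-to-go one step backward in time
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()              # gains[t] now applies at time step t
    return gains

# Hypothetical example: a double integrator with dt = 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, Q_T = np.eye(2), np.array([[0.1]]), 10 * np.eye(2)
K_seq = lqr_backward_pass(A, B, Q, R, Q_T, horizon=50)
```

In a measurement-noise setting like the one the paper targets, the same backward pass would operate on state estimates rather than exact states, which is where the maximum likelihood component enters; that coupling is specific to the paper's method and is not shown here.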
