Robotic Tracking Control with Kernel Trick-based Reinforcement Learning

In recent years, reinforcement learning has developed rapidly and is widely used to solve control problems, e.g., playing games. However, several challenges remain when reinforcement learning is applied to robotic control tasks, and kernel trick-based methods offer a way to address them. This work develops a kernel trick-based learning control method for robotic tracking control tasks. A reward system is proposed to speed up the learning process, and a kernel trick-based reinforcement learning tracking controller is then presented to perform tracking control on a robotic manipulator system. A critic system is introduced to evaluate the policy and, together with the reward system, accelerate the search for the optimal control policy. Finally, in comparison with a benchmark, the simulation results show that the proposed algorithm converges faster and executes tracking control tasks effectively, and that the proposed reward function and critic system are efficient.
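To make the idea of a kernel trick-based reinforcement learning tracking controller with a critic concrete, the sketch below shows a minimal actor-critic loop that uses radial-basis kernel features over the tracking error of a toy 1-DoF manipulator. All of the dynamics, reward weights, kernel centres, and learning rates here are illustrative assumptions; the paper's actual controller, reward system, and benchmark are not reproduced.

```python
# Hedged sketch only: kernel-feature (RBF) critic + simple actor update
# for tracking a sinusoidal reference on an assumed toy 1-DoF manipulator.
import numpy as np

rng = np.random.default_rng(0)

# Kernel feature map over the tracking-error state s = [e, e_dot].
centers = rng.uniform(-1.0, 1.0, size=(25, 2))   # assumed kernel centres
width = 0.5

def phi(s):
    d = centers - s                               # offsets to each centre
    return np.exp(-np.sum(d * d, axis=1) / (2 * width ** 2))

# Linear-in-features critic V(s) = w^T phi(s) and policy u = theta^T phi(s).
w = np.zeros(25)
theta = np.zeros(25)
gamma, alpha_c, alpha_a = 0.95, 0.1, 0.01         # assumed hyperparameters
dt = 0.02

def step(q, qd, u):
    """Assumed toy dynamics: q_ddot = u - 0.1 * q_dot."""
    qdd = u - 0.1 * qd
    return q + dt * qd, qd + dt * qdd

def reward(e, e_dot, u):
    """Assumed quadratic tracking reward (negative cost)."""
    return -(e ** 2 + 0.1 * e_dot ** 2 + 0.01 * u ** 2)

for episode in range(200):
    q, qd = 0.0, 0.0
    for k in range(200):
        t = k * dt
        s = np.array([q - np.sin(t), qd - np.cos(t)])     # tracking error
        f = phi(s)
        noise = 0.05 * rng.standard_normal()              # exploration
        u = float(theta @ f) + noise
        q, qd = step(q, qd, u)
        s_next = np.array([q - np.sin(t + dt), qd - np.cos(t + dt)])
        r = reward(s[0], s[1], u)
        # Critic: TD(0) update of the kernel-feature value function.
        delta = r + gamma * float(w @ phi(s_next)) - float(w @ f)
        w += alpha_c * delta * f
        # Actor: reinforce exploratory actions that yield positive TD error.
        theta += alpha_a * delta * noise * f
```

The critic's temporal-difference error serves the role the abstract describes: it evaluates the current policy and drives both the value-function and policy updates, which is what accelerates convergence toward a tracking policy.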
