Single-step Options for Adversary Driving

In this paper, we use reinforcement learning for safe driving in adversarial settings. In our work, the knowledge in state-of-the-art planning methods is reused through single-step options, whose action suggestions are compared in parallel with primitive actions. We show two advantages of doing so. First, training this reinforcement learning agent is easier and faster than training the primitive-action agent. Second, our new agent outperforms the primitive-action reinforcement learning agent, human testers, and the state-of-the-art planning methods that it queries as skill options.
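The sketch below illustrates the core idea in the abstract: the agent's discrete action set is augmented with single-step options, each of which queries an existing planner for one action suggestion, and the learned Q-values rank these options alongside the primitive actions. This is a minimal illustration under assumed names (`select_action`, `planners`, `q_values` are hypothetical), not the paper's actual implementation.

```python
import numpy as np

def select_action(q_values, primitive_actions, planners, state,
                  epsilon=0.1, rng=None):
    """Epsilon-greedy selection over primitive actions plus one single-step
    option per planner; choosing an option executes that planner's action
    suggestion for a single step, then control returns to the learned policy."""
    rng = rng or np.random.default_rng()
    n_primitive = len(primitive_actions)
    n_total = n_primitive + len(planners)      # primitive actions + options
    assert len(q_values) == n_total

    if rng.random() < epsilon:                 # explore uniformly
        idx = int(rng.integers(n_total))
    else:                                      # exploit: Q-values compare
        idx = int(np.argmax(q_values))         # primitives and options in parallel

    if idx < n_primitive:
        return primitive_actions[idx]          # ordinary primitive action
    return planners[idx - n_primitive](state)  # option: one-step planner suggestion
```

In this sketch, each planner is simply a callable mapping the current state to a concrete action, so reusing an existing rule-based or planning controller amounts to appending one extra entry to the agent's action space.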
