Smaller World Models for Reinforcement Learning

12 Oct 2020 · Jan Robine, Tobias Uelwer, Stefan Harmeling

Sample efficiency remains a fundamental issue in reinforcement learning. Model-based algorithms try to make better use of data by simulating the environment with a model. We propose a new neural network architecture for world models based on a vector quantized variational autoencoder (VQ-VAE) to encode observations and a convolutional LSTM to predict the next embedding indices. A model-free PPO agent is trained purely on simulated experience from the world model. We adopt the setup introduced by Kaiser et al. (2020), which allows only 100K interactions with the real environment. We apply our method to 36 Atari environments and show that we achieve performance comparable to their SimPLe algorithm, while our model is significantly smaller.

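For a concrete picture of the architecture described above, the following is a minimal PyTorch sketch of a VQ-VAE that discretizes observation frames into a grid of codebook indices, paired with a convolutional LSTM that predicts the indices of the next frame. The layer sizes, codebook size, and 84x84 frame resolution are illustrative assumptions, not values from the paper, and action conditioning, reward prediction, and the VQ-VAE training losses are omitted for brevity.

```python
# Sketch of a discrete-latent world model: VQ-VAE encoder + ConvLSTM dynamics.
# All hyperparameters below are assumptions made for this example.
import torch
import torch.nn as nn


class VectorQuantizer(nn.Module):
    """Nearest-neighbour vector quantization with a straight-through gradient."""

    def __init__(self, num_codes=256, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z):                                      # z: (B, C, H, W)
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)            # (B*H*W, C)
        dist = torch.cdist(flat, self.codebook.weight)         # distance to each code
        indices = dist.argmin(dim=1)                           # discrete latent indices
        quantized = self.codebook(indices).view(b, h, w, c).permute(0, 3, 1, 2)
        quantized = z + (quantized - z).detach()               # straight-through estimator
        return quantized, indices.view(b, h, w)


class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell operating on the spatial latent grid."""

    def __init__(self, in_ch, hidden_ch, kernel=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)


class WorldModel(nn.Module):
    """Encode a frame to discrete latents, then predict the next frame's indices."""

    def __init__(self, num_codes=256, code_dim=64, hidden_ch=128):
        super().__init__()
        self.encoder = nn.Sequential(                # 84x84 grayscale -> 21x21 latent grid
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, code_dim, 4, stride=2, padding=1),
        )
        self.quantizer = VectorQuantizer(num_codes, code_dim)
        self.dynamics = ConvLSTMCell(code_dim, hidden_ch)
        self.head = nn.Conv2d(hidden_ch, num_codes, 1)   # per-cell logits over the codebook

    def forward(self, frame, state):
        z, _ = self.quantizer(self.encoder(frame))
        h, state = self.dynamics(z, state)
        next_index_logits = self.head(h)                 # (B, num_codes, H, W)
        return next_index_logits, state


if __name__ == "__main__":
    model = WorldModel()
    frame = torch.rand(1, 1, 84, 84)
    state = (torch.zeros(1, 128, 21, 21), torch.zeros(1, 128, 21, 21))
    logits, state = model(frame, state)
    print(logits.shape)  # torch.Size([1, 256, 21, 21])
```

In this setup, a model-free agent such as PPO would be trained on rollouts sampled from the learned dynamics rather than from the real environment; that training loop is not shown here.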

Results from the Paper


Task         Dataset                    Model                                        Metric   Value     Global Rank
Atari Games  Atari 2600 Bank Heist      Discrete Latent Space World Model (VQ-VAE)   Score    121.6     #44
Atari Games  Atari 2600 Breakout        Discrete Latent Space World Model (VQ-VAE)   Score    11.6      #55
Atari Games  Atari 2600 Crazy Climber   Discrete Latent Space World Model (VQ-VAE)   Score    59609.4   #41
Atari Games  Atari 2600 Freeway         Discrete Latent Space World Model (VQ-VAE)   Score    29        #36
Atari Games  Atari 2600 Pong            Discrete Latent Space World Model (VQ-VAE)   Score    20.2      #27
Atari Games  Atari 2600 Seaquest        Discrete Latent Space World Model (VQ-VAE)   Score    635       #51
