Nonparametric Gaussian Mixture Models for the Multi-Armed Bandit

8 Aug 2018  ·  Iñigo Urteaga, Chris H. Wiggins

We adopt Bayesian nonparametric mixture models to extend multi-armed bandits in general, and Thompson sampling in particular, to scenarios with reward model uncertainty. In the stochastic multi-armed bandit, the reward for the played arm is generated from an unknown distribution. Reward uncertainty, i.e., the lack of knowledge about the reward-generating distribution, induces the exploration-exploitation trade-off: a bandit agent must simultaneously learn the properties of the reward distribution and sequentially decide which action to take next. We extend Thompson sampling to such settings by adopting Bayesian nonparametric Gaussian mixture models for flexible reward density estimation. The proposed Bayesian nonparametric mixture model Thompson sampling sequentially learns the reward model that best approximates the true, yet unknown, per-arm reward distribution, achieving successful regret performance. Based on a novel posterior-convergence analysis, we derive an asymptotic regret bound for the proposed method. In addition, we empirically evaluate its performance in diverse and previously elusive bandit environments, e.g., with rewards outside the exponential family, subject to outliers, and with different per-arm reward distributions. We show that the proposed Bayesian nonparametric Thompson sampling outperforms state-of-the-art alternatives, both in averaged cumulative regret and in regret volatility. The method is valuable in the presence of bandit reward model uncertainty: it avoids stringent case-by-case model design choices, yet provides important regret savings.
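To make the exploration-exploitation mechanism concrete, the following is a minimal Thompson sampling sketch, not the paper's nonparametric mixture method: it uses a simple conjugate Gaussian posterior per arm with known unit observation noise, and the arm means and horizon are illustrative assumptions. At each round the agent draws one mean per arm from its posterior, plays the argmax, and updates that arm's posterior with the observed reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-armed bandit; the true means are unknown to the agent.
true_means = np.array([0.1, 0.5, 0.3])
n_arms, horizon = 3, 2000

# Per-arm Gaussian-Gaussian conjugate posterior: prior N(0, 1), unit reward noise.
post_mean = np.zeros(n_arms)
post_prec = np.ones(n_arms)  # posterior precision (1 / variance)
pulls = np.zeros(n_arms)

for t in range(horizon):
    # Thompson sampling: sample a plausible mean per arm, play the best sample.
    sampled = rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
    a = int(np.argmax(sampled))
    r = rng.normal(true_means[a], 1.0)  # observe a noisy reward
    # Conjugate posterior update for the played arm.
    post_prec[a] += 1.0
    post_mean[a] += (r - post_mean[a]) / post_prec[a]
    pulls[a] += 1
```

Sampling from the posterior (rather than using its mean) is what drives exploration: arms with uncertain estimates occasionally produce large samples and get played, while well-estimated suboptimal arms are played less and less. The paper replaces the single-Gaussian posterior here with a Bayesian nonparametric Gaussian mixture per arm, so the same loop applies without assuming a parametric reward family.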
