Learning with Delayed Payoffs in Population Games using Kullback-Leibler Divergence Regularization

13 Jun 2023 · Shinkyu Park, Naomi Ehrich Leonard

We study a multi-agent decision problem in large population games. Agents across multiple populations select strategies for repeated interactions with one another. At each stage of the interactions, agents use their decision-making model to revise their strategy selections based on payoffs determined by an underlying game. Their goal is to learn the strategies that constitute a Nash equilibrium of the game. However, when games are subject to time delays, conventional decision-making models from the population game literature cause the strategy revision process to oscillate or to converge to an equilibrium other than the Nash equilibrium. To address this problem, we propose the Kullback-Leibler Divergence Regularized Learning (KLD-RL) model and an algorithm that iteratively updates the model's regularization parameter. Using passivity-based convergence analysis techniques, we show that the KLD-RL model achieves convergence to the Nash equilibrium, without oscillation, for a class of population games subject to time delays. We demonstrate our main results numerically on a two-population congestion game and a two-population zero-sum game.
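The abstract only names the KLD-RL model, so the following is a minimal sketch rather than the paper's exact algorithm. It relies on a standard fact about KL-regularized maximization: the maximizer of y·p − η·KL(y‖μ) over the probability simplex has the closed form y_i ∝ μ_i exp(p_i/η). The toy congestion-style game, the inner Euler integration of the revision dynamics, and the rule for iteratively updating the regularization parameter μ (re-anchoring it at the current population state) are all our illustrative assumptions; the names `kld_rl_choice`, `payoff`, `eta`, and `mu` are not from the paper.

```python
import numpy as np

def kld_rl_choice(p, mu, eta):
    # Maximizer of y.p - eta * KL(y || mu) over the simplex has the
    # closed form y_i ∝ mu_i * exp(p_i / eta): a KL-regularized softmax.
    logits = np.log(mu) + p / eta
    logits -= logits.max()                 # for numerical stability
    y = np.exp(logits)
    return y / y.sum()

# Toy single-population congestion-style game (illustrative, not from the
# paper): payoffs fall as usage rises; the Nash equilibrium is x = (2/3, 1/3).
def payoff(x):
    return np.array([1.0 - 2.0 * x[0], -x[1]])

x = np.array([0.5, 0.5])                   # population state on the simplex
mu = x.copy()                              # regularization parameter (reference)
eta, dt = 0.1, 0.05
for _ in range(10):                        # outer loop: update the reference mu
    for _ in range(400):                   # inner loop: strategy revision dynamics
        x += dt * (kld_rl_choice(payoff(x), mu, eta) - x)
    mu = x.copy()                          # illustrative mu-update rule (assumption)
print(np.round(x, 3))                      # approaches [0.667, 0.333], the Nash
```

Note the fixed point of the outer iteration: if μ equals the rest point of the inner dynamics, then x ∝ x·exp(p/η) forces all supported strategies to earn equal payoffs, which is exactly the Nash condition. This is why re-anchoring the regularizer, rather than using a fixed reference, removes the equilibrium bias that plain entropy-style regularization introduces.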
