Abstract
For several reinforcement learning models in strategic-form games, convergence to action profiles that are not Nash equilibria may occur with positive probability under certain conditions on the payoff function. In this paper, we explore how an alternative reinforcement learning model, in which the strategy of each agent is perturbed by a strategy-dependent perturbation (or mutation) function, may exclude convergence to non-Nash pure-strategy profiles. This approach extends prior analysis of reinforcement learning in games that addresses the issue of convergence to saddle boundary points. It further provides a framework under which the effect of mutations can be analyzed in the context of reinforcement learning.
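To make the setting concrete, the sketch below illustrates one way such a perturbed reinforcement learning scheme can be simulated. It is not the paper's exact model: the Cross-type update rule, the specific mutation term (a pull toward the uniform strategy with hypothetical strength `delta`), the coordination-game payoff matrices, and the parameter values are all illustrative assumptions chosen only to show how a strategy-dependent perturbation keeps play off the boundary of the simplex.

```python
# Illustrative sketch (assumptions noted above): Cross-type reinforcement
# learning in a 2x2 strategic-form game, with a hypothetical strategy-dependent
# "mutation" term pushing each mixed strategy toward the uniform distribution.
# Payoffs are assumed to lie in [0, 1].
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical payoff matrices (player 1 rows, player 2 columns).
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # player 1's payoffs (coordination game)
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])   # player 2's payoffs (coordination game)

def mutation(x, delta=0.01):
    """Strategy-dependent perturbation: drift toward the uniform strategy,
    with strength delta (an assumed, illustrative choice)."""
    k = len(x)
    return delta * (np.ones(k) / k - x)

def cross_update(x, action, payoff, step=0.05):
    """Cross-type reinforcement: shift probability mass toward the played
    action in proportion to the realized payoff."""
    e = np.zeros_like(x)
    e[action] = 1.0
    return x + step * payoff * (e - x)

def simulate(T=20000):
    x = np.array([0.5, 0.5])   # player 1 mixed strategy
    y = np.array([0.5, 0.5])   # player 2 mixed strategy
    for _ in range(T):
        a = rng.choice(2, p=x)
        b = rng.choice(2, p=y)
        # Reinforcement step plus the mutation term for each player.
        x = cross_update(x, a, A[a, b]) + mutation(x)
        y = cross_update(y, b, B[a, b]) + mutation(y)
        # Renormalize to stay on the simplex (guards against numerical drift).
        x = np.clip(x, 0.0, 1.0); x /= x.sum()
        y = np.clip(y, 0.0, 1.0); y /= y.sum()
    return x, y

if __name__ == "__main__":
    print(simulate())
```

Without the `mutation` term this type of update can get absorbed at boundary profiles, including non-Nash ones; the perturbation keeps the strategies in the interior, which is the qualitative effect the paper analyzes.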
| Original language | English (US) |
|---|---|
| Pages (from-to) | 667-699 |
| Number of pages | 33 |
| Journal | International Journal of Game Theory |
| Volume | 44 |
| Issue number | 3 |
| DOIs | |
| State | Published - Aug 31 2015 |
Bibliographical note
Publisher Copyright: © 2014, Springer-Verlag Berlin Heidelberg.
Keywords
- Learning in games
- Reinforcement learning
- Replicator dynamics
ASJC Scopus subject areas
- Economics and Econometrics
- Mathematics (miscellaneous)
- Statistics and Probability
- Social Sciences (miscellaneous)
- Statistics, Probability and Uncertainty