Abstract
Log-linear learning is an algorithm that provides guarantees on the fraction of time that the joint action profile spends at a potential function maximizer in potential games. The traditional analysis of log-linear learning focuses on explicitly computing the stationary distribution and hence requires a highly structured environment. Since the appeal of log-linear learning is not solely the explicit form of the stationary distribution, we ask to what degree one can relax the structural assumptions while maintaining that only potential function maximizers are stochastically stable. In this paper, we introduce slight variants of log-linear learning that provide the desired asymptotic guarantees while relaxing the structural assumptions to allow synchronous updates, time-varying action sets, and limited information available to the players. The motivation for these relaxations stems from the applicability of log-linear learning to the control of multi-agent systems, where such structural assumptions are unrealistic from an implementation perspective.
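For concreteness, the following is a minimal Python sketch of the baseline asynchronous log-linear learning rule that the paper takes as its starting point (not the relaxed variants the paper introduces). The two-player coordination game, its payoffs, the inverse-temperature parameter `beta`, and all function names are illustrative assumptions, not taken from the paper.

```python
import math
import random

# Illustrative 2-player, 2-action coordination game (a common-interest game,
# hence an exact potential game whose potential equals the shared payoff):
# both players get 1 if they match on action 0, 2 if they match on action 1,
# and 0 if they mismatch. Profile (1, 1) maximizes the potential.
ACTIONS = [0, 1]

def utility(player, profile):
    # Common-interest payoffs, so every player's utility is the potential.
    if profile[0] == profile[1]:
        return 1.0 if profile[0] == 0 else 2.0
    return 0.0

def log_linear_step(profile, beta):
    # One asynchronous update: a uniformly chosen player revises its action,
    # selecting each candidate action with probability proportional to
    # exp(beta * utility), holding the other player's action fixed.
    i = random.randrange(len(profile))
    weights = []
    for a in ACTIONS:
        trial = list(profile)
        trial[i] = a
        weights.append(math.exp(beta * utility(i, trial)))
    profile[i] = random.choices(ACTIONS, weights=weights)[0]
    return profile

def fraction_at_maximizer(beta=4.0, steps=20_000):
    profile = [random.choice(ACTIONS) for _ in range(2)]
    hits = 0
    for _ in range(steps):
        profile = log_linear_step(profile, beta)
        hits += profile == [1, 1]
    return hits / steps

if __name__ == "__main__":
    # As beta grows, the fraction of time spent at the potential maximizer
    # approaches 1, which is the kind of guarantee the abstract refers to.
    print(f"fraction of time at (1, 1): {fraction_at_maximizer():.3f}")
```

Note that this sketch bakes in exactly the structural assumptions the paper relaxes: one player updates at a time, action sets are fixed, and the updating player can evaluate its utility for every alternative action.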
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 788-808 |
| Number of pages | 21 |
| Journal | Games and Economic Behavior |
| Volume | 75 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jul 2012 |
| Externally published | Yes |
Keywords
- Distributed control
- Equilibrium selection
- Potential games
ASJC Scopus subject areas
- Finance
- Economics and Econometrics