Revisiting log-linear learning: Asynchrony, completeness and payoff-based implementation

Jason R. Marden, Jeff S. Shamma*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

182 Scopus citations

Abstract

Log-linear learning is a learning algorithm that provides guarantees on the percentage of time that the action profile will be at a potential maximizer in potential games. The traditional analysis of log-linear learning focuses on explicitly computing the stationary distribution and hence requires a highly structured environment. Since the appeal of log-linear learning is not solely the explicit form of the stationary distribution, we seek to address to what degree one can relax the structural assumptions while maintaining that only potential function maximizers are stochastically stable. In this paper, we introduce slight variants of log-linear learning that provide the desired asymptotic guarantees while relaxing the structural assumptions to include synchronous updates, time-varying action sets, and limitations in information available to the players. The motivation for these relaxations stems from the applicability of log-linear learning to the control of multi-agent systems where these structural assumptions are unrealistic from an implementation perspective.
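For concreteness, the following is a minimal sketch of the standard (asynchronous) log-linear learning rule the paper builds on: at each step, one randomly chosen player revises its action with probabilities proportional to exp(utility / temperature), while all other players repeat their actions. The 2x2 identical-interest game, payoff values, and temperature below are illustrative assumptions, not taken from the paper.

    # Sketch of standard asynchronous log-linear learning in a
    # hypothetical 2x2 identical-interest potential game, where the
    # potential function doubles as every player's payoff.
    import numpy as np

    rng = np.random.default_rng(0)

    # Potential phi(a1, a2); its maximizer (1, 1) should be
    # stochastically stable as the temperature tau -> 0.
    phi = np.array([[0.0, 0.2],
                    [0.2, 1.0]])

    def utility(player, action, profile):
        # Identical-interest game: each player's payoff equals phi.
        a = list(profile)
        a[player] = action
        return phi[a[0], a[1]]

    def log_linear_step(profile, tau):
        # One asynchronous update: a uniformly chosen player revises
        # its action with probabilities proportional to exp(u / tau).
        i = rng.integers(2)
        payoffs = np.array([utility(i, a, profile) for a in range(2)])
        weights = np.exp((payoffs - payoffs.max()) / tau)  # stabilized softmax
        probs = weights / weights.sum()
        new = list(profile)
        new[i] = rng.choice(2, p=probs)
        return tuple(new)

    profile, tau, hits = (0, 0), 0.05, 0
    for t in range(20000):
        profile = log_linear_step(profile, tau)
        hits += profile == (1, 1)
    print(f"fraction of time at the potential maximizer: {hits / 20000:.3f}")

At a small temperature, the empirical fraction of time spent at the potential maximizer approaches one, which is the asymptotic guarantee the abstract refers to; the paper's contribution is preserving this guarantee under relaxed structural assumptions (synchronous updates, time-varying action sets, payoff-based information).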

Original language: English (US)
Pages (from-to): 788-808
Number of pages: 21
Journal: Games and Economic Behavior
Volume: 75
Issue number: 2
DOIs
State: Published - Jul 2012
Externally published: Yes

Keywords

  • Distributed control
  • Equilibrium selection
  • Potential games

ASJC Scopus subject areas

  • Finance
  • Economics and Econometrics

