Aspiration learning in coordination games

Georgios C. Chasparis*, Ari Arapostathis, Jeff S. Shamma

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



We consider the problem of distributed convergence to efficient outcomes in coordination games through dynamics based on aspiration learning. Our first contribution is the characterization of the asymptotic behavior of the induced Markov chain of the iterated process in terms of an equivalent finite-state Markov chain. We then characterize explicitly the behavior of the proposed aspiration learning in a generalized version of coordination games, examples of which include network formation and common-pool games. In particular, we show that in generic coordination games the frequency at which an efficient action profile is played can be made arbitrarily large. Although convergence to efficient outcomes is desirable, in several coordination games, such as common-pool games, attainability of fair outcomes, i.e., sequences of plays at which players experience highly rewarding returns with the same frequency, might also be of special interest. To this end, we demonstrate through analysis and simulations that aspiration learning also establishes fair outcomes in all symmetric coordination games, including common-pool games.
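As an illustration only, and not the authors' exact dynamics, the flavor of aspiration learning in a coordination game can be sketched as follows: each player keeps an aspiration level that tracks a running average of received payoffs, stays with its action when the payoff meets the aspiration, switches with some probability when dissatisfied, and occasionally trembles. The payoff matrix, parameter values, and tremble rule below are all assumptions made for this sketch.

```python
import random

# Hypothetical 2x2 symmetric coordination game (both players receive the
# same payoff); profile (1, 1) is the efficient action profile.
PAYOFF = [[1.0, 0.0],
          [0.0, 2.0]]

def simulate(steps=20000, eps=0.6, h=0.01, delta=0.01, seed=0):
    """Return the fraction of steps at which the efficient profile is played.

    eps   -- switching probability when a player is dissatisfied
    h     -- aspiration-update step size (running-average rate)
    delta -- small tremble probability (random re-selection of an action)
    """
    rng = random.Random(seed)
    actions = [0, 1]          # current actions of players 1 and 2
    aspirations = [0.5, 0.5]  # initial aspiration levels
    efficient_count = 0
    for _ in range(steps):
        u = PAYOFF[actions[0]][actions[1]]  # common payoff this round
        for i in range(2):
            if rng.random() < delta:
                # Tremble: pick an action at random.
                actions[i] = rng.randrange(2)
            elif u < aspirations[i] and rng.random() < eps:
                # Dissatisfied: switch to the other action.
                actions[i] = 1 - actions[i]
            # Aspiration drifts toward the payoff actually received.
            aspirations[i] += h * (u - aspirations[i])
        if actions == [1, 1]:
            efficient_count += 1
    return efficient_count / steps
```

In this toy run the efficient profile is played most of the time: once aspirations rise above the inefficient coordination payoff, players become dissatisfied at the inefficient equilibrium and wander back to the efficient one, which is consistent in spirit with the paper's claim that the frequency of efficient play can be made arbitrarily large.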

Original language: English (US)
Pages (from-to): 465-490
Number of pages: 26
Journal: SIAM Journal on Control and Optimization
Issue number: 1
State: Published - 2013
Externally published: Yes


Keywords

  • Aspiration learning
  • Coordination games
  • Game theory

ASJC Scopus subject areas

  • Control and Optimization
  • Applied Mathematics

