Aspiration learning in coordination games

Georgios C. Chasparis, Jeff S. Shamma, Ari Arapostathis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Scopus citations


We consider the problem of distributed convergence to efficient outcomes in coordination games through payoff-based learning dynamics, namely aspiration learning. The proposed learning scheme assumes that players reinforce well-performing actions by continuing to play them, and otherwise randomize among the alternative actions. Our first contribution is the characterization of the asymptotic behavior of the Markov chain induced by the iterated process via an equivalent finite-state Markov chain, which simplifies previously introduced analyses of aspiration learning. We then explicitly characterize the behavior of the proposed aspiration learning in a generalized version of so-called coordination games, an example of which is network formation games. In particular, we show that in coordination games the expected percentage of time that the efficient action profile is played can become arbitrarily large.
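To illustrate the kind of dynamics the abstract describes, the following is a minimal sketch of a satisficing (aspiration-based) learning rule in a two-player, two-action coordination game. The payoff matrix, the aspiration update (a running average of received payoffs), the initial aspiration levels, the step size `eps`, and the tremble probability `lam` are all illustrative assumptions for demonstration, not the paper's exact scheme or parameters.

```python
import random

# Common-payoff coordination game: (0, 0) is the efficient profile,
# (1, 1) a less efficient coordination outcome, and miscoordination
# pays nothing. (Illustrative payoffs, not from the paper.)
PAYOFF = {(0, 0): 2.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}

def run(steps=20000, eps=0.05, lam=0.01, seed=1):
    rng = random.Random(seed)
    actions = [rng.randrange(2), rng.randrange(2)]
    # Start aspirations at the maximum payoff (an assumption), so
    # players are initially dissatisfied unless play is efficient.
    aspirations = [2.0, 2.0]
    efficient_count = 0
    for _ in range(steps):
        u = PAYOFF[tuple(actions)]
        if actions == [0, 0]:
            efficient_count += 1
        for i in range(2):
            # A player is satisfied when the payoff meets the aspiration.
            satisfied = u >= aspirations[i]
            # Aspiration tracks a running average of received payoffs.
            aspirations[i] += eps * (u - aspirations[i])
            if not satisfied or rng.random() < lam:
                # Dissatisfied (or a rare tremble): randomize the action.
                actions[i] = rng.randrange(2)
            # Satisfied players simply repeat their previous action.
    return efficient_count / steps

share = run()
print(f"fraction of time at efficient profile: {share:.2f}")
```

With small trembles, play escapes the inefficient coordination outcome (a deviation leaves payoffs below aspirations, triggering search) while the efficient profile keeps both players satisfied, so the fraction of time spent there is large, in the spirit of the paper's main result.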

Original language: English (US)
Title of host publication: 2010 49th IEEE Conference on Decision and Control, CDC 2010
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Print): 9781424477456
State: Published - 2010
Externally published: Yes
Event: 49th IEEE Conference on Decision and Control, CDC 2010 - Atlanta, United States
Duration: Dec 15, 2010 - Dec 17, 2010

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
ISSN (Print): 0743-1546
ISSN (Electronic): 2576-2370


Conference: 49th IEEE Conference on Decision and Control, CDC 2010
Country/Territory: United States

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization


