Abstract
We consider the problem of distributed convergence to efficient outcomes in coordination games through dynamics based on aspiration learning. Our first contribution is the characterization of the asymptotic behavior of the induced Markov chain of the iterated process in terms of an equivalent finite-state Markov chain. We then characterize explicitly the behavior of the proposed aspiration learning in a generalized version of coordination games, examples of which include network formation and common-pool games. In particular, we show that in generic coordination games the frequency at which an efficient action profile is played can be made arbitrarily close to one. Although convergence to efficient outcomes is desirable, in several coordination games, such as common-pool games, attainability of fair outcomes, i.e., sequences of play in which players experience highly rewarding returns equally often, may also be of special interest. To this end, we demonstrate through analysis and simulations that aspiration learning also establishes fair outcomes in all symmetric coordination games, including common-pool games.
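As a rough illustration of the kind of dynamics the abstract describes, the Python sketch below simulates satisficing-style aspiration learning in a symmetric 2x2 common-interest coordination game: each player keeps an aspiration level, tends to switch actions when its payoff falls short of that level, and updates the aspiration as a running average of realized payoffs. The payoff matrix and the parameters `LAMBDA`, `SWITCH_PROB`, and `TREMBLE` are illustrative assumptions, not the paper's exact update rule.

```python
import random

# A minimal sketch of aspiration-based (satisficing) play in a symmetric
# 2x2 common-interest coordination game. The payoff matrix and the
# parameters LAMBDA, SWITCH_PROB, and TREMBLE are illustrative
# assumptions, not the paper's exact dynamics.
PAYOFF = [[1.0, 0.0],
          [0.0, 0.6]]   # coordinating on action 0 is the efficient profile
LAMBDA = 0.95           # aspiration smoothing factor (assumed)
SWITCH_PROB = 0.5       # chance a dissatisfied player switches (assumed)
TREMBLE = 0.01          # small experimentation probability (assumed)
STEPS = 50_000

actions = [random.randint(0, 1), random.randint(0, 1)]
aspirations = [0.0, 0.0]
efficient_count = 0

for _ in range(STEPS):
    # Both players receive the common payoff of the joint action.
    payoff = PAYOFF[actions[0]][actions[1]]
    for i in range(2):
        # A player whose payoff falls short of its aspiration level is
        # dissatisfied and switches with some probability; satisfied
        # players stay put, apart from a small random tremble.
        dissatisfied = payoff < aspirations[i]
        if (dissatisfied and random.random() < SWITCH_PROB) \
                or random.random() < TREMBLE:
            actions[i] = 1 - actions[i]
        # The aspiration level tracks a running average of realized payoffs.
        aspirations[i] = LAMBDA * aspirations[i] + (1 - LAMBDA) * payoff
    if actions == [0, 0]:
        efficient_count += 1

print(f"fraction of steps at the efficient profile: "
      f"{efficient_count / STEPS:.3f}")
```

Tracking the empirical frequency of the efficient profile, as the last line does, mirrors the abstract's notion of efficiency being measured by how often the efficient action profile is played along the sequence of play.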
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 465-490 |
| Number of pages | 26 |
| Journal | SIAM Journal on Control and Optimization |
| Volume | 51 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2013 |
| Externally published | Yes |
Keywords
- Aspiration learning
- Coordination games
- Game theory
ASJC Scopus subject areas
- Control and Optimization
- Applied Mathematics