Abstract
This tutorial article describes a dynamical systems framework, rooted in evolutionary game principles, for characterizing non-cooperative strategic interactions among large populations of agents with bounded rationality. It also overviews recent results that use passivity notions to characterize the stability of Nash-like equilibria. In our framework, each agent belongs to a population that prescribes to its members a strategy set and a strategy revision protocol. A so-called social state registers the proportion of agents in every population adopting each strategy, and a pre-selected dynamic payoff mechanism, specified by a payoff dynamics model (PDM), determines the payoff as a causal map of the social state. According to the framework, each agent adopts one strategy at a time, which it can repeatedly revise based on its current strategy and on the payoff and social-state information available to it. The PDM class considered in our framework can model, precisely or approximately, prevalent dynamic behaviors such as inertia and delays that are inherent to learning and network effects, and that cannot be captured by conventional memoryless payoff mechanisms (often referred to as population games). We organize the article in two main parts. The first introduces basic concepts prevailing in existing approaches in which a population game determines the payoff, while the second considers rather general PDM classes, of which every population game is a particular case. The latter part expounds a passivity-based methodology for characterizing convergence of the social state to Nash-like equilibria.
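As a concrete illustration of the memoryless special case mentioned above (a population game, where the payoff is a static function of the social state), the following Python sketch simulates the classical replicator dynamics for a single population with two strategies. The payoff matrix `A`, the initial social state, and the Euler discretization are illustrative assumptions, not taken from the article.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i * (F_i(x) - x.F(x)).

    x : social state (strategy proportions, sums to 1)
    A : payoff matrix of the memoryless population game F(x) = A x
    """
    F = A @ x        # payoff of each strategy at the current social state
    avg = x @ F      # population-average payoff
    return x + dt * x * (F - avg)

# Hypothetical coordination game: strategy 2 is the payoff-dominant equilibrium.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
x = np.array([0.4, 0.6])  # illustrative initial social state
for _ in range(5000):
    x = replicator_step(x, A)
```

In this example the social state converges to the pure Nash equilibrium in which all agents play strategy 2; a PDM, by contrast, would replace the static map `F(x) = A x` with a dynamical system driven by `x`.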
| Original language | English (US) |
| --- | --- |
| Title of host publication | 2019 IEEE 58th Conference on Decision and Control (CDC) |
| Publisher | IEEE |
| Pages | 6584-6601 |
| Number of pages | 18 |
| ISBN (Print) | 9781728113982 |
| DOIs | |
| State | Published - 2019 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01

Acknowledgements: The authors would like to thank Semih Kara (UMD) for suggesting corrections and improvements to this article.