TY - CONF
T1 - KL Divergence Regularized Learning Model for Multi-Agent Decision Making
AU - Park, Shinkyu
AU - Leonard, Naomi Ehrich
N1 - Generated from Scopus record by KAUST IRTS on 2022-09-13
PY - 2021/5/25
Y1 - 2021/5/25
AB - This paper investigates a multi-agent decision-making model in large population games. We consider a population of agents that select strategies of interaction with one another. The agents repeatedly revise their strategy choices according to the decision-making model. We examine the scenario in which the agents' strategy revision is subject to time delay, which we specify in the problem formulation by requiring the decision-making model to depend on delayed information about the agents' strategy choices. The main goal of this work is to find a multi-agent decision-making model under which the agents' strategy revision converges to equilibrium states, which in our population game formalism coincide with the Nash equilibrium set of the underlying games. As key contributions, we propose a new decision-making model, the Kullback-Leibler (KL) divergence regularized learning model, and we establish stability of the Nash equilibrium set under the new model. Using a numerical example and simulations, we illustrate the strong convergence properties of the new model.
UR - https://ieeexplore.ieee.org/document/9483414/
UR - http://www.scopus.com/inward/record.url?scp=85111906844&partnerID=8YFLogxK
U2 - 10.23919/ACC50511.2021.9483414
DO - 10.23919/ACC50511.2021.9483414
M3 - Conference contribution
SN - 9781665441971
SP - 4509
EP - 4514
BT - Proceedings of the American Control Conference
PB - Institute of Electrical and Electronics Engineers Inc.
ER -