KL Divergence Regularized Learning Model for Multi-Agent Decision Making

Shinkyu Park, Naomi Ehrich Leonard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Scopus citations

Abstract

This paper investigates a multi-agent decision-making model in large population games. We consider a population of agents that select strategies for interacting with one another. Agents repeatedly revise their strategy choices according to the decision-making model. We examine the scenario in which the agents' strategy revision is subject to time delay; in the problem formulation, this is specified by requiring the decision-making model to depend on delayed information about the agents' strategy choices. The main goal of this work is to find a multi-agent decision-making model under which the agents' strategy revision converges to equilibrium states, which in our population game formalism coincide with the Nash equilibrium set of the underlying games. As key contributions, we propose a new decision-making model called the Kullback-Leibler (KL) divergence regularized learning model, and we establish stability of the Nash equilibrium set under the new model. Using a numerical example and simulations, we illustrate the strong convergence properties of our new model.
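As a rough illustration of the ingredients the abstract describes (KL-divergence regularized strategy revision, delayed strategy information, convergence to Nash equilibrium), the following Python sketch shows one plausible instantiation, not the paper's exact model. The rock-paper-scissors game, the fixed uniform reference distribution q, the regularization weight eta, the delay length, and the Euler discretization are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact model): agents revise toward
# the maximizer of expected payoff minus a KL-divergence penalty, while
# observing only a delayed snapshot of the population state.
import numpy as np

# Rock-paper-scissors payoff matrix; its unique Nash equilibrium is the
# uniform mixture (1/3, 1/3, 1/3).
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def kl_regularized_choice(payoff, q, eta):
    """Maximize <y, payoff> - eta * KL(y || q) over the simplex.

    The maximizer has the closed form y_i ∝ q_i * exp(payoff_i / eta).
    """
    logits = np.log(q) + payoff / eta
    logits -= logits.max()              # for numerical stability
    y = np.exp(logits)
    return y / y.sum()

def simulate(steps=5000, dt=0.01, delay_steps=50, eta=0.5):
    rng = np.random.default_rng(0)
    x = rng.dirichlet(np.ones(3))       # random initial population state
    q = np.full(3, 1.0 / 3.0)           # fixed uniform reference distribution
    buffer = [x.copy()] * (delay_steps + 1)
    for _ in range(steps):
        x_delayed = buffer[0]           # agents see stale information
        target = kl_regularized_choice(A @ x_delayed, q, eta)
        x = (1.0 - dt) * x + dt * target  # Euler step of dx/dt = target - x
        buffer.append(x.copy())
        buffer.pop(0)
    return x

print(simulate())  # approaches the Nash equilibrium (1/3, 1/3, 1/3)
```

In this sketch, the KL term keeps each revision close to the reference distribution, so the revision rule reduces to a logit-style choice; despite the stale payoff information, the population state settles at the uniform Nash equilibrium of the game. The paper's model and its stability analysis are more general than this toy example.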
Original language: English (US)
Title of host publication: Proceedings of the American Control Conference
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4509-4514
Number of pages: 6
ISBN (Print): 9781665441971
DOIs
State: Published - May 25 2021
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-13

