SMIX(λ): Enhancing Centralized Value Functions for Cooperative Multiagent Reinforcement Learning

Xinghu Yao, Chao Wen, Yuhui Wang, Xiaoyang Tan*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    8 Scopus citations

    Abstract

    Learning a stable and generalizable centralized value function (CVF) is a crucial but challenging task in multiagent reinforcement learning (MARL), as it has to deal with the issue that the joint action space grows exponentially with the number of agents in such scenarios. This article proposes an approach, named SMIX(λ), that uses off-policy training to achieve this by avoiding the greedy assumption commonly made in CVF learning. As importance sampling for such off-policy training is both computationally costly and numerically unstable, we propose to use the λ-return as a proxy to compute the temporal difference (TD) error. With this new objective, we adopt a modified QMIX network structure as the base to train our model. By further connecting it with the Q(λ) approach from a unified expectation-correction viewpoint, we show that the proposed SMIX(λ) is equivalent to Q(λ) and hence shares its convergence properties, while not suffering from the aforementioned curse-of-dimensionality problem inherent in MARL. Experiments on the StarCraft Multiagent Challenge (SMAC) benchmark demonstrate that our approach not only outperforms several state-of-the-art MARL methods by a large margin but also can be used as a general tool to improve the overall performance of other centralized training with decentralized execution (CTDE)-type algorithms by enhancing their CVFs.
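    The core idea in the abstract, replacing importance-sampled off-policy corrections with a λ-return target for the TD error, can be sketched as a simple backward recursion. The function below is an illustrative sketch only (the names, episode layout, and default hyperparameters are assumptions, not the authors' implementation): it computes λ-return targets G_t = r_t + γ[(1 − λ)V(s_{t+1}) + λG_{t+1}] for one episode, which a QMIX-style mixed value network could then regress toward.

    ```python
    def lambda_returns(rewards, values, gamma=0.99, lam=0.8):
        """Compute per-step lambda-return targets for one episode.

        rewards: list of r_t for t = 0..T-1
        values:  list of bootstrap estimates V(s_{t+1}), length T
                 (values[T-1] is the terminal bootstrap, typically 0.0)

        Backward recursion:
            G_{T-1} = r_{T-1} + gamma * V(s_T)
            G_t     = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1})
        """
        T = len(rewards)
        G = [0.0] * T
        G[T - 1] = rewards[T - 1] + gamma * values[T - 1]
        for t in range(T - 2, -1, -1):
            G[t] = rewards[t] + gamma * ((1 - lam) * values[t] + lam * G[t + 1])
        return G
    ```

    Setting lam=0 recovers the one-step TD target, while lam=1 (with a zero terminal bootstrap) recovers the Monte Carlo return, which is the usual bias-variance trade-off the λ-return interpolates over.
    
    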

    Original language: English (US)
    Pages (from-to): 52-63
    Number of pages: 12
    Journal: IEEE Transactions on Neural Networks and Learning Systems
    Volume: 34
    Issue number: 1
    State: Published - Jan 1 2023

    Bibliographical note

    Funding Information:
    This work was supported in part by the National Science Foundation of China under Grant 61976115 and Grant 61732006, in part by the AI+ Project of the Nanjing University of Aeronautics and Astronautics (NUAA) under Grant XZA20005 and Grant 56XZA18009, in part by the Research Project under Grant 315025305, and in part by the Graduate Innovation Foundation of NUAA under Grant Kfjj20191608.

    Publisher Copyright:
    © 2012 IEEE.

    Keywords

    • Deep reinforcement learning (DRL)
    • multiagent reinforcement learning (MARL)
    • multiagent systems
    • StarCraft Multiagent Challenge (SMAC)

    ASJC Scopus subject areas

    • Software
    • Computer Science Applications
    • Computer Networks and Communications
    • Artificial Intelligence
