Alleviating the estimation bias of deep deterministic policy gradient via co-regularization

Yao Li, Yu Hui Wang, Yao Zhong Gan, Xiao Yang Tan*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    5 Scopus citations

    Abstract

    The overestimation in Deep Deterministic Policy Gradient (DDPG) caused by value-approximation error can destabilize policy training. Twin Delayed Deep Deterministic Policy Gradient (TD3) addresses the overestimation but suffers from underestimation instead. In this paper, we propose a Co-Regularization based Deep Deterministic (CoD2) policy gradient method to mitigate the estimation bias. To achieve this goal, two learners, one biased toward overestimation and the other toward underestimation, are trained with co-regularization. The overestimated and underestimated values are updated conservatively in CoD2 for policy evaluation. Experimental results show that our method achieves performance comparable to that of other methods.
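    The abstract's idea can be illustrated with a minimal sketch. The exact CoD2 update rule is not given in this record, so the functions below are hypothetical: `coregularized_target` combines an overestimated and an underestimated Q-value conservatively (leaning toward the smaller estimate via a blend weight `beta`), and `coregularization_loss` adds a disagreement penalty (weight `lam`) that couples the two learners, in the spirit of co-training.

    ```python
    import numpy as np

    def coregularized_target(q_over, q_under, reward, gamma=0.99, beta=0.5):
        """Hypothetical conservative bootstrap target from two biased critics.

        beta=1 recovers the pessimistic min (TD3-style); beta=0 uses the
        plain average of the two estimates.
        """
        q_conservative = (beta * np.minimum(q_over, q_under)
                          + (1.0 - beta) * 0.5 * (q_over + q_under))
        return reward + gamma * q_conservative

    def coregularization_loss(q1_pred, q2_pred, target, lam=0.1):
        """TD errors for both learners plus a co-regularization term
        that penalizes disagreement between their predictions."""
        td1 = np.mean((q1_pred - target) ** 2)
        td2 = np.mean((q2_pred - target) ** 2)
        agree = np.mean((q1_pred - q2_pred) ** 2)  # drives the two learners together
        return td1 + td2 + lam * agree
    ```

    The blend weight `beta` and penalty weight `lam` here are illustrative knobs, not parameters reported in the paper.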

    Original language: English (US)
    Article number: 108872
    Journal: Pattern Recognition
    Volume: 131
    DOIs
    State: Published - Nov 2022

    Bibliographical note

    Funding Information:
    This work is partially supported by the National Key R&D Program of China (2021ZD0113203), the National Science Foundation of China (61976115, 61732006), the AI+ Project of NUAA (NZ2020012, 56XZA18009), and research project (50912040302).

    Publisher Copyright:
    © 2022 Elsevier Ltd

    Keywords

    • Co-training
    • Deterministic policy gradient
    • Overestimation
    • Reinforcement learning
    • Underestimation

    ASJC Scopus subject areas

    • Software
    • Signal Processing
    • Computer Vision and Pattern Recognition
    • Artificial Intelligence
