Abstract
The overestimation in Deep Deterministic Policy Gradient (DDPG) caused by value-approximation error can destabilize policy training. Twin Delayed Deep Deterministic Policy Gradient (TD3) addresses the overestimation but suffers from underestimation instead. In this paper, we propose a Co-Regularization based Deep Deterministic (CoD2) policy gradient method to mitigate estimation bias. Two learners, one biased toward overestimation and the other toward underestimation, are trained with co-regularization to achieve this goal. CoD2 updates the overestimated and underestimated values conservatively for policy evaluation. Experimental results show that our method achieves performance comparable to that of other methods.
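To make the bias trade-off described above concrete, the sketch below contrasts an overestimation-prone bootstrap (DDPG-style, taking the larger of two critic estimates) with an underestimation-prone one (TD3-style clipped double-Q, taking the smaller), and blends them conservatively. The function name `cod2_style_target`, the blend weight `beta`, and the specific combination rule are illustrative assumptions, not the paper's actual CoD2 update, which is defined in the full text.

```python
def cod2_style_target(r, gamma, q1_next, q2_next, beta=0.5, done=False):
    """Hypothetical conservative TD target blending an overestimated
    and an underestimated bootstrap value (illustrative sketch only;
    the exact CoD2 rule is specified in the paper)."""
    # Overestimation-prone bootstrap: take the larger critic estimate
    # (a single-critic DDPG update tends to inherit this bias).
    q_over = max(q1_next, q2_next)
    # Underestimation-prone bootstrap: clipped double-Q as in TD3.
    q_under = min(q1_next, q2_next)
    # Conservative convex blend of the two oppositely biased estimates.
    q_blend = beta * q_under + (1.0 - beta) * q_over
    return r + gamma * (0.0 if done else q_blend)
```

With `beta=1.0` this reduces to the TD3 clipped target, and with `beta=0.0` to the optimistic single-max target; intermediate values trade off the two biases.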
Original language | English (US) |
---|---|
Article number | 108872 |
Journal | Pattern Recognition |
Volume | 131 |
DOIs | |
State | Published - Nov 2022 |
Bibliographical note
Funding Information: This work is partially supported by the National Key R&D Program of China (2021ZD0113203), the National Science Foundation of China (61976115, 61732006), the AI+ Project of NUAA (NZ2020012, 56XZA18009), and research project (50912040302).
Publisher Copyright:
© 2022 Elsevier Ltd
Keywords
- Co-training
- Deterministic policy gradient
- Overestimation
- Reinforcement learning
- Underestimation
ASJC Scopus subject areas
- Software
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence