Multimodal parameter-exploring policy gradients

Frank Sehnke, Alex Graves, Christian Osendorfer, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Policy Gradients with Parameter-based Exploration (PGPE) is a novel model-free reinforcement learning method that alleviates the problem of high-variance gradient estimates encountered in standard policy gradient methods. It has been shown to drastically speed up convergence for several large-scale reinforcement learning tasks. However, the independent normal distributions used by PGPE to search through parameter space are inadequate for some problems with multimodal reward surfaces. This paper extends the basic PGPE algorithm to use multimodal mixture distributions for each parameter, while remaining efficient. Experimental results on the Rastrigin function and the inverted pendulum benchmark demonstrate the advantages of this modification, with faster convergence to better optima. © 2010 IEEE.
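The record contains no code, so the sketch below only illustrates the general idea described in the abstract: parameter-based exploration where each policy parameter is drawn from its own Gaussian mixture, evaluated here on the negative Rastrigin surface the abstract mentions. It is a minimal REINFORCE-style illustration under simplifying assumptions (equal, fixed mixture weights; a moving-average reward baseline; arbitrary constants), not the update rules derived in the paper.

import numpy as np

# Negative Rastrigin reward: highly multimodal, global optimum at theta = 0.
def reward(theta):
    return -(10.0 * theta.size + np.sum(theta**2 - 10.0 * np.cos(2.0 * np.pi * theta)))

rng = np.random.default_rng(0)
dim, K = 2, 2                           # parameter dimension; mixture components per parameter
mu = rng.uniform(-4.0, 4.0, (dim, K))   # component means
sigma = np.full((dim, K), 2.0)          # component standard deviations
alpha, baseline = 0.005, 0.0            # learning rate; moving-average reward baseline
idx = np.arange(dim)

for step in range(5000):
    # Parameter-based exploration: pick a component per parameter (equal weights),
    # sample the policy parameters from those components, and evaluate the reward.
    comp = rng.integers(K, size=dim)
    theta = rng.normal(mu[idx, comp], sigma[idx, comp])
    r = reward(theta)
    baseline = r if step == 0 else 0.99 * baseline + 0.01 * r
    adv = r - baseline

    # Likelihood-ratio gradient of the sampled components' log-density,
    # scaled by the baseline-subtracted reward (other components are untouched).
    diff = theta - mu[idx, comp]
    mu[idx, comp] += alpha * adv * diff / sigma[idx, comp] ** 2
    sigma[idx, comp] += alpha * adv * (diff**2 - sigma[idx, comp] ** 2) / sigma[idx, comp] ** 3
    sigma = np.clip(sigma, 0.05, None)  # keep exploration noise positive

print("learned component means per parameter:\n", mu)

Because each parameter keeps several candidate means, the search distribution can track more than one basin of a multimodal reward surface at once, which is the motivation the abstract gives for replacing PGPE's single independent normal distribution per parameter.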
Original language: English (US)
Title of host publication: Proceedings - 9th International Conference on Machine Learning and Applications, ICMLA 2010
Pages: 113-118
Number of pages: 6
DOIs
State: Published - Dec 1 2010
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
