Exploring Parameter Space in Reinforcement Learning

Thomas Rückstieß, Frank Sehnke, Tom Schaul, Daan Wierstra, Yi Sun, Jürgen Schmidhuber

Research output: Contribution to journal › Article › peer-review

53 Scopus citations

Abstract

This paper discusses parameter-based exploration methods for reinforcement learning. Parameter-based methods perturb parameters of a general function approximator directly, rather than adding noise to the resulting actions. Parameter-based exploration unifies reinforcement learning and black-box optimization, and has several advantages over action perturbation. We review two recent parameter-exploring algorithms: Natural Evolution Strategies and Policy Gradients with Parameter-Based Exploration. Both outperform state-of-the-art algorithms in several complex high-dimensional tasks commonly found in robot control. Furthermore, we describe how a novel exploration method, State-Dependent Exploration, can modify existing algorithms to mimic exploration in parameter space.
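To make the contrast concrete, the following is a minimal sketch of parameter-based exploration in the spirit of Policy Gradients with Parameter-Based Exploration (PGPE): the policy parameters are perturbed once per episode and the perturbation distribution is updated with a likelihood-ratio gradient, instead of injecting noise into every action. The quadratic reward function and all hyperparameters here are illustrative assumptions, not from the paper (the paper's experiments use robot-control benchmarks), and the adaptation of the exploration widths is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(theta):
    # Toy stand-in for a rollout return (an assumption for this sketch,
    # not a task from the paper): reward peaks when the perturbed
    # parameters reach a fixed target vector.
    target = np.array([1.0, -0.5, 0.25])
    return -np.sum((theta - target) ** 2)

# Parameter-based exploration: sample ONE parameter perturbation per
# episode and run the resulting deterministic policy, rather than
# adding independent noise to each action.
mu = np.zeros(3)      # mean of the parameter distribution
sigma = np.ones(3)    # per-parameter exploration width (kept fixed here)
alpha = 0.1           # learning rate

for _ in range(2000):
    eps = rng.normal(0.0, sigma)       # one perturbation per episode
    r_plus = episode_return(mu + eps)  # symmetric sampling, as in PGPE
    r_minus = episode_return(mu - eps)
    # Likelihood-ratio gradient estimate on the parameter distribution:
    # the return difference scales the sampled perturbation direction.
    mu += alpha * (r_plus - r_minus) / 2.0 * eps / (sigma ** 2)

print(np.round(mu, 2))  # mu drifts toward the toy target vector
```

Because each episode is run with a single fixed parameter sample, the resulting trajectories are smooth (no per-step action jitter), which is one of the practical advantages of parameter-space exploration the abstract alludes to.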

Original language: English (US)
Pages (from-to): 14-24
Number of pages: 11
Journal: Paladyn
Volume: 1
Issue number: 1
DOIs
State: Published - Mar 1 2010

Bibliographical note

Publisher Copyright:
© Thomas Rückstieß et al. 2010.

Keywords

  • exploration
  • optimization
  • policy gradients
  • reinforcement learning

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Developmental Neuroscience
  • Cognitive Neuroscience
  • Artificial Intelligence
  • Behavioral Neuroscience
