Hierarchical reinforcement learning with subpolicies specializing for learned subgoals

Bram Bakker, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This paper describes a method for hierarchical reinforcement learning in which high-level policies automatically discover subgoals, and low-level policies learn to specialize for different subgoals. Subgoals are represented as desired abstract observations which cluster raw input data. High-level value functions cover the state space at a coarse level; low-level value functions cover only parts of the state space at a fine-grained level. An experiment shows that this method outperforms several flat reinforcement learning methods. A second experiment shows how problems of partial observability due to observation abstraction can be overcome using high-level policies with memory.
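The two-level scheme the abstract describes can be sketched as a small tabular example: a high-level policy selects subgoals expressed as desired abstract observations, and one low-level Q-table per subgoal acts at the fine-grained level until the abstract observation changes. The corridor environment, the clustering function, the intrinsic-reward rule, and all hyperparameters below are illustrative assumptions, not the paper's exact algorithm or experiments.

```python
import random

# Minimal two-level RL sketch in the spirit of the abstract above.
# All environment details and reward shaping are hypothetical.

random.seed(0)

N = 12                                  # 1-D corridor, goal at state N-1
def abstract_obs(s):
    return s // 4                       # coarse clustering of raw states

n_abs = abstract_obs(N - 1) + 1         # number of abstract observations
ACTIONS = (-1, +1)
alpha, gamma, eps = 0.2, 0.95, 0.2

# High-level Q over abstract observations; its actions are subgoal choices.
Q_hi = [[0.0] * n_abs for _ in range(n_abs)]
# One low-level Q-table per subgoal, defined over raw states.
Q_lo = [[[0.0, 0.0] for _ in range(N)] for _ in range(n_abs)]

def eps_greedy(qrow):
    if random.random() < eps:
        return random.randrange(len(qrow))
    return max(range(len(qrow)), key=lambda a: qrow[a])

for episode in range(300):
    s = 0
    while s != N - 1:
        z = abstract_obs(s)
        g = eps_greedy(Q_hi[z])          # high level selects a subgoal
        r_hi, steps = 0.0, 0
        # The subpolicy for subgoal g acts until the abstract obs changes.
        while abstract_obs(s) == z and s != N - 1 and steps < 50:
            a = eps_greedy(Q_lo[g][s])
            s2 = min(max(s + ACTIONS[a], 0), N - 1)
            r = 1.0 if s2 == N - 1 else -0.01
            r_hi += r
            # Intrinsic reward when the subpolicy reaches its subgoal.
            r_in = 1.0 if (abstract_obs(s2) == g != z) else r
            Q_lo[g][s][a] += alpha * (
                r_in + gamma * max(Q_lo[g][s2]) - Q_lo[g][s][a])
            s, steps = s2, steps + 1
        z2, done = abstract_obs(s), s == N - 1
        # High-level update over the coarse transition z -> z2.
        target = r_hi if done else r_hi + gamma * max(Q_hi[z2])
        Q_hi[z][g] += alpha * (target - Q_hi[z][g])
```

The high-level table covers only the coarse abstract states, while each subpolicy learns fine-grained behavior for its own subgoal, mirroring the division of labor the abstract describes.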
Original language: English (US)
Title of host publication: Proceedings of the IASTED International Conference on Neural Networks and Computational Intelligence
Pages: 125-130
Number of pages: 6
State: Published - Dec 1 2004
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14

