Reinforcement learning in partially observable mobile robot domains using unsupervised event extraction

Bram Bakker, Fredrik Linåker, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Scopus citations

Abstract

This paper describes how learning tasks in partially observable mobile robot domains can be solved by combining reinforcement learning with an unsupervised learning "event extraction" mechanism, called ARAVQ. ARAVQ transforms the robot's continuous, noisy, high-dimensional sensory input stream into a compact sequence of high-level events. The resulting hierarchical control system uses an LSTM recurrent neural network as the reinforcement learning component, which learns high-level actions in response to the history of high-level events. The high-level actions select low-level behaviors which take care of real-time motor control. Illustrative experiments based on a Khepera mobile robot simulator are presented.
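To make the event-extraction idea concrete, the following is a minimal sketch of an ARAVQ-style quantizer in the spirit of the abstract: a continuous input stream is averaged over a sliding window, a new model vector is allocated when the averaged input is both stable and sufficiently far from all existing model vectors, and an "event" is emitted whenever the nearest model vector changes. The function name, parameter names, and threshold values (`window`, `delta`, `epsilon`) are hypothetical choices for illustration, not the paper's actual implementation.

```python
import numpy as np

def extract_events(stream, window=5, delta=2.0, epsilon=0.5):
    """Sketch of ARAVQ-style event extraction (hypothetical parameters).

    Averages the input over a sliding window; allocates a new model
    vector when the averaged input is stable (all window samples within
    epsilon of their mean) and novel (more than delta from every existing
    model vector). Records an "event" whenever the index of the nearest
    model vector changes.
    """
    models = []   # allocated model vectors (prototypes)
    events = []   # sequence of high-level events (model indices)
    last = None
    for t in range(window, len(stream) + 1):
        win = np.asarray(stream[t - window:t], dtype=float)
        x = win.mean(axis=0)
        # Stability criterion: every sample in the window is close to the mean.
        stable = np.max(np.linalg.norm(win - x, axis=1)) < epsilon
        if models:
            dists = [np.linalg.norm(x - m) for m in models]
            nearest = int(np.argmin(dists))
            novel = dists[nearest] > delta
        else:
            nearest, novel = -1, True
        if stable and novel:
            # Allocate a new model vector for this unfamiliar, stable input.
            models.append(x.copy())
            nearest = len(models) - 1
        if nearest >= 0 and nearest != last:
            events.append(nearest)  # nearest prototype changed: emit an event
            last = nearest
    return models, events
```

On a stream that dwells in one sensory region and then moves to another, this sketch allocates one model vector per region and emits a short event sequence marking the transition, which is the kind of compact high-level input the paper's LSTM reinforcement learner operates on.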
Original language: English (US)
Title of host publication: IEEE International Conference on Intelligent Robots and Systems
Pages: 938-943
Number of pages: 6
State: Published - Jan 1 2002
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
