Quasi-online reinforcement learning for robots

Bram Bakker, Viktor Zhumatiy, Gabriel Gruener, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

31 Scopus citations

Abstract

This paper describes quasi-online reinforcement learning: while a robot is exploring its environment, in the background a probabilistic model of the environment is built on the fly as new experiences arrive; the policy is trained concurrently based on this model using an anytime algorithm. Prioritized sweeping, directed exploration, and transformed reward functions provide additional speed-ups. The robot quickly learns goal-directed policies from scratch, requiring few interactions with the environment and making efficient use of available computation time. From an outside perspective it learns the behavior online and in real time. We describe comparisons with standard methods and show the individual utility of each of the proposed techniques. © 2006 IEEE.
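The abstract outlines the core loop: a probabilistic model of the environment is refined after every new experience, and a prioritized-sweeping planner spends whatever computation time is available on model-based value backups. The Python sketch below is a minimal illustration of that combination under simplifying assumptions, not the authors' implementation; the tabular setting, the sizes n_states and n_actions, the discount factor gamma, the priority threshold theta, and the per-step backup budget max_backups are all illustrative placeholders.

import heapq
from collections import defaultdict

import numpy as np

n_states, n_actions = 25, 4   # assumed small discrete (grid-world-like) task
gamma, theta = 0.95, 1e-4     # discount factor, priority threshold (assumed)

Q = np.zeros((n_states, n_actions))
counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': visit count}
rewards = defaultdict(float)                     # (s, a) -> running mean reward
predecessors = defaultdict(set)                  # s' -> {(s, a) that led to s'}
pqueue = []                                      # max-priority queue (negated priorities)

def update_model(s, a, r, s2):
    """Refine the empirical transition/reward model as a new experience arrives."""
    counts[(s, a)][s2] += 1
    n = sum(counts[(s, a)].values())
    rewards[(s, a)] += (r - rewards[(s, a)]) / n   # incremental mean reward
    predecessors[s2].add((s, a))

def backup(s, a):
    """One full Bellman backup of Q(s, a) under the learned model."""
    n = sum(counts[(s, a)].values())
    exp_next = sum(c / n * np.max(Q[s2]) for s2, c in counts[(s, a)].items())
    return rewards[(s, a)] + gamma * exp_next

def prioritized_sweeping(s, a, max_backups=50):
    """Anytime planning step: spend a bounded budget of model-based backups,
    always processing the state-action pair with the largest Bellman error."""
    p = abs(backup(s, a) - Q[s, a])
    if p > theta:
        heapq.heappush(pqueue, (-p, s, a))
    for _ in range(max_backups):
        if not pqueue:
            break
        _, s, a = heapq.heappop(pqueue)
        Q[s, a] = backup(s, a)
        for ps, pa in predecessors[s]:             # propagate the change backwards
            p = abs(backup(ps, pa) - Q[ps, pa])
            if p > theta:
                heapq.heappush(pqueue, (-p, ps, pa))

In use, the robot would call update_model(s, a, r, s2) after each real interaction and then prioritized_sweeping(s, a) with whatever backup budget the remaining control-cycle time allows, which is what makes the scheme "quasi-online" from the robot's perspective.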
Original language: English (US)
Title of host publication: Proceedings - IEEE International Conference on Robotics and Automation
Pages: 2997-3002
Number of pages: 6
DOIs:
State: Published - Dec 27 2006
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
