Metric state space reinforcement learning for a vision-capable mobile robot

Viktor Zhumatiy, Faustino Gomez, Marcus Hutter, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

We address the problem of autonomously learning controllers for vision-capable mobile robots. We extend McCallum's (1995) Nearest-Sequence Memory algorithm to allow for general metrics over state-action trajectories. We demonstrate the feasibility of our approach by successfully running our algorithm on a real mobile robot. The algorithm is novel and unique in that it (a) explores the environment and learns directly on a mobile robot without using a hand-made computer model as an intermediate step, (b) does not require manual discretization of the sensor input space, (c) works in piecewise continuous perceptual spaces, and (d) copes with partial observability. Together this allows learning from much less experience compared to previous methods. © 2006 The authors.
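The core idea behind a Nearest-Sequence-Memory-style learner (McCallum, 1995) is to estimate action values by comparing the agent's current state-action history against stored trajectories and averaging the returns of the k closest matches; the paper's contribution is allowing a general metric over those trajectories. The sketch below is only illustrative: the specific metric (recency-discounted Euclidean distance on state vectors plus an action-mismatch penalty), the memory layout, and all parameter values are assumptions, not the authors' actual design.

```python
import math

def history_metric(hist_a, hist_b, depth=3, decay=0.5):
    """Illustrative (assumed) distance between two state-action histories.

    Each history is a list of (state_vector, action) pairs, newest last.
    Recent steps weigh more; differing actions add a fixed penalty.
    """
    d, w = 0.0, 1.0
    for i in range(1, depth + 1):
        if i > len(hist_a) or i > len(hist_b):
            break
        (sa, aa), (sb, ab) = hist_a[-i], hist_b[-i]
        d += w * math.dist(sa, sb)  # continuous sensor distance
        if aa != ab:
            d += w                  # action-mismatch penalty (assumed)
        w *= decay
    return d

def nsm_q_estimate(memory, current_history, action, k=2):
    """Average the stored returns of the k remembered experiences whose
    preceding trajectory is closest to the current one, for one action.

    `memory` holds (history, action, observed_return) triples.
    """
    candidates = [(history_metric(current_history, h), q)
                  for h, a, q in memory if a == action]
    candidates.sort(key=lambda c: c[0])
    nearest = candidates[:k]
    if not nearest:
        return 0.0  # no experience with this action yet
    return sum(q for _, q in nearest) / len(nearest)

# Tiny hand-made example: two near matches outvote one distant one.
memory = [
    ([([0.0, 0.0], "fwd")], "fwd", 1.0),
    ([([0.1, 0.0], "fwd")], "fwd", 0.8),
    ([([5.0, 5.0], "left")], "fwd", -1.0),
]
q = nsm_q_estimate(memory, [([0.05, 0.0], "fwd")], "fwd", k=2)
# q is the mean of the two nearest returns, 0.9
```

Because values are read off raw stored experience rather than a fitted model, this kind of learner needs no manual discretization of the sensor space, which matches the abstract's claims (b) and (c).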
Original language: English (US)
Title of host publication: Intelligent Autonomous Systems 9, IAS 2006
Pages: 272-281
Number of pages: 10
State: Published - Dec 1 2006
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
