This paper describes quasi-online reinforcement learning: while a robot explores its environment, a probabilistic model of the environment is built on the fly in the background as new experiences arrive; the policy is trained concurrently on this model using an anytime algorithm. Prioritized sweeping, directed exploration, and transformed reward functions provide additional speed-ups. The robot quickly learns goal-directed policies from scratch, requiring few interactions with the environment and making efficient use of available computation time. From an outside perspective, it learns the behavior online and in real time. We describe comparisons with standard methods and show the individual utility of each of the proposed techniques. © 2006 IEEE.
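The scheme in the abstract — learn a tabular model from each real transition, then spend a bounded planning budget per step doing prioritized-sweeping backups on that model — can be sketched as follows. This is a minimal illustration on a hypothetical deterministic chain task (the paper's robot domain, model representation, and exploration bonus are not specified here); the visit-count-based action choice stands in for directed exploration.

```python
import heapq
from collections import defaultdict

# Hypothetical stand-in environment: a 10-state chain, action 1 moves
# right, action 0 moves left; reward 1.0 on reaching the goal state.
N, GOAL, GAMMA, THETA = 10, 9, 0.95, 1e-4

def env_step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)

trans = defaultdict(lambda: defaultdict(int))  # model: (s,a) -> {s': count}
rew = defaultdict(float)                       # model: (s,a) -> mean reward
nvis = defaultdict(int)                        # visit counts per (s,a)
preds = defaultdict(set)                       # s' -> predecessor (s,a) pairs
Q = defaultdict(float)
pq = []                                        # max-priority queue (negated)

def q_backup(s, a):
    # Full expected backup over the learned empirical model.
    total = sum(trans[(s, a)].values())
    if total == 0:
        return Q[(s, a)]
    exp_v = sum(c / total * max(Q[(s2, 0)], Q[(s2, 1)])
                for s2, c in trans[(s, a)].items())
    return rew[(s, a)] + GAMMA * exp_v

def push(s, a):
    # Queue a backup whose expected value change exceeds the threshold.
    p = abs(q_backup(s, a) - Q[(s, a)])
    if p > THETA:
        heapq.heappush(pq, (-p, s, a))

def observe(s, a, r, s2):
    # Update the model from one real experience, then queue its backup.
    trans[(s, a)][s2] += 1
    nvis[(s, a)] += 1
    rew[(s, a)] += (r - rew[(s, a)]) / nvis[(s, a)]
    preds[s2].add((s, a))
    push(s, a)

def sweep(budget=50):
    # Anytime planning: a bounded number of prioritized backups per
    # real step; predecessors of changed states are re-queued.
    for _ in range(budget):
        if not pq:
            return
        _, s, a = heapq.heappop(pq)
        old = Q[(s, a)]
        Q[(s, a)] = q_backup(s, a)
        if abs(Q[(s, a)] - old) > THETA:
            for ps, pa in preds[s]:
                push(ps, pa)

# Quasi-online loop: act, record the experience, plan within a budget.
s = 0
for _ in range(500):
    # Directed exploration stand-in: prefer the less-visited action,
    # break ties toward the higher current Q-value.
    a = min((0, 1), key=lambda b: (nvis[(s, b)], -Q[(s, b)]))
    s2, r = env_step(s, a)
    observe(s, a, r, s2)
    sweep(budget=50)
    s = 0 if s2 == GOAL else s2  # reset episode at the goal

greedy = [max((0, 1), key=lambda b: Q[(s_, b)]) for s_ in range(N - 1)]
print(greedy)  # greedy action per non-goal state (1 = move right)
```

Because planning runs against the learned model rather than the environment, the compute budget per real step can be tuned to the robot's control cycle, which is what makes the behavior appear online and real-time from the outside.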
Original language: English (US)
Title of host publication: Proceedings - IEEE International Conference on Robotics and Automation
Number of pages: 6
State: Published - Dec 27 2006