This paper describes how learning tasks in partially observable mobile robot domains can be solved by combining reinforcement learning with an unsupervised "event extraction" mechanism called ARAVQ (Adaptive Resource Allocating Vector Quantization). ARAVQ transforms the robot's continuous, noisy, high-dimensional sensory input stream into a compact sequence of high-level events. The resulting hierarchical control system uses an LSTM recurrent neural network as the reinforcement learning component, which learns high-level actions in response to the history of high-level events. The high-level actions select low-level behaviors that take care of real-time motor control. Illustrative experiments based on a Khepera mobile robot simulator are presented.
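The event-extraction idea described above can be sketched in code. The following is a minimal, hedged illustration of an ARAVQ-style vector quantizer, not the paper's implementation: it keeps a short buffer of recent sensor vectors, allocates a new model vector when the buffer's moving average is both stable (low spread) and novel (farther from every existing model vector than the spread plus a threshold `delta`), and emits a discrete event whenever the winning model vector changes. All parameter names and values here are illustrative assumptions.

```python
import numpy as np
from collections import deque

class ARAVQ:
    """Sketch of an ARAVQ-style event extractor (illustrative, not the
    paper's exact algorithm).

    A new model vector is allocated when the moving average of recent
    inputs is farther from every existing model vector than the inputs'
    own spread around that average, plus a novelty threshold delta.
    """

    def __init__(self, buffer_size=3, delta=3.0):
        self.buffer = deque(maxlen=buffer_size)
        self.delta = delta
        self.models = []     # allocated model vectors
        self.current = None  # index of the current winning model vector

    def step(self, x):
        """Feed one sensor vector; return a new event index or None."""
        self.buffer.append(np.asarray(x, dtype=float))
        if len(self.buffer) < self.buffer.maxlen:
            return None  # wait until the buffer is full
        avg = np.mean(self.buffer, axis=0)
        # average distance of buffered inputs to their moving average
        spread = np.mean([np.linalg.norm(b - avg) for b in self.buffer])
        if self.models:
            dists = [np.linalg.norm(m - avg) for m in self.models]
            best = int(np.argmin(dists))
            if dists[best] > spread + self.delta:
                # stable and novel: allocate a new model vector
                self.models.append(avg.copy())
                best = len(self.models) - 1
        else:
            self.models.append(avg.copy())
            best = 0
        if best != self.current:  # winner changed: emit a high-level event
            self.current = best
            return best
        return None
```

Feeding a long run of one sensor reading followed by a run of a clearly different reading yields exactly two events, one per stable situation; the LSTM-based reinforcement learner then operates on such compact event sequences instead of the raw sensor stream.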
Published in: IEEE International Conference on Intelligent Robots and Systems, 2002.