AutoIncSFA and vision-based developmental learning for humanoid robots

Varun Raj Kompella, Leo Pape, Jonathan Masci, Mikhail Frank, Jürgen Schmidhuber

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding)



Humanoids have to deal with novel, unsupervised, high-dimensional visual input streams. Our new method AutoIncSFA learns to compactly represent such complex sensory input sequences by very few meaningful features corresponding to high-level spatio-temporal abstractions, such as "a person is approaching me" or "an object was toppled". We explain the advantages of AutoIncSFA over previous related methods, and show that the compact codes greatly facilitate the task of a reinforcement learner driving the humanoid to actively explore its world like a playing baby, maximizing intrinsic curiosity reward signals for reaching states corresponding to previously unpredicted AutoIncSFA features. © 2011 IEEE.
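The slowness principle underlying the method can be illustrated with a minimal batch, linear Slow Feature Analysis sketch (this is not the paper's AutoIncSFA, which works incrementally on high-dimensional video with autoencoder preprocessing; the toy signals and function names below are illustrative assumptions): whiten the input, then pick the directions in which the whitened signal changes most slowly over time.

```python
import numpy as np

def slow_feature_analysis(X, n_features=1):
    """Batch linear SFA sketch: find projections of X (time x dims)
    whose outputs vary most slowly over time."""
    # Center and whiten the input signals
    X = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    keep = eigval > 1e-10                    # drop degenerate directions
    W_whiten = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = X @ W_whiten
    # Slowness objective: minimize the variance of the temporal derivative
    dZ = np.diff(Z, axis=0)
    dval, dvec = np.linalg.eigh(np.cov(dZ, rowvar=False))
    # Smallest eigenvalues correspond to the slowest features
    return W_whiten @ dvec[:, :n_features]

# Toy data: a slow sine hidden inside faster linear mixtures
t = np.linspace(0, 4 * np.pi, 500)
slow, fast = np.sin(t), np.sin(11 * t)
X = np.column_stack([slow + 0.5 * fast,
                     fast - 0.3 * slow,
                     0.2 * fast])
W = slow_feature_analysis(X, n_features=1)
y = (X - X.mean(axis=0)) @ W
# The slowest extracted feature tracks the hidden slow source
corr = np.corrcoef(y[:, 0], slow)[0, 1]
```

In the paper's setting the analogous slow features are extracted incrementally from raw camera images, so a single slowly varying output can come to encode an abstraction like an approaching person.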
Original language: English (US)
Title of host publication: IEEE-RAS International Conference on Humanoid Robots
Number of pages: 8
State: Published - Dec 1 2011
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
