Improving robot vision models for object detection through interaction

Jürgen Leitner, Alexander Förster, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models from a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push, and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the right action, i.e., the action with the best possible improvement of the detector.
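The action-selection idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the action names come from the abstract, but the per-frame detection outcomes, the baseline score, and the scoring function are invented placeholder data.

```python
# Mocked per-frame detection outcomes (1 = object correctly detected)
# gathered after performing each manipulation action. The numbers are
# purely illustrative, not results from the paper.
outcomes = {
    "poke":    [1, 0, 1, 1, 0, 1, 0, 1],   # 5/8 frames detected
    "push":    [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 frames detected
    "pick-up": [1, 1, 1, 1, 0, 1, 1, 1],   # 7/8 frames detected
}

def detector_score(samples):
    # Fraction of frames in which the detector found the object.
    return sum(samples) / len(samples)

def best_action(baseline, outcomes):
    # Improvement = post-action detector score minus the baseline score;
    # the robot selects the action with the largest measured improvement.
    improvements = {a: detector_score(s) - baseline
                    for a, s in outcomes.items()}
    return max(improvements, key=improvements.get)

print(best_action(0.5, outcomes))  # prints "pick-up"
```

Here pick-up wins because it yields the highest post-action detection rate relative to the assumed baseline; in the paper the scores would come from re-evaluating the CGP-evolved detector on images collected after each manipulation.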
Original language: English (US)
Title of host publication: Proceedings of the International Joint Conference on Neural Networks
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3355-3362
Number of pages: 8
ISBN (Print): 9781479914845
DOIs
State: Published - Sep 3 2014
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14

