Internal models of reaching and grasping

Claudio Castellini, Francesco Orabona, Giorgio Metta, Giulio Sandini

Research output: Contribution to journal › Article › peer-review

8 Scopus citations

Abstract

One of the most distinguishing features of cognitive systems is the ability to predict the future course of actions and the results of ongoing behaviors, and in general to plan actions well in advance. Neuroscience has begun examining the neural basis of these skills through behavioral and animal studies, and it is now relatively well understood that the brain builds models of the physical world through learning. These models are sometimes called 'internal models', meaning that they are the internal rehearsal (or simulation) of the world enacted by the brain. In this paper we investigate the possibility of building internal models of human behaviors with a learning machine that has access to information in principle similar to that used by the brain when learning similar tasks. In particular, we concentrate on models of reaching and grasping, and we report on an experiment in which biometric data collected from human users during grasping was used to train a support vector machine. We then assess to what degree the models built by the machine are faithful representations of the actual human behaviors. The results indicate that the machine is able to predict human reaching and grasping reasonably well, and that prior knowledge of the object to be grasped improves the machine's performance at no extra computational cost. © 2007 Taylor & Francis Group, LLC.
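The abstract's core technique is training a support vector machine on biometric data recorded during grasping. As a minimal sketch of that idea (not the authors' actual setup), the toy below trains a linear SVM by Pegasos-style subgradient descent on hinge loss to classify grasp types from two hypothetical hand-tracking features; the feature names, data, and labels are invented for illustration only.

```python
import random

def train_linear_svm(data, labels, lam=0.01, epochs=500, seed=0):
    """Train a linear SVM (hinge loss, L2 regularization) with
    Pegasos-style subgradient descent.

    data   : list of feature vectors (lists of floats)
    labels : list of +1 / -1 class labels
    Returns (w, b), the weight vector and bias of the separator.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    w = [0.0] * dim
    b = 0.0
    t = 0
    order = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(order)           # stochastic pass over the samples
        for i in order:
            x, y = data[i], labels[i]
            t += 1
            eta = 1.0 / (lam * t)    # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Regularization shrinks w every step; margin violations
            # additionally push w toward the misclassified sample.
            w = [(1.0 - eta * lam) * wi for wi in w]
            if margin < 1.0:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    """Sign of the decision function: +1 or -1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0.0 else -1

# Hypothetical features: [hand aperture, wrist speed];
# +1 = power grasp, -1 = precision grasp (labels invented here).
data = [[2.0, 0.5], [2.2, 0.4], [0.3, 1.8], [0.2, 2.0]]
labels = [1, 1, -1, -1]
w, b = train_linear_svm(data, labels)
```

The paper's experiments would use richer biometric features and nonlinear kernels; this linear toy only shows the shape of the learning problem, i.e. mapping observed motion features to a discrete prediction about the grasp.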
Original language: English (US)
Pages (from-to): 1545-1564
Number of pages: 20
Journal: Advanced Robotics
Volume: 21
Issue number: 13
DOIs
State: Published - Jan 1 2007
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-09-25

