Abstract
One of the most distinguishing features of cognitive systems is the ability to predict the future course of actions and the results of ongoing behaviors, and in general to plan actions well in advance. Neuroscience has started examining the neural basis of these skills through behavioral and animal studies, and it is now relatively well understood that the brain builds models of the physical world through learning. These models are sometimes called 'internal models', meaning that they are the internal rehearsal (or simulation) of the world enacted by the brain. In this paper we investigate the possibility of building internal models of human behaviors with a learning machine that has access to information that is, in principle, similar to that used by the brain when learning similar tasks. In particular, we concentrate on models of reaching and grasping, and we report on an experiment in which biometric data collected from human users during grasping were used to train a support vector machine. We then assess to what degree the models built by the machine are faithful representations of the actual human behaviors. The results indicate that the machine is able to predict human reaching and grasping reasonably well, and that prior knowledge of the object to be grasped improves the performance of the machine without increasing the computational cost. © 2007 Taylor & Francis Group, LLC.
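The setup described above can be illustrated with a minimal sketch, assuming scikit-learn and synthetic stand-ins for the biometric data; the feature dimensions (e.g. hand joint angles and wrist pose), the grasp labels, and the encoding of "prior knowledge of the object" as an extra input column are all hypothetical illustrations, not the paper's actual dataset or protocol.

```python
# A hedged sketch: train one SVM on biometric features alone, and a second
# on the same features plus the identity of the target object, mirroring
# the abstract's comparison. All data here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_samples, n_features = 500, 16                 # e.g. joint angles + wrist pose (assumed)
X = rng.normal(size=(n_samples, n_features))    # stand-in biometric features
object_id = rng.integers(0, 3, size=n_samples)  # which object is being grasped
# Synthetic grasp outcome that depends on both the features and the object.
y = (X[:, 0] + 0.5 * object_id
     + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test, obj_train, obj_test = train_test_split(
    X, y, object_id, test_size=0.3, random_state=0)

# Model 1: biometric features only.
svm_plain = SVC(kernel="rbf").fit(X_train, y_train)

# Model 2: same features plus the object prior as an extra column.
X_train_obj = np.column_stack([X_train, obj_train])
X_test_obj = np.column_stack([X_test, obj_test])
svm_prior = SVC(kernel="rbf").fit(X_train_obj, y_train)

print("without object prior:", accuracy_score(y_test, svm_plain.predict(X_test)))
print("with object prior:   ", accuracy_score(y_test, svm_prior.predict(X_test_obj)))
```

On this synthetic task, the second model typically scores higher because the object identity carries predictive signal, which is one plausible reading of how an object prior can help the machine while leaving the cost of a single SVM evaluation unchanged.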
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1545-1564 |
| Number of pages | 20 |
| Journal | Advanced Robotics |
| Volume | 21 |
| Issue number | 13 |
| DOIs | |
| State | Published - Jan 1 2007 |
| Externally published | Yes |