Abstract
An attempt is made to determine how a system can learn to reduce the descriptions of event sequences without losing information. It is shown that the learning system ought to concentrate on unexpected inputs and ignore expected ones. This insight leads to the construction of neural systems that learn to 'divide and conquer' by recursively composing sequences. The first system creates a self-organizing multilevel hierarchy of recurrent predictors. The second system involves only two recurrent networks and tries to collapse a multilevel predictor hierarchy into a single recurrent net. Experiments show that these systems can require less computation per time step and far fewer training sequences than conventional training algorithms for recurrent nets.
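The filtering principle behind the abstract (pass only unpredicted inputs up the hierarchy, so higher levels see ever shorter sequences) can be illustrated with a minimal sketch. The count-table predictor below, and the names `ChunkerLevel` and `compress`, are illustrative stand-ins for the paper's recurrent predictors, not taken from the source:

```python
from collections import defaultdict

class ChunkerLevel:
    """One level of the hierarchy: predicts the next symbol from the
    previous one. A simple count table stands in for the recurrent
    predictors used in the paper."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def step(self, symbol):
        """Observe one symbol; return True if it was unexpected."""
        expected = None
        if self.prev is not None and self.counts[self.prev]:
            nxt = self.counts[self.prev]
            expected = max(nxt, key=nxt.get)
        unexpected = expected != symbol
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1  # learn the transition
        self.prev = symbol
        return unexpected


def compress(stream, levels=2):
    """Feed a stream through a stack of predictors; each level sees
    only the (time-stamped) symbols the level below failed to predict."""
    stack = [ChunkerLevel() for _ in range(levels)]
    seen = [[] for _ in range(levels)]  # inputs actually seen per level
    for t, sym in enumerate(stream):
        for lvl, predictor in enumerate(stack):
            seen[lvl].append((t, sym))
            if not predictor.step(sym):
                break  # predicted: higher levels stay idle this step
    return seen


if __name__ == "__main__":
    data = "abcabcabcabc" * 4
    streams = compress(data, levels=2)
    for lvl, s in enumerate(streams):
        print(f"level {lvl} saw {len(s)} of {len(data)} inputs")
```

On a repetitive stream like the one above, the bottom level quickly learns the regularities and the top level receives only a handful of inputs. This is the sense in which the description of the sequence is reduced without losing information: the predictable symbols can be reconstructed by the lower-level predictor.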
Original language | English (US) |
---|---|
Title of host publication | 1991 IEEE International Joint Conference on Neural Networks - IJCNN '91 |
Publisher | Publ by IEEE, Piscataway |
Pages | 1130-1135 |
Number of pages | 6 |
ISBN (Print) | 0780302273 |
State | Published - Jan 1 1991 |
Externally published | Yes |