Adaptive history compression for learning to divide and conquer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

An attempt is made to determine how a system can learn to reduce the descriptions of event sequences without losing information. It is shown that the learning system ought to concentrate on unexpected inputs and ignore expected ones. This insight leads to the construction of neural systems that learn to 'divide and conquer' by recursively composing sequences. The first system creates a self-organizing multilevel hierarchy of recurrent predictors. The second system involves only two recurrent networks: it tries to collapse a multilevel predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and far fewer training sequences than conventional training algorithms for recurrent nets.
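As a rough illustration of the principle described in the abstract, the minimal Python sketch below uses a toy level-0 predictor (a simple successor table standing in for the paper's recurrent predictors) that scans the input stream and forwards only the inputs it failed to predict, together with their time stamps, to a notional higher level. All names and the predictor itself are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the history-compression principle: a level-0
# predictor watches the input stream, and only the inputs it fails
# to predict (with their time stamps) are passed to a higher level,
# so the higher level sees a compressed description of the sequence.
# The table-based predictor is a toy stand-in for a recurrent net.

def compress_history(sequence):
    """Return the (time, symbol) pairs the level-0 predictor got wrong."""
    successor = {}     # most recently observed successor of each symbol
    unexpected = []    # the compressed history passed up the hierarchy
    prev = None
    for t, symbol in enumerate(sequence):
        if successor.get(prev) != symbol:
            unexpected.append((t, symbol))   # surprising input: pass it up
        if prev is not None:
            successor[prev] = symbol         # update the toy predictor
        prev = symbol
    return unexpected

if __name__ == "__main__":
    seq = list("abababacabab")
    print(compress_history(seq))
    # -> [(0, 'a'), (1, 'b'), (2, 'a'), (7, 'c'), (8, 'a'), (9, 'b')]
    # Once the a/b alternation is predictable, only start-up symbols,
    # the rare 'c', and its immediate aftermath reach the higher level:
    # 6 events instead of 12.
```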
Original language: English (US)
Title of host publication: 1991 IEEE International Joint Conference on Neural Networks - IJCNN '91
Publisher: IEEE, Piscataway
Pages: 1130-1135
Number of pages: 6
ISBN (Print): 0780302273
State: Published - Jan 1 1991
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
