Training recurrent networks by Evolino

Jürgen Schmidhuber, Daan Wierstra, Matteo Gagliolo, Faustino Gomez

Research output: Contribution to journal › Article › peer-review

187 Scopus citations

Abstract

In recent years, gradient-based LSTM recurrent neural networks (RNNs) have solved many tasks that were previously unlearnable by RNNs. Sometimes, however, gradient information is of little use for training RNNs because of numerous local minima. For such cases, we present a novel method: EVOlution of systems with LINear Outputs (Evolino). Evolino evolves the weights of the nonlinear, hidden nodes of RNNs while computing optimal linear mappings from hidden state to output, using methods such as pseudo-inverse-based linear regression. If we instead use quadratic programming to maximize the margin, we obtain the first evolutionary recurrent support vector machines. We show that Evolino-based LSTM can solve tasks that Echo State nets (Jaeger, 2004a) cannot, and that it achieves higher accuracy on certain continuous function generation tasks than conventional gradient-descent RNNs, including gradient-based LSTM. © 2007 Massachusetts Institute of Technology.
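The abstract describes the core Evolino loop: an evolutionary search over the recurrent hidden-layer weights, where each candidate network's linear output layer is fit in closed form by pseudo-inverse regression on its recorded hidden states. The following is a minimal NumPy sketch of that idea, not the paper's implementation: it substitutes a plain tanh RNN for LSTM and a simple (mu + lambda) evolution strategy for the paper's neuroevolution method, and all names (run_rnn, fitness, the toy sine task, population sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_rnn(W_in, W_rec, inputs):
    """Run a plain tanh RNN (a stand-in for the paper's LSTM) and
    return the hidden states, one row per time step."""
    h = np.zeros(W_rec.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h.copy())
    return np.array(states)

def fitness(genome, inputs, targets, n_in, n_hidden):
    """Evolino-style fitness: run the net with the evolved hidden
    weights, then fit the linear output layer in closed form by
    pseudo-inverse regression on the hidden states."""
    W_in = genome[:n_hidden * n_in].reshape(n_hidden, n_in)
    W_rec = genome[n_hidden * n_in:].reshape(n_hidden, n_hidden)
    H = run_rnn(W_in, W_rec, inputs)
    W_out = np.linalg.pinv(H) @ targets          # optimal linear output mapping
    return -np.mean((H @ W_out - targets) ** 2)  # higher fitness = lower MSE

# Toy continuous function generation task: produce sin(t) from a constant input.
T, n_in, n_hidden = 100, 1, 8
inputs = np.ones((T, n_in))
targets = np.sin(np.linspace(0.0, 4.0 * np.pi, T)).reshape(T, 1)

# Simple (mu + lambda) evolution strategy over the hidden-layer weights;
# the paper itself uses a more sophisticated subpopulation-based method.
genome_len = n_hidden * n_in + n_hidden * n_hidden
pop = rng.normal(0.0, 0.5, size=(20, genome_len))
for gen in range(200):
    scores = np.array([fitness(g, inputs, targets, n_in, n_hidden) for g in pop])
    elite = pop[np.argsort(scores)[-5:]]         # keep the 5 best genomes
    children = elite[rng.integers(0, 5, 15)] + rng.normal(0.0, 0.05, (15, genome_len))
    pop = np.vstack([elite, children])

print("best MSE in final evaluated generation:", -scores.max())
```

The key design point, under these assumptions, is that evolution never has to search over the output weights at all: for any fixed hidden-layer genome, the best linear readout is determined analytically, which shrinks the search space the evolutionary algorithm must explore.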
Original language: English (US)
Pages (from-to): 757-779
Number of pages: 23
Journal: Neural Computation
Volume: 19
Issue number: 3
DOIs
State: Published - Mar 1 2007
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14

ASJC Scopus subject areas

  • Cognitive Neuroscience
