LSTM recurrent networks learn simple context-free and context-sensitive languages

F. A. Gers, J. Schmidhuber

Research output: Contribution to journal › Article › peer-review

553 Scopus citations

Abstract

Previous work on learning regular languages from exemplary training sequences showed that long short-term memory (LSTM) outperforms traditional recurrent neural networks (RNNs). Here we demonstrate LSTM's superior performance on context-free language (CFL) benchmarks for RNNs, and show that it works even better than previous hardwired or highly specialized architectures. To the best of our knowledge, LSTM variants are also the first RNNs to learn a simple context-sensitive language (CSL), namely a^n b^n c^n.
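For readers who want to reproduce the flavor of the a^n b^n c^n experiment, the following is a minimal, hypothetical sketch (not the paper's architecture, coding scheme, or training regime): a standard PyTorch LSTM trained on next-symbol prediction over strings of the context-sensitive language, with made-up symbol coding, hidden size, and training ranges chosen only for illustration.

```python
# Hypothetical sketch: next-symbol prediction on a^n b^n c^n with a generic LSTM.
# Not the authors' setup; vocabulary coding, hidden size, optimizer, and the
# training range n = 1..10 are illustrative assumptions.
import torch
import torch.nn as nn

SYMS = ['S', 'a', 'b', 'c', 'T']              # start marker, a, b, c, terminator
IDX = {s: i for i, s in enumerate(SYMS)}

def make_string(n):
    """Return (input, target) index sequences for S a^n b^n c^n T."""
    seq = ['S'] + ['a'] * n + ['b'] * n + ['c'] * n + ['T']
    x = torch.tensor([IDX[s] for s in seq[:-1]])
    y = torch.tensor([IDX[s] for s in seq[1:]])
    return x, y

class NextSymbolLSTM(nn.Module):
    def __init__(self, vocab=len(SYMS), hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h)

model = NextSymbolLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train on short strings (n = 1..10).
for epoch in range(200):
    for n in range(1, 11):
        x, y = make_string(n)
        logits = model(x.unsqueeze(0)).squeeze(0)  # (T, vocab)
        loss = loss_fn(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Probe generalization on a longer string: once the first 'b' has been read,
# the rest of the string is fully determined, so measure accuracy there.
with torch.no_grad():
    n = 15
    x, y = make_string(n)
    pred = model(x.unsqueeze(0)).squeeze(0).argmax(-1)
    det = slice(n + 1, None)                      # predictions from the first 'b' onward
    acc = (pred[det] == y[det]).float().mean().item()
    print(f"deterministic-suffix accuracy at n={n}: {acc:.2f}")
```

The generalization probe mirrors the usual evaluation idea for this benchmark: only the predictions that are logically determined by the prefix are scored, since the number of a's in a legal string is not predictable in advance.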
Original language: English (US)
Pages (from-to): 1333-1340
Number of pages: 8
Journal: IEEE Transactions on Neural Networks
Volume: 12
Issue number: 6
DOIs
State: Published - Nov 1 2001
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Computer Networks and Communications
  • Computer Science Applications

