Abstract
Previous work on learning regular languages from exemplary training sequences showed that long short-term memory (LSTM) outperforms traditional recurrent neural networks (RNNs). Here we demonstrate LSTM's superior performance on context-free language (CFL) benchmarks for RNNs, and show that it works even better than previous hardwired or highly specialized architectures. To the best of our knowledge, LSTM variants are also the first RNNs to learn a simple context-sensitive language (CSL), namely a^n b^n c^n.
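To make the a^n b^n c^n benchmark concrete, the sketch below generates strings of that context-sensitive language together with the set of legal next symbols at each position, i.e. the targets a next-symbol-prediction RNN such as LSTM would be trained on. It is a minimal illustration only: the end marker "T", the absence of a start symbol, and the function names are assumptions for this example, not the paper's exact encoding.

```python
def anbncn(n):
    """Return one string of the context-sensitive language a^n b^n c^n."""
    return "a" * n + "b" * n + "c" * n


def prediction_pairs(n, end="T"):
    """Yield (prefix, legal_next_symbols) pairs for one training string.

    Only inside the a-block (and at its boundary with the b-block) is the
    next symbol ambiguous; after the first 'b' the continuation is fully
    determined, which is what forces the learner to count symbols.
    """
    s = anbncn(n) + end
    for i, nxt in enumerate(s):
        prefix = s[:i]
        ambiguous = (nxt == "a" and len(prefix) > 0) or (nxt == "b" and "b" not in prefix)
        yield prefix, ({"a", "b"} if ambiguous else {nxt})


if __name__ == "__main__":
    # Print the prediction targets for a^3 b^3 c^3.
    for prefix, legal in prediction_pairs(3):
        print(f"{prefix!r:>14} -> {sorted(legal)}")
```

A network solves the task when, for every prefix, it activates exactly the legal next symbols; doing so for larger n than seen in training is the generalization test the abstract alludes to.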
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1333-1340 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 12 |
| Issue number | 6 |
| DOIs | |
| State | Published - Nov 1 2001 |
| Externally published | Yes |
Bibliographical note
Generated from Scopus record by KAUST IRTS on 2022-09-14

ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Computer Networks and Communications
- Computer Science Applications