Co-evolving recurrent neurons learn deep memory POMDPs

Faustino J. Gomez, Jürgen Schmidhuber

Research output: Conference contribution (Chapter in Book/Report/Conference proceeding)

47 Scopus citations


Recurrent neural networks are theoretically capable of learning complex temporal sequences, but training them through gradient descent is too slow and unstable for practical use in reinforcement learning environments. Neuroevolution, the evolution of artificial neural networks using genetic algorithms, can potentially solve real-world reinforcement learning tasks that require deep use of memory, i.e., memory spanning hundreds or thousands of inputs, by searching the space of recurrent neural networks directly. In this paper, we introduce a new neuroevolution algorithm called Hierarchical Enforced SubPopulations that simultaneously evolves networks at two levels of granularity: full networks and network components, or neurons. We demonstrate the method on two POMDP tasks that involve temporal dependencies of up to thousands of time-steps, and show that it is faster and simpler than the current best conventional reinforcement learning system on these tasks. Copyright 2005 ACM.
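The abstract's core idea, evolving subpopulations of neurons that are assembled into complete networks and credited with the fitness of the networks they join, can be illustrated with a minimal cooperative-coevolution sketch in the spirit of Enforced SubPopulations. All names, constants, and the toy fitness function below are illustrative assumptions, not the paper's actual implementation, which additionally co-evolves a population of full networks:

```python
import random

# Toy sketch of ESP-style cooperative neuroevolution. Each of the H
# subpopulations evolves candidate "neurons" (here: bare weight vectors);
# trial networks are assembled by drawing one neuron per subpopulation,
# and each neuron's fitness is averaged over the trials it took part in.

H = 3      # neurons per network (one subpopulation for each position)
POP = 20   # neurons per subpopulation
GENS = 30  # generations
N_IN = 4   # weights per neuron

def random_neuron():
    return [random.uniform(-1, 1) for _ in range(N_IN)]

def network_fitness(neurons):
    # Placeholder task standing in for a POMDP rollout: reward weight
    # vectors close to an arbitrary target value (fitness is <= 0).
    target = 0.5
    return -sum((w - target) ** 2 for n in neurons for w in n)

def evolve(seed=0):
    random.seed(seed)
    subpops = [[random_neuron() for _ in range(POP)] for _ in range(H)]
    best_f = float("-inf")
    for _ in range(GENS):
        scores = [[0.0] * POP for _ in range(H)]
        counts = [[0] * POP for _ in range(H)]
        # Evaluate many randomly assembled networks.
        for _ in range(10 * POP):
            picks = [random.randrange(POP) for _ in range(H)]
            f = network_fitness([subpops[h][picks[h]] for h in range(H)])
            best_f = max(best_f, f)
            for h in range(H):
                scores[h][picks[h]] += f
                counts[h][picks[h]] += 1
        # Evolve each subpopulation independently: keep the top half by
        # average fitness, refill with mutated copies (crossover omitted).
        for h in range(H):
            avg = [scores[h][i] / counts[h][i] if counts[h][i]
                   else float("-inf") for i in range(POP)]
            order = sorted(range(POP), key=lambda i: avg[i], reverse=True)
            elite = [subpops[h][i] for i in order[:POP // 2]]
            subpops[h] = elite + [
                [w + random.gauss(0, 0.1) for w in random.choice(elite)]
                for _ in range(POP - len(elite))
            ]
    return best_f
```

Because credit assignment is shared (a neuron's score is the average over the networks it appeared in), each subpopulation is pressured to specialize into a role that cooperates well with the others.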
Original language: English (US)
Title of host publication: GECCO 2005 - Genetic and Evolutionary Computation Conference
Number of pages: 8
State: Published - Dec 1 2005
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14


