Evolving large-scale neural networks for vision-based reinforcement learning

Jan Koutník, Giuseppe Cuccu, Jürgen Schmidhuber, Faustino Gomez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, and so do their genomes. Scaling NE to large nets (i.e. tens of thousands of weights) is infeasible using direct encodings that map genes one-to-one to network components. In this paper, we scale up our "compressed" network encoding, where network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very large networks due to the high dimensionality of their input space. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus control task requiring networks with over 3 thousand weights, and (2) a version of the TORCS driving game where networks with over 1 million weights are evolved to drive a car around a track using video images from the driver's perspective. Copyright © 2013 ACM.
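The core of the compressed encoding can be sketched in a few lines: a short genome of Fourier-type (DCT) coefficients is placed in the low-frequency corner of a coefficient array and expanded by an inverse DCT into a full weight matrix, so a genome of a few dozen genes can specify millions of weights. The function names and the anti-diagonal fill order below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def dct_basis(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def decode_weights(coeffs, shape):
    """Expand a short coefficient vector into a dense weight matrix.

    The genome's few coefficients are placed in the low-frequency
    (top-left) corner of a coefficient array; a 2-D inverse DCT then
    yields the full weight matrix, so genome length is independent of
    network size.
    """
    rows, cols = shape
    A = np.zeros(shape)
    # Fill along anti-diagonals, lowest frequencies first -- a
    # simplification of the zig-zag ordering used in such encodings.
    order = sorted(((r, c) for r in range(rows) for c in range(cols)),
                   key=lambda rc: (rc[0] + rc[1], rc[0]))
    for (r, c), v in zip(order, coeffs):
        A[r, c] = v
    Cr, Cc = dct_basis(rows), dct_basis(cols)
    return Cr.T @ A @ Cc  # 2-D inverse DCT
```

For example, a genome of 8 coefficients decoded with `decode_weights(genome, (20, 30))` yields a smooth 600-weight matrix; evolution then searches the 8-dimensional coefficient space instead of the 600-dimensional weight space.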
Original language: English (US)
Title of host publication: GECCO 2013 - Proceedings of the 2013 Genetic and Evolutionary Computation Conference
Number of pages: 8
State: Published - Sep 2 2013
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
