Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning

Koutnik, Jan and Cuccu, Giuseppe and Schmidhuber, Juergen and Gomez, Faustino (2013) Evolving Large-Scale Neural Networks for Vision-Based Reinforcement Learning. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 6-10/07/2013, Amsterdam.

koutnik2013gecco.pdf - Published Version (3MB)


The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. Scaling NE to large nets (i.e. tens of thousands of weights) is infeasible using direct encodings that map genes one-to-one to network components. In this paper, we scale up our “compressed” network encoding, in which network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very large networks due to the high dimensionality of their input space. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus control task, requiring networks with over 3 thousand weights, and (2) a version of the TORCS driving game, where networks with over 1 million weights are evolved to drive a car around a track using video images from the driver’s perspective.
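The compressed encoding described in the abstract can be illustrated with a small sketch: a short genome of Fourier-type coefficients is expanded into a much larger weight matrix via an inverse cosine transform. The coefficient ordering (anti-diagonals, low frequencies first), the hand-rolled inverse DCT, and all function names below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def idct1(c):
    # Unnormalized inverse DCT-II: x[n] = c0/2 + sum_k c_k * cos(pi*k*(2n+1)/(2N)).
    N = len(c)
    n = np.arange(N)
    x = np.full(N, c[0] / 2.0)
    for k in range(1, N):
        x += c[k] * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
    return x

def decode_weights(genome, rows, cols):
    # Place the genome's coefficients along anti-diagonals of a coefficient
    # matrix (low spatial frequencies first), zero-pad the rest, then apply
    # a separable 2-D inverse DCT to obtain the full weight matrix.
    C = np.zeros((rows, cols))
    coords = sorted(((r, c) for r in range(rows) for c in range(cols)),
                    key=lambda rc: (rc[0] + rc[1], rc[0]))
    for gene, (r, c) in zip(genome, coords):
        C[r, c] = gene
    W = np.apply_along_axis(idct1, 1, C)   # inverse transform over rows
    W = np.apply_along_axis(idct1, 0, W)   # then over columns
    return W

# 8 genes decode into a 10 x 50 weight matrix (500 weights): the search
# happens in the 8-dimensional coefficient space, not weight space.
W = decode_weights([0.5, -0.2, 0.1, 0.3, 0.0, -0.1, 0.2, 0.05], 10, 50)
```

This is the key to the scaling claim: evolution searches the low-dimensional coefficient genome while the phenotype network can have orders of magnitude more weights (over 1 million in the TORCS task).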
