Exploring through Random Curiosity with General Value Functions

Aditya Ramesh, Louis Kirsch, Sjoerd van Steenkiste, Juergen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Efficient exploration in reinforcement learning is a challenging problem commonly addressed through intrinsic rewards. Recent prominent approaches are based on state novelty or variants of artificial curiosity. However, directly applying them to partially observable environments can be ineffective and lead to premature dissipation of intrinsic rewards. Here we propose random curiosity with general value functions (RC-GVF), a novel intrinsic reward function that draws upon connections between these distinct approaches. Instead of using only the current observation's novelty or a curiosity bonus for failing to predict precise environment dynamics, RC-GVF derives intrinsic rewards through predicting temporally extended general value functions. We demonstrate that this improves exploration in a hard-exploration diabolical lock problem. Furthermore, RC-GVF significantly outperforms previous methods in the absence of ground-truth episodic counts in the partially observable MiniGrid environments. Panoramic observations on MiniGrid further boost RC-GVF's performance such that it is competitive with baselines exploiting privileged information in the form of episodic counts.
Original language: English (US)
Title of host publication: 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
State: Published - Nov 18, 2022

Bibliographical note

KAUST Repository Item: Exported on 2022-12-21
Acknowledgements: We would like to thank Kenny Young, Francesco Faccio, Anand Gopalakrishnan, and Dylan Ashley for valuable comments. This research was supported by the ERC Advanced Grant (742870), the Swiss National Science Foundation grant (200021_192356), and by the Swiss National Supercomputing Centre (CSCS projects s1090 and s1127).


