Episodic Curiosity through Reachability
Nikolay Savinov, Anton Raichuk, Raphaël Marinier, Damien Vincent, Marc Pollefeys, Timothy Lillicrap, Sylvain Gelly
arXiv e-Print archive, 2018
Keywords:
cs.LG, cs.AI, cs.CV, cs.RO, stat.ML
First published: 2018/10/04
Abstract: Rewards are sparse in the real world, and most of today's reinforcement learning algorithms struggle with such sparsity. One solution to this problem is to allow the agent to create rewards for itself, thus making rewards dense and more suitable for learning. In particular, inspired by curious behaviour in animals, observing something novel could be rewarded with a bonus. Such a bonus is summed with the real task reward, making it possible for RL algorithms to learn from the combined reward. We propose a new curiosity method which uses episodic memory to form the novelty bonus. To determine the bonus, the current observation is compared with the observations in memory. Crucially, the comparison is based on how many environment steps it takes to reach the current observation from those in memory, which incorporates rich information about environment dynamics. This allows us to overcome the known "couch-potato" issue of prior work, where the agent finds a way to instantly gratify itself by exploiting actions that lead to unpredictable consequences. We test our approach in visually rich 3D environments in ViZDoom and DMLab. In ViZDoom, our agent learns to navigate to a distant goal at least 2 times faster than the state-of-the-art curiosity method ICM. In DMLab, our agent generalizes well to new procedurally generated levels, reaching the goal at least 2 times more frequently than ICM on test mazes with very sparse reward.
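The abstract's bonus computation (compare the current observation's embedding against an episodic memory buffer using a learned reachability estimate) can be sketched compactly. The following Python snippet is a minimal illustrative sketch, not the authors' code: the names reachability_net, alpha, beta, novelty_threshold, and the 90th-percentile aggregation are assumptions made for illustration, based on the description above.

    import numpy as np

    def episodic_curiosity_bonus(memory, embedding, reachability_net,
                                 alpha=1.0, beta=0.5, novelty_threshold=0.0):
        # reachability_net(m, e) is assumed to return a score in [0, 1]
        # estimating whether e is reachable from m within a few environment
        # steps (values near 1 mean "easily reachable", i.e. not novel).
        if not memory:
            # Empty memory: treat the first observation as novel and store it.
            memory.append(embedding)
            return alpha * beta

        # Compare the current embedding against every stored embedding and
        # aggregate the per-pair reachability scores with a robust statistic
        # (a high percentile rather than the raw maximum).
        scores = np.array([reachability_net(m, embedding) for m in memory])
        similarity = np.percentile(scores, 90)

        # Large bonus when the observation is many steps away from everything
        # in memory; small or negative bonus when it is easily reachable.
        bonus = alpha * (beta - similarity)

        # Only add sufficiently novel observations to the episodic memory.
        if bonus > novelty_threshold:
            memory.append(embedding)
        return bonus

In use, the bonus returned for each step would simply be added to the environment's task reward before the RL update, which is what makes the combined reward dense enough for standard algorithms to learn from.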