Large-Scale Study of Curiosity-Driven Learning
Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A. Efros
arXiv e-Print archive, 2018
Keywords:
cs.LG, cs.AI, cs.CV, cs.RO, stat.ML
First published: 2018/08/13
Abstract: Reinforcement learning algorithms rely on carefully engineering environment
rewards that are extrinsic to the agent. However, annotating each environment
with hand-designed, dense rewards is not scalable, motivating the need for
developing reward functions that are intrinsic to the agent. Curiosity is a
type of intrinsic reward function which uses prediction error as reward signal.
In this paper: (a) We perform the first large-scale study of purely
curiosity-driven learning, i.e. without any extrinsic rewards, across 54
standard benchmark environments, including the Atari game suite. Our results
show surprisingly good performance, and a high degree of alignment between the
intrinsic curiosity objective and the hand-designed extrinsic rewards of many
game environments. (b) We investigate the effect of using different feature
spaces for computing prediction error and show that random features are
sufficient for many popular RL game benchmarks, but learned features appear to
generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We
demonstrate limitations of the prediction-based rewards in stochastic setups.
Game-play videos and code are at
https://pathak22.github.io/large-scale-curiosity/
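The abstract's central mechanism, an intrinsic reward equal to the prediction error of a forward dynamics model computed in some feature space, can be illustrated with a short sketch. The sketch below is not the authors' code; the network sizes, layer choices, and names (phi, forward_model, curiosity_reward) are illustrative assumptions. It shows the "random features" variant, where the embedding network is frozen and only the forward model is trained.

```python
# Minimal sketch (assumed architecture, not the paper's implementation) of a
# prediction-error curiosity bonus: a frozen random network phi embeds
# observations, a learned forward model predicts phi(s_{t+1}) from
# (phi(s_t), a_t), and the squared prediction error is the intrinsic reward.
import torch
import torch.nn as nn

obs_dim, act_dim, feat_dim = 16, 4, 32

# Random feature embedding: parameters are frozen (the "random features" case).
phi = nn.Sequential(nn.Linear(obs_dim, feat_dim), nn.ReLU(),
                    nn.Linear(feat_dim, feat_dim))
for p in phi.parameters():
    p.requires_grad_(False)

# Forward dynamics model in feature space, trained alongside the policy.
forward_model = nn.Sequential(
    nn.Linear(feat_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim)
)
opt = torch.optim.Adam(forward_model.parameters(), lr=1e-4)

def curiosity_reward(obs, action, next_obs):
    """Intrinsic reward = ||f(phi(s_t), a_t) - phi(s_{t+1})||^2, with one model update."""
    with torch.no_grad():
        target = phi(next_obs)                      # embedding of the next state
    pred = forward_model(torch.cat([phi(obs), action], dim=-1))
    error = ((pred - target) ** 2).mean(dim=-1)     # per-transition prediction error
    opt.zero_grad()
    error.mean().backward()                         # train the forward model
    opt.step()
    return error.detach()                           # used as the reward signal

# Usage on a dummy batch of transitions with one-hot actions.
obs = torch.randn(8, obs_dim)
action = torch.eye(act_dim)[torch.randint(act_dim, (8,))]
next_obs = torch.randn(8, obs_dim)
print(curiosity_reward(obs, action, next_obs))
```

Under this framing, the paper's "learned features" variants differ only in how phi is obtained (e.g. trained with an inverse-dynamics or autoencoding objective instead of being left random), while the reward computation stays the same.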