#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning
Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, Xi Chen, Yan Duan, John Schulman, Filip De Turck, Pieter Abbeel
arXiv e-Print archive, 2016
Keywords: cs.AI, cs.LG
First published: 2016/11/15
Abstract: Count-based exploration algorithms are known to perform near-optimally when
used in conjunction with tabular reinforcement learning (RL) methods for
solving small discrete Markov decision processes (MDPs). It is generally
thought that count-based methods cannot be applied in high-dimensional state
spaces, since most states will only occur once. Recent deep RL exploration
strategies are able to deal with high-dimensional continuous state spaces
through complex heuristics, often relying on optimism in the face of
uncertainty or intrinsic motivation. In this work, we describe a surprising
finding: a simple generalization of the classic count-based approach can reach
near state-of-the-art performance on various high-dimensional and/or continuous
deep RL benchmarks. States are mapped to hash codes, which allows their
occurrences to be counted with a hash table. These counts are then used to
compute a reward bonus according to classic count-based exploration theory. We find
that simple hash functions can achieve surprisingly good results on many
challenging tasks. Furthermore, we show that a domain-dependent learned hash
code may further improve these results. Detailed analysis reveals important
aspects of a good hash function: 1) having appropriate granularity and 2)
encoding information relevant to solving the MDP. This exploration strategy
achieves near state-of-the-art performance on both continuous control tasks and
Atari 2600 games, hence providing a simple yet powerful baseline for solving
MDPs that require considerable exploration.
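To make the approach concrete, below is a minimal sketch of the static-hashing variant the abstract alludes to. The paper's static hash is SimHash, phi(s) = sgn(A g(s)) with A a fixed random Gaussian matrix, and the exploration bonus takes the classic count-based form beta / sqrt(n(phi(s))). The class name, the default values of k and beta, and the use of the raw state vector as g(s) are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np
from collections import defaultdict

class SimHashCounter:
    """Minimal sketch of count-based exploration via static hashing (SimHash).

    States are projected through a fixed random Gaussian matrix A and
    binarized: phi(s) = sign(A s). A hash table keyed on the binary code
    counts occurrences, and the bonus beta / sqrt(n(phi(s))) is added to
    the environment reward. k controls granularity: more bits distinguish
    states more finely.
    """

    def __init__(self, state_dim, k=32, beta=0.01, seed=0):
        # k and beta defaults here are illustrative, not the paper's settings.
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((k, state_dim))  # fixed random projection
        self.counts = defaultdict(int)                # hash code -> visit count
        self.beta = beta

    def _hash(self, state):
        # Binarize the projection and pack it into a hashable tuple of bits.
        return tuple((self.A @ np.asarray(state, dtype=float) > 0).astype(np.uint8))

    def bonus(self, state):
        # Increment the count for this state's code, then return the bonus.
        code = self._hash(state)
        self.counts[code] += 1
        return self.beta / np.sqrt(self.counts[code])
```

During training one would add `counter.bonus(next_state)` to the extrinsic reward before the policy update; the paper's learned variant replaces the random projection with domain-dependent codes from an autoencoder.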