IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu
arXiv e-Print archive - 2018
Keywords:
cs.LG, cs.AI
First published: 2018/02/05
Abstract: In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach.
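
The abstract's central technical ingredient, V-trace, corrects for the policy lag that decoupled acting and learning introduces: actors generate trajectories under a behaviour policy mu that is slightly stale relative to the learner's target policy pi. Below is a minimal NumPy sketch of the V-trace target computation as defined in the paper; the function name, array shapes, and the simplification to a single constant discount (no episode boundaries within the trajectory) are illustrative assumptions, not the authors' released implementation.

import numpy as np

def vtrace_targets(behaviour_log_probs, target_log_probs, rewards, values,
                   bootstrap_value, discount=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace targets v_s for a length-T trajectory (all arrays shape [T])."""
    # Truncated importance weights: rho_t = min(rho_bar, pi/mu), c_t = min(c_bar, pi/mu).
    ratios = np.exp(target_log_probs - behaviour_log_probs)
    rhos = np.minimum(rho_bar, ratios)
    cs = np.minimum(c_bar, ratios)

    # One-step TD errors scaled by rho: delta_t = rho_t * (r_t + gamma * V(x_{t+1}) - V(x_t)).
    values_tp1 = np.concatenate([values[1:], [bootstrap_value]])
    deltas = rhos * (rewards + discount * values_tp1 - values)

    # Backward recursion: v_s - V(x_s) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1})).
    acc = 0.0
    corrections = np.zeros_like(values)
    for t in reversed(range(len(values))):
        acc = deltas[t] + discount * cs[t] * acc
        corrections[t] = acc
    return values + corrections  # v_s: the off-policy-corrected value targets

The learner regresses its value function towards v_s and uses rho_t * (r_t + gamma * v_{t+1} - V(x_t)) as the advantage in the policy-gradient term. With rho_bar = c_bar = 1 and on-policy data (pi = mu), v_s reduces to the ordinary n-step Bellman target, so V-trace interpolates between on-policy n-step returns and heavily truncated importance sampling.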