Multi-task Deep Reinforcement Learning with PopArt
Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt and Hado van Hasselt
arXiv e-Print archive - 2018
Keywords:
cs.LG, stat.ML
First published: 2018/09/12
Abstract: The reinforcement learning community has made great strides in designing
algorithms capable of exceeding human performance on specific tasks. These
algorithms are mostly trained one task at a time, with each new task requiring
the training of a brand-new agent instance. This means the learning algorithm is
general, but each solution is not; each agent can only solve the one task it was
trained on. In this work, we study the problem of learning to master not one but
multiple sequential-decision tasks at once. A general issue in multi-task
learning is that a balance must be found between the needs of multiple tasks
competing for the limited resources of a single learning system. Many learning
algorithms can get distracted by certain tasks in the set: such tasks appear
more salient to the learning process, for instance because of the density or
magnitude of their rewards, and the algorithm then focuses on them at the
expense of generality. We propose to
automatically adapt the contribution of each task to the agent's updates, so
that all tasks have a similar impact on the learning dynamics. This resulted in
state-of-the-art performance on learning to play all games in a set of 57
diverse Atari games. Excitingly, our method learned a single trained policy,
with a single set of weights, that exceeds median human performance. To our
knowledge, this was the first time a single agent surpassed human-level
performance on this multi-task domain. The same approach also demonstrated
state-of-the-art performance on a set of 30 tasks in the 3D reinforcement
learning platform DeepMind Lab.
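
The automatic adaptation the abstract describes builds on PopArt normalization ("Preserving Outputs Precisely, while Adaptively Rescaling Targets", van Hasselt et al., 2016), with statistics maintained separately per task so that every task's value targets are learned on a similar scale. Below is a minimal NumPy sketch of that mechanism under stated assumptions: the scalar per-task value heads, the statistics step size `beta`, and all names are illustrative, not the authors' code.

```python
import numpy as np

class PopArt:
    """Per-task PopArt statistics for a multi-task value function.

    Assumes the network ends in one linear value head per task, so that
    unnormalized value = sigma * (w * h + b) + mu for torso output h.
    """

    def __init__(self, n_tasks, beta=3e-4):
        self.beta = beta              # step size for the running statistics
        self.mu = np.zeros(n_tasks)   # running mean of returns, per task
        self.nu = np.ones(n_tasks)    # running second moment, per task
        self.w = np.ones(n_tasks)     # per-task value-head scale
        self.b = np.zeros(n_tasks)    # per-task value-head bias

    def sigma(self, task):
        # Standard deviation from the first and second moments, kept positive.
        return np.sqrt(max(self.nu[task] - self.mu[task] ** 2, 1e-8))

    def update_stats(self, task, target):
        """ART: adaptively rescale targets by updating mu and sigma; then
        POP: preserve outputs precisely by rescaling the value head so the
        unnormalized predictions are unchanged by the new statistics."""
        old_mu, old_sigma = self.mu[task], self.sigma(task)
        self.mu[task] += self.beta * (target - self.mu[task])
        self.nu[task] += self.beta * (target ** 2 - self.nu[task])
        new_sigma = self.sigma(task)
        self.w[task] *= old_sigma / new_sigma
        self.b[task] = (old_sigma * self.b[task] + old_mu - self.mu[task]) / new_sigma

    def normalize(self, task, target):
        # Losses are computed against normalized targets, so each task
        # contributes gradients of similar magnitude to the shared weights.
        return (target - self.mu[task]) / self.sigma(task)
```

In this sketch, calling `update_stats` with each observed return keeps `mu` and `sigma` current while leaving the agent's unnormalized value predictions intact; the value loss is then computed against `normalize(task, target)`, which is what equalizes the tasks' influence on the shared parameters regardless of reward magnitude.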