Relational Forward Models for Multi-Agent Learning
Andrea Tacchetti, H. Francis Song, Pedro A. M. Mediano, Vinicius Zambaldi, Neil C. Rabinowitz, Thore Graepel, Matthew Botvinick and Peter W. Battaglia
arXiv e-Print archive, 2018
Keywords:
cs.LG, cs.AI, cs.MA, stat.ML
First published: 2018/09/28
Abstract: The behavioral dynamics of multi-agent systems have a rich and orderly
structure, which can be leveraged to understand these systems, and to improve
how artificial agents learn to operate in them. Here we introduce Relational
Forward Models (RFM) for multi-agent learning, networks that can learn to make
accurate predictions of agents' future behavior in multi-agent environments.
Because these models operate on the discrete entities and relations present in
the environment, they produce interpretable intermediate representations which
offer insights into what drives agents' behavior, and what events mediate the
intensity and valence of social interactions. Furthermore, we show that
embedding RFM modules inside agents results in faster learning systems compared
to non-augmented baselines. As more and more of the autonomous systems we
develop and interact with become multi-agent in nature, developing richer
analysis tools for characterizing how and why agents make decisions is
increasingly necessary. Moreover, developing artificial agents that quickly and
safely learn to coordinate with one another, and with humans in shared
environments, is crucial.
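To make the core idea concrete, the sketch below shows a minimal relational forward model in plain NumPy: entities are nodes in a fully connected graph, pairwise relations are directed edges, and one round of edge-and-node updates yields per-entity action logits plus edge activations that can be inspected as intermediate representations. This is an illustrative toy, not the paper's architecture; the layer sizes, random weights, and helper names (`rfm_step`, `W_edge`, `W_node`, `W_out`) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: 4 entities (e.g., 2 agents + 2 objects), each with a feature vector.
NUM_ENTITIES, NODE_DIM, EDGE_DIM, NUM_ACTIONS = 4, 8, 16, 5
nodes = rng.normal(size=(NUM_ENTITIES, NODE_DIM))

# Fully connected directed graph without self-loops: sender/receiver index pairs.
senders, receivers = zip(*[(s, r) for s in range(NUM_ENTITIES)
                           for r in range(NUM_ENTITIES) if s != r])
senders, receivers = np.array(senders), np.array(receivers)

# Hypothetical random weights standing in for learned parameters.
W_edge = rng.normal(size=(2 * NODE_DIM, EDGE_DIM)) * 0.1
W_node = rng.normal(size=(NODE_DIM + EDGE_DIM, NODE_DIM)) * 0.1
W_out = rng.normal(size=(NODE_DIM, NUM_ACTIONS)) * 0.1

def relu(x):
    return np.maximum(x, 0.0)

def rfm_step(nodes):
    """One round of relational message passing followed by a per-entity readout."""
    # Edge update: each directed edge combines its sender and receiver features.
    edge_inputs = np.concatenate([nodes[senders], nodes[receivers]], axis=-1)
    edges = relu(edge_inputs @ W_edge)

    # Aggregate incoming edge messages at each receiver (sum aggregation).
    agg = np.zeros((NUM_ENTITIES, EDGE_DIM))
    np.add.at(agg, receivers, edges)

    # Node update: combine each node's features with its aggregated messages.
    updated = relu(np.concatenate([nodes, agg], axis=-1) @ W_node)

    # Readout: action logits per entity; in practice only agent nodes are scored.
    logits = updated @ W_out
    return logits, edges

logits, edges = rfm_step(nodes)
print("predicted action logits per entity:", logits.shape)       # (4, 5)
print("edge messages (inspectable intermediates):", edges.shape)  # (12, 16)
```

The per-edge activations are the kind of intermediate quantity the abstract refers to: their magnitudes can be read as a rough measure of how strongly one entity's state bears on another's predicted behavior. Embedding such a module inside an agent would amount to feeding its predictions to the policy as additional input, which is the augmentation the abstract reports speeds up learning.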