Learning to Ground Multi-Agent Communication with Autoencoders
Toru Lin, Minyoung Huh, Chris Stauffer, Ser-Nam Lim, Phillip Isola
arXiv e-Print archive, 2021
Keywords:
cs.LG, cs.AI, cs.CL, cs.MA
Abstract: Communication requires having a common language, a lingua franca, between agents. This language could emerge via a consensus process, but it may require many generations of trial and error. Alternatively, the lingua franca can be given by the environment, where agents ground their language in representations of the observed world. We demonstrate a simple way to ground language in learned representations, which facilitates decentralized multi-agent communication and coordination. We find that a standard representation learning algorithm -- autoencoding -- is sufficient for arriving at a grounded common language. When agents broadcast these representations, they learn to understand and respond to each other's utterances and achieve surprisingly strong task performance across a variety of multi-agent communication environments.
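To make the abstract's mechanism concrete, the sketch below illustrates one plausible reading of it: each agent trains an autoencoder on its own observations and broadcasts the latent code as its message, so the message space is grounded in the observed world. This is a minimal illustration, not the authors' implementation; the network sizes, the `obs_dim`/`msg_dim` values, and the reconstruction-only training objective are assumptions made for the example.

```python
# Minimal sketch (not the paper's code): a speaker autoencodes its local
# observation and broadcasts the latent code as its message. Training on
# reconstruction alone grounds the message space in the observed world,
# so listening agents can learn to interpret the broadcast codes.
import torch
import torch.nn as nn

class SpeakerAutoencoder(nn.Module):
    def __init__(self, obs_dim: int, msg_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, msg_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(msg_dim, hidden), nn.ReLU(), nn.Linear(hidden, obs_dim)
        )

    def forward(self, obs: torch.Tensor):
        msg = self.encoder(obs)      # latent code doubles as the broadcast message
        recon = self.decoder(msg)    # reconstruction grounds the code in observations
        return msg, recon


# Hypothetical usage: one gradient step of the speaker's grounding objective.
obs_dim, msg_dim = 32, 8             # assumed sizes, for illustration only
speaker = SpeakerAutoencoder(obs_dim, msg_dim)
opt = torch.optim.Adam(speaker.parameters(), lr=1e-3)

obs = torch.randn(16, obs_dim)       # a batch of placeholder local observations
msg, recon = speaker(obs)
loss = nn.functional.mse_loss(recon, obs)
opt.zero_grad()
loss.backward()
opt.step()

# In a multi-agent loop, msg.detach() would be concatenated to the other
# agents' observations, and their policies would learn to respond to it
# through the usual reinforcement learning objective.
```

The key design point suggested by the abstract is that the communication channel is trained with a self-supervised reconstruction loss rather than end-to-end through the task reward, which is what makes the resulting "language" grounded and usable under decentralized training.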