Learning with Opponent-Learning Awareness
Jakob N. Foerster, Richard Y. Chen, Maruan Al-Shedivat, Shimon Whiteson, Pieter Abbeel, Igor Mordatch
arXiv e-Print archive - 2017
Keywords:
cs.AI, cs.GT
First published: 2017/09/13
Abstract: Multi-agent settings are quickly gathering importance in machine learning.
This includes a plethora of recent work on deep multi-agent reinforcement
learning, but also can be extended to hierarchical RL, generative adversarial
networks and decentralised optimisation. In all these settings the presence of
multiple learning agents renders the training problem non-stationary and often
leads to unstable training or undesired final results. We present Learning with
Opponent-Learning Awareness (LOLA), a method in which each agent shapes the
anticipated learning of the other agents in the environment. The LOLA learning
rule includes an additional term that accounts for the impact of one agent's
policy on the anticipated parameter update of the other agents. Preliminary
results show that the encounter of two LOLA agents leads to the emergence of
tit-for-tat and therefore cooperation in the iterated prisoners' dilemma, while
independent learning does not. In this domain, LOLA also receives higher
payouts compared to a naive learner, and is robust against exploitation by
higher order gradient-based methods. Applied to repeated matching pennies, LOLA
agents converge to the Nash equilibrium. In a round robin tournament we show
that LOLA agents can successfully shape the learning of a range of multi-agent
learning algorithms from literature, resulting in the highest average returns
on the IPD. We also show that the LOLA update rule can be efficiently
calculated using an extension of the policy gradient estimator, making the
method suitable for model-free RL. This method thus scales to large parameter
and input spaces and nonlinear function approximators. We also apply LOLA to a
grid world task with an embedded social dilemma using deep recurrent policies
and opponent modelling. Again, by explicitly considering the learning of the
other agent, LOLA agents learn to cooperate out of self-interest.
Standard RL agents in multi-agent settings treat their opponents as a static part of the environment, ignoring the fact that the other agents are learning as well. This paper proposes LOLA, a learning rule that takes the opponents' learning into account by optimizing the return under a one-step look-ahead of opponent learning.
So instead of optimizing the return under the current parameters of agents 1 and 2
$$V^1(\theta_i^1, \theta_i^2)$$
LOLA proposes to optimize the return taking into account one anticipated learning step of the opponent (agent 2)
$$V^1(\theta_i^1, \theta_i^2 + \Delta \theta^2_i)$$
where the opponent is assumed to make a naive gradient step $\Delta \theta^2_i = \eta \, \nabla_{\theta^2} V^2(\theta^1, \theta^2)$. Differentiating a first-order Taylor expansion of this look-ahead objective with respect to $\theta^1$ gives the LOLA update for agent 1: the usual policy gradient $\nabla_{\theta^1} V^1$ plus a second-order correction term $\eta \, (\nabla_{\theta^2} V^1)^\top \nabla_{\theta^1} \nabla_{\theta^2} V^2$ that accounts for how agent 1's parameters shape the opponent's anticipated update.
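To make the resulting update concrete, here is a minimal sketch of the exact-gradient LOLA step on a toy differentiable game (one-shot matching pennies with a single logit per player, rather than the paper's iterated setup); the function names and the step sizes `delta` and `eta` are illustrative choices, not the paper's code:

```python
import jax
import jax.numpy as jnp

# Toy game: one-shot matching pennies with mixed strategies, each player
# parameterised by a single logit. Player 1 gets +1 on a match, -1 otherwise.
def V1(theta1, theta2):
    p1 = jax.nn.sigmoid(theta1)          # P(player 1 plays heads)
    p2 = jax.nn.sigmoid(theta2)          # P(player 2 plays heads)
    return (2 * p1 - 1) * (2 * p2 - 1)   # player 1's expected payoff

def V2(theta1, theta2):
    return -V1(theta1, theta2)           # zero-sum: player 2's expected payoff

def lola_step_agent1(theta1, theta2, delta=0.1, eta=0.1):
    grad1_V1 = jax.grad(V1, argnums=0)(theta1, theta2)  # gradient of own return
    grad2_V1 = jax.grad(V1, argnums=1)(theta1, theta2)  # opponent's effect on own return
    # second-order cross term: d/d theta1 of (d V2 / d theta2)
    cross = jax.grad(jax.grad(V2, argnums=1), argnums=0)(theta1, theta2)
    # usual gradient ascent plus the term that shapes the opponent's anticipated step
    return theta1 + delta * (grad1_V1 + eta * grad2_V1 * cross)

theta1, theta2 = jnp.array(0.3), jnp.array(-0.2)
theta1 = lola_step_agent1(theta1, theta2)   # agent 2's update is symmetric
```

With vector-valued parameters, `cross` becomes the matrix $\nabla_{\theta^1} \nabla_{\theta^2} V^2$ and the product generalises to $(\nabla_{\theta^2} V^1)^\top \nabla_{\theta^1} \nabla_{\theta^2} V^2$.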
On top of this, the authors propose:
- a policy-gradient version of the learning rule for the case where the agent does not have access to exact gradients of the value functions
- a way to estimate the opponent's parameters $\theta^2$ from its observed trajectories via maximum likelihood when they cannot be accessed directly (sketched after the equation below):
$$\hat \theta^2 = \text{argmax}_{\theta^2} \sum_t \log \pi_{\theta^2}(u_t^2|s_t)$$
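As a minimal sketch of this opponent-modelling step, assuming (my choice, not the paper's) a tabular softmax policy for the opponent and plain gradient ascent on the trajectory log-likelihood; all names and array shapes are illustrative:

```python
import jax
import jax.numpy as jnp

# theta2_hat: (n_states, n_actions) logits of the assumed opponent policy.
# states, actions: the opponent's observed states s_t and actions u_t^2.
def log_likelihood(theta2_hat, states, actions):
    logp = jax.nn.log_softmax(theta2_hat[states], axis=-1)      # log pi(u | s) per step
    return jnp.sum(logp[jnp.arange(actions.shape[0]), actions]) # sum over the trajectory

@jax.jit
def mle_step(theta2_hat, states, actions, lr=0.1):
    # one step of gradient ascent on the summed log-likelihood
    return theta2_hat + lr * jax.grad(log_likelihood)(theta2_hat, states, actions)
```

The fitted $\hat \theta^2$ is then used in place of the true opponent parameters in the LOLA update.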
LOLA is tested on the iterated prisoner's dilemma (IPD), where it converges to a tit-for-tat strategy more frequently than naive RL learners and outperforms them. On iterated matching pennies (a zero-sum iterated matrix game), LOLA stably converges to the Nash equilibrium, whereas naive learners do not. On the Coin Game (a higher-dimensional, grid-world version of the IPD), naive learners generally converge to defection, whereas LOLA agents learn a mostly cooperative strategy.
The authors also show that LOLA is a dominant learning rule in the IPD: each agent does better whenever either agent uses LOLA, and better still when both do.
Finally, the authors propose second-order LOLA, which assumes the opponent is itself a LOLA learner rather than a naive learner. They show that second-order LOLA does not lead to improved performance, so there is no need for an $n$th-order LOLA arms race.