Asynchronous Methods for Deep Reinforcement Learning
Mnih, Volodymyr; Badia, Adrià Puigdomènech; Mirza, Mehdi; Graves, Alex; Lillicrap, Timothy P.; Harley, Tim; Silver, David; Kavukcuoglu, Koray
arXiv e-Print archive - 2016 via Local Bibsonomy
Keywords: dblp
The main contribution of [Asynchronous Methods for Deep Reinforcement Learning](https://arxiv.org/pdf/1602.01783v1.pdf) by Mnih et al. is a lightweight framework for training deep reinforcement learning agents.
They propose a training procedure that uses asynchronous gradient descent updates from multiple agents at once. Instead of training a single agent that interacts with its environment, multiple agents interact with their own copies of the environment simultaneously.
After a certain number of timesteps, the gradients accumulated by each agent are applied to a global model, e.g. a Deep Q-Network. These updates are asynchronous and lock-free.
The effects on training speed and quality are analyzed for several reinforcement learning methods. No replay memory is needed to decorrelate successive game states, since the agents are already exploring different parts of the state space in parallel. In addition, on-policy algorithms such as actor-critic can be used.
They show that asynchronous updates have a stabilizing effect on policy and value updates. Their best method, an asynchronous variant of actor-critic, surpasses the previous state of the art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU.
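To make the update scheme concrete, here is a minimal, hypothetical sketch of the lock-free asynchronous pattern described above: several worker threads each accumulate gradients for a few steps against a local snapshot of the model, then write them straight into the shared global parameters without any locking (Hogwild!-style). The toy regression loss and all names below are my own illustration, not the paper's code.

```python
import threading
import numpy as np

global_theta = np.zeros(4)              # shared global parameters
LR, T_MAX, N_ROLLOUTS = 1e-2, 5, 400    # illustrative values, not from the paper

def worker(seed):
    """Accumulate gradients for T_MAX steps, then apply them lock-free."""
    rng = np.random.default_rng(seed)
    true_w = np.array([1.0, -2.0, 0.5, 3.0])   # stand-in "environment" target
    for _ in range(N_ROLLOUTS):
        grad_acc = np.zeros_like(global_theta)
        theta = global_theta.copy()             # local snapshot of the global model
        for _ in range(T_MAX):
            x = rng.normal(size=4)
            err = theta @ x - true_w @ x        # toy regression error
            grad_acc += err * x                 # accumulate the gradient
        # Asynchronous, lock-free update of the shared parameters.
        global_theta[:] = global_theta - LR * grad_acc

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("shared parameters after training:", global_theta)
```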
Most Q-learning methods such as DQN or the Dueling network rely on experience replay. However, experience replay is memory intensive, and it restricts us to off-policy learning algorithms such as Q-learning. The authors instead suggest running multiple agents in parallel, each of which updates a shared set of global parameters. The **benefits** of using multiple agents are as follows:
1. The use of multiple parallel agents has a stabilizing effect on training.
2. Learning can be much faster without using GPUs, since each agent can run as a CPU thread. Learning is faster because more updates are made and more data is consumed in the same wall-clock time.
3. Learning is more robust and stable, because there exists a wide range of learning rates and initial weights for which good scores are achieved.
**Key Points**
1. The best performing algorithm is Asynchronous Advantage Actor Critic (A3C).
2. A3C uses $n$-step updates to trade off bias and variance in the policy gradient (see the first sketch after this list). Essentially, the policy-gradient update is proportional to
$$
\nabla \log \pi(a_t | x_t; \theta) (r_t + \gamma r_{t+1}+\ldots + \gamma^{n-1}r_{t+n-1} + \gamma^n V(x_{t+n}) - V(x_t))
$$
where $V(\cdot)$ is the value function of the underlying MDP.
3. The value network and the policy network share all parameters except for the final layers, which separately output the action probabilities and the value estimate.
4. The authors found that adding an entropy bonus to the objective helped prevent the policy from converging prematurely to sub-optimal deterministic policies (this term appears in the first sketch after the list).
5. The hyper-parameters (learning rate and gradient-norm clipping) were chosen by a random search on six games and kept fixed for the remaining games.
6. A3C-LSTM additionally incorporates an LSTM layer with 128 cells, whose output feeds the action-probability and value outputs (see the second sketch below).
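As referenced in points 2 and 4, below is a small numpy sketch of how the $n$-step return, the advantage, and the entropy-regularised losses can be computed for one rollout. The function name, the entropy coefficient `beta`, and the use of a squared-error critic loss are my own illustrative assumptions; only the $n$-step target itself follows the formula above.

```python
import numpy as np

def a3c_losses(rewards, values, bootstrap_value, log_probs, action_probs,
               gamma=0.99, beta=0.01):
    """rewards, values, log_probs: length-t_max arrays for one rollout;
    bootstrap_value: V(x_{t+n}) predicted by the critic at the last state;
    action_probs: (t_max, num_actions) policy outputs, used for the entropy bonus."""
    t_max = len(rewards)
    returns = np.empty(t_max)
    R = bootstrap_value                      # contributes the gamma^n V(x_{t+n}) term
    for t in reversed(range(t_max)):
        R = rewards[t] + gamma * R           # r_t + gamma * (return from t+1 onwards)
        returns[t] = R
    advantages = returns - values            # n-step return minus V(x_t)

    # Actor loss: -log pi(a_t | x_t) weighted by the advantage
    # (the advantage is treated as a constant, i.e. no gradient through the critic).
    policy_loss = -np.mean(log_probs * advantages)
    # Entropy bonus, discouraging premature convergence to a deterministic policy.
    entropy = -np.sum(action_probs * np.log(action_probs + 1e-8), axis=1).mean()
    # Critic loss: squared error against the n-step return.
    value_loss = np.mean((returns - values) ** 2)
    return policy_loss - beta * entropy, value_loss
```

In the full algorithm, each worker would compute these losses on its own $t_{max}$-step rollout, take gradients with respect to the shared parameters, and apply them asynchronously as sketched earlier.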
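And, for points 3 and 6, a hedged PyTorch sketch of the network layout: a shared trunk feeding two output heads (a softmax over actions and a scalar value), with an optional 128-cell LSTM for the A3C-LSTM variant. The fully connected trunk, the layer sizes, and the class name are illustrative assumptions rather than the paper's exact configuration; the Atari agents in the paper use a convolutional front end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCriticNet(nn.Module):
    """Sketch: shared layers, separate policy and value heads, optional LSTM."""
    def __init__(self, input_dim, num_actions, use_lstm=False):
        super().__init__()
        self.trunk = nn.Sequential(            # shared feature extractor
            nn.Linear(input_dim, 256), nn.ReLU())
        self.use_lstm = use_lstm
        if use_lstm:
            self.lstm = nn.LSTMCell(256, 128)  # recurrent layer (A3C-LSTM variant)
            head_in = 128
        else:
            head_in = 256
        self.policy_head = nn.Linear(head_in, num_actions)  # actor output (logits)
        self.value_head = nn.Linear(head_in, 1)             # critic output (scalar)

    def forward(self, x, lstm_state=None):
        h = self.trunk(x)
        if self.use_lstm:
            h, c = self.lstm(h, lstm_state)
            lstm_state = (h, c)
        action_probs = F.softmax(self.policy_head(h), dim=-1)
        value = self.value_head(h).squeeze(-1)
        return action_probs, value, lstm_state
```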