Optimizing Agent Behavior over Long Time Scales by Transporting Value
Chia-Chun Hung, Timothy Lillicrap, Josh Abramson, Yan Wu, Mehdi Mirza, Federico Carnevale, Arun Ahuja, Greg Wayne
arXiv e-Print archive - 2018
Keywords:
cs.AI, cs.LG
First published: 2018/10/15
Abstract: Humans spend a remarkable fraction of waking life engaged in acts of "mental
time travel". We dwell on our actions in the past and experience satisfaction
or regret. More than merely autobiographical storytelling, we use these event
recollections to change how we will act in similar scenarios in the future.
This process endows us with a computationally important ability to link actions
and consequences across long spans of time, which figures prominently in
addressing the problem of long-term temporal credit assignment; in artificial
intelligence (AI) this is the question of how to evaluate the utility of the
actions within a long-duration behavioral sequence leading to success or
failure in a task. Existing approaches to shorter-term credit assignment in AI
cannot solve tasks with long delays between actions and consequences. Here, we
introduce a new paradigm for reinforcement learning where agents use recall of
specific memories to credit actions from the past, allowing them to solve
problems that are intractable for existing algorithms. This paradigm broadens
the scope of problems that can be investigated in AI and offers a mechanistic
account of behaviors that may inspire computational models in neuroscience,
psychology, and behavioral economics.
This builds on the previous ["MERLIN"](https://arxiv.org/abs/1803.10760) paper. First they introduce the RMA agent (Reconstructive Memory Agent), a simplified version of MERLIN that combines model-based RL with long-term memory. The agent gets long-term memory by being allowed to save its working memory (represented by the LSTM's hidden state) and to later read it back.
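A minimal sketch of this save/read idea, assuming a simple slot-based episodic memory that is read by content-based (cosine-similarity) attention; the names below (`MemoryBuffer`, `write`, `read`) are illustrative and this is not the paper's exact RMA architecture:

```python
import numpy as np

class MemoryBuffer:
    """Stores one working-memory vector per time step; reads by content."""

    def __init__(self):
        self.slots = []  # one stored vector per past time step

    def write(self, hidden_state):
        # Save the current working memory (e.g. the LSTM hidden state).
        self.slots.append(np.asarray(hidden_state, dtype=np.float32))

    def read(self, query):
        # Content-based addressing: softmax over cosine similarities gives
        # read weights; the read-out is the weighted sum of stored vectors.
        mem = np.stack(self.slots)                                   # (T, D)
        sims = mem @ query / (np.linalg.norm(mem, axis=1)
                              * np.linalg.norm(query) + 1e-8)
        weights = np.exp(sims - sims.max())
        weights /= weights.sum()
        return weights @ mem, weights  # read vector and read weights
```

The read weights produced here are exactly the signal that the credit-assignment step described next can reuse.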
Then they add credit assignment, similar in spirit to the RUDDER paper, to get the "Temporal Value Transport" (TVT) agent, which can plan over long horizons in the face of distractions. **The critical insight is that the agent's memory accesses determine the credit assignment**: if the agent reads a memory written 512 steps ago, the action taken 512 steps ago receives a large share of the credit for the current reward.
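A rough sketch of how read weights can be turned into credit assignment, assuming per-step arrays of rewards, value estimates, and read weights (`read_weights[t, t_past]` being the attention that step `t` paid to the memory written at `t_past`); the threshold and scaling factor below are illustrative choices, not the paper's exact mechanism:

```python
import numpy as np

def transport_value(rewards, values, read_weights,
                    alpha=0.9, read_threshold=0.1):
    """Send value from reading steps back to the steps whose memories were read."""
    augmented = np.asarray(rewards, dtype=np.float32).copy()
    T = len(augmented)
    for t in range(T):
        for t_past in range(t):
            w = read_weights[t, t_past]
            if w > read_threshold:
                # The step whose memory was recalled receives extra "reward"
                # proportional to the value predicted at the recalling step.
                augmented[t_past] += alpha * w * values[t]
    return augmented  # train the usual RL objective on these augmented rewards
```

The effect is that an otherwise reward-free action (e.g. looking at a cue) gets reinforced because a much later, rewarded step recalled the memory of it.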
They evaluate this on several tasks, for example a maze task that combines a distracting phase with a later memory-retrieval phase. The agent starts in a maze with, say, a yellow wall, and then has to collect apples; the apple collecting serves as a distraction, testing whether the agent can still recall the memory afterwards. At the end of the maze it needs to remember that initial color (e.g. yellow) in order to choose the exit of the matching color.
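For concreteness, the episode structure reads roughly like the sketch below; the environment interface (`show_cue`, `distractor_steps`, `choose_exit`) is hypothetical and only meant to illustrate the cue / distractor / choice ordering:

```python
def run_episode(agent, env):
    cue_color = env.show_cue()            # e.g. a yellow wall at the start
    for _ in range(env.distractor_steps):
        agent.act(env)                    # collect apples: the distractor phase
    exit_color = agent.choose_exit(env)   # rewarded only if it matches the cue
    return exit_color == cue_color
```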
They include performance graphs showing that memory alone, and even more so memory plus credit assignment, is a significant help on this and similar tasks.