Optimizing Agent Behavior over Long Time Scales by Transporting Value
Chia-Chun Hung, Timothy Lillicrap, Josh Abramson, Yan Wu, Mehdi Mirza, Federico Carnevale, Arun Ahuja, Greg Wayne
arXiv e-Print archive, 2018
Keywords: cs.AI, cs.LG
First published: 2018/10/15

Abstract: Humans spend a remarkable fraction of waking life engaged in acts of "mental time travel". We dwell on our actions in the past and experience satisfaction or regret. More than merely autobiographical storytelling, we use these event recollections to change how we will act in similar scenarios in the future. This process endows us with a computationally important ability to link actions and consequences across long spans of time, which figures prominently in addressing the problem of long-term temporal credit assignment; in artificial intelligence (AI) this is the question of how to evaluate the utility of the actions within a long-duration behavioral sequence leading to success or failure in a task. Existing approaches to shorter-term credit assignment in AI cannot solve tasks with long delays between actions and consequences. Here, we introduce a new paradigm for reinforcement learning where agents use recall of specific memories to credit actions from the past, allowing them to solve problems that are intractable for existing algorithms. This paradigm broadens the scope of problems that can be investigated in AI and offers a mechanistic account of behaviors that may inspire computational models in neuroscience, psychology, and behavioral economics.
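
The abstract describes the mechanism only at a high level. The paper's core idea, temporal value transport, can be sketched as follows: when the agent's memory read at time t attends strongly to an event written at an earlier time t', the value predicted at t is added as a bonus reward at t', so credit flows to the remembered step rather than decaying with temporal distance. Below is a minimal NumPy sketch of that reward augmentation; the names and hyperparameters (`read_weights`, `alpha`, `threshold`) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def transport_value(rewards, values, read_weights, alpha=0.9, threshold=0.1):
    """Hedged sketch of memory-based credit assignment: if the agent at
    time t reads memories written at an earlier time t', the value
    estimate V_t is transported back as a reward bonus at t'.

    rewards:      (T,)   environment rewards r_t
    values:       (T,)   the agent's value predictions V_t
    read_weights: (T, T) attention of the read at time t over past steps t'
    alpha:        scaling of the transported value (assumed hyperparameter)
    threshold:    minimum read strength to count as a recall event (assumed)
    """
    T = len(rewards)
    augmented = rewards.astype(float)
    for t in range(T):
        for t_prime in range(t):  # only strictly earlier steps can be credited
            w = read_weights[t, t_prime]
            if w > threshold:
                # splice the future value prediction into the past step's reward
                augmented[t_prime] += alpha * w * values[t]
    return augmented

# Toy usage: a single strong read from t=3 back to t=0 transports V_3 to step 0.
rewards = np.array([0.0, 0.0, 0.0, 1.0])
values = np.array([0.2, 0.4, 0.8, 1.0])
read_weights = np.zeros((4, 4))
read_weights[3, 0] = 0.9
print(transport_value(rewards, values, read_weights))  # step 0 gains ~0.81
```

The augmented rewards would then feed a standard RL objective in place of the raw rewards; the design point is that memory retrieval itself, rather than temporal proximity, determines which past action receives credit.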