Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez, Laurent Charlin
arXiv e-Print archive, 2020
Keywords:
cs.AI, cs.LG
First published: 2020/03/12
Abstract: Learning from non-stationary data remains a great challenge for machine
learning. Continual learning addresses this problem in scenarios where the
learning agent faces a stream of changing tasks. In these scenarios, the agent
is expected to retain its highest performance on previous tasks without
revisiting them while adapting well to the new tasks. Two recent
continual-learning scenarios have been proposed. In meta-continual learning,
the model is pre-trained to minimize catastrophic forgetting when trained on a
sequence of tasks. In continual-meta learning, the goal is faster remembering,
i.e., focusing on how quickly the agent recovers performance rather than
measuring the agent's performance without any adaptation. Both scenarios have
the potential to propel the field forward. Yet in their original formulations,
they each have limitations. As a remedy, we propose a more general scenario
where an agent must quickly solve (new) out-of-distribution tasks, while also
requiring fast remembering. We show that current continual learning, meta
learning, meta-continual learning, and continual-meta learning techniques fail
in this new scenario. Accordingly, we propose a strong baseline:
Continual-MAML, an online extension of the popular MAML algorithm. Our
experiments show that this method is better suited to the new scenario than
the methodologies mentioned above, as well as standard continual-learning
and meta-learning approaches.
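
To make the idea concrete, below is a minimal sketch of what an online MAML-style baseline could look like. This is not the paper's exact Continual-MAML procedure: it assumes a PyTorch model, substitutes a Reptile-style first-order consolidation for a second-order MAML outer update, and uses a hypothetical loss-based `shift_threshold` as a stand-in for the paper's task-shift detection. The function names (`inner_adapt`, `online_maml_loop`) and all hyperparameters are illustrative.

```python
import copy
import torch

def inner_adapt(model, loss_fn, batch, lr_inner=0.01, steps=1):
    """Clone the model and take a few SGD steps on the current batch
    (fast adaptation; first-order, no second-order terms are kept)."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr_inner)
    x, y = batch
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    return adapted

def online_maml_loop(model, stream, loss_fn, lr_outer=0.1, shift_threshold=2.0):
    """Online loop over a non-stationary stream: adapt to each incoming batch,
    and consolidate the adapted weights into the meta-parameters when the
    pre-adaptation loss suggests the data distribution has shifted
    (hypothetical detection rule)."""
    for x, y in stream:
        with torch.no_grad():
            pre_loss = loss_fn(model(x), y).item()  # loss before adapting
        adapted = inner_adapt(model, loss_fn, (x, y))
        if pre_loss > shift_threshold:
            # Reptile-style first-order consolidation: move the meta-parameters
            # a small step toward the adapted solution.
            with torch.no_grad():
                for p, q in zip(model.parameters(), adapted.parameters()):
                    p.add_(lr_outer * (q - p))
```

In this reading, the inner loop supplies the fast adaptation (and fast remembering) the scenario calls for, while the occasional consolidation step is where knowledge accumulation across tasks would take place.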