First published: 2018/05/24

Abstract: Continual learning experiments used in current deep learning papers do not
faithfully assess the fundamental challenges of learning continually, instead masking
the weak points of the suggested approaches. We study gaps in these existing
evaluations, propose essential experimental evaluations that are more
representative of continual learning's challenges, and suggest a
re-prioritization of research efforts in the field. We show that current
approaches fail under our new evaluations and, to analyse these failures, we
propose a variational loss which unifies many existing solutions to continual
learning under a Bayesian framing, categorizing them as either 'prior-focused' or
'likelihood-focused'. We show that while prior-focused approaches such as EWC
and VCL perform well on existing evaluations, they perform dramatically worse
than likelihood-focused approaches on other simple tasks.
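As an illustrative sketch of this distinction (in assumed notation, not necessarily the paper's exact loss), let q_t(\theta) be the approximate posterior after task t with data \mathcal{D}_t. A prior-focused objective reuses the previous approximate posterior as the prior, while a likelihood-focused objective keeps a fixed prior and retains (or approximates, e.g. via replay) the likelihood terms of earlier tasks:

\[
\mathcal{L}^{\text{prior}}_t(q_t) = \mathbb{E}_{q_t(\theta)}\left[\log p(\mathcal{D}_t \mid \theta)\right] - \mathrm{KL}\left(q_t(\theta) \,\|\, q_{t-1}(\theta)\right)
\]
\[
\mathcal{L}^{\text{like}}_t(q_t) = \sum_{i=1}^{t} \mathbb{E}_{q_t(\theta)}\left[\log p(\mathcal{D}_i \mid \theta)\right] - \mathrm{KL}\left(q_t(\theta) \,\|\, p(\theta)\right)
\]

Under this reading, EWC and VCL behave roughly like the first form (the old posterior plays the role of the prior), whereas replay-style methods behave like the second (old likelihoods are approximated from stored or generated data).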