Differentiable plasticity: training plastic neural networks with backpropagation
Thomas Miconi, Jeff Clune, and Kenneth O. Stanley
arXiv e-Print archive, 2018
Keywords:
cs.NE, cs.LG, stat.ML
First published: 2018/04/06
Abstract: How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1,000+ pixel) natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task. Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead. Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task. We conclude that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.
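
The central mechanism described in the abstract is that each connection combines a fixed weight with a Hebbian trace scaled by a learned per-connection plasticity coefficient, and both are trained jointly by backpropagation. Below is a minimal PyTorch sketch of that idea, assuming the usual Hebbian-trace formulation (effective weight = w + alpha * Hebb, with the trace decayed by a learned rate eta); the names PlasticRNN, alpha, hebb, and eta are illustrative and not taken from the authors' released code.

    # Minimal sketch (illustrative, not the authors' code): one plastic recurrent layer.
    import torch
    import torch.nn as nn

    class PlasticRNN(nn.Module):
        def __init__(self, size):
            super().__init__()
            self.w = nn.Parameter(0.01 * torch.randn(size, size))      # fixed (slow) weights
            self.alpha = nn.Parameter(0.01 * torch.randn(size, size))  # per-connection plasticity coefficients
            self.eta = nn.Parameter(torch.tensor(0.01))                # learned decay/learning rate of the trace

        def forward(self, x, h, hebb):
            # Effective weight = fixed weight + plasticity coefficient * Hebbian trace
            h_new = torch.tanh(x + h @ (self.w + self.alpha * hebb))
            # Hebbian trace updated from pre-/post-synaptic activity (outer product)
            hebb = (1 - self.eta) * hebb + self.eta * torch.outer(h, h_new)
            return h_new, hebb

    # Training-loop sketch: gradients of the task loss flow through the plastic
    # updates, so w, alpha, and eta are all optimized by gradient descent (BPTT).
    net = PlasticRNN(64)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    target = torch.randn(64).clamp(-1, 1)          # toy "pattern" to memorize and reconstruct
    h, hebb = torch.zeros(64), torch.zeros(64, 64)
    for t in range(20):                            # present the pattern, then a blank cue
        x = target if t < 10 else torch.zeros(64)
        h, hebb = net(x, h, hebb)
    loss = ((h - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()                                # backpropagate through the plastic updates
    opt.step()

The design point the paper makes is visible here: the Hebbian trace does the fast, within-episode learning, while gradient descent only shapes how plastic each connection is, so the parameter overhead over a non-plastic network is roughly one extra coefficient per connection plus a scalar.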