First published: 2016/06/20

Abstract: While gradient descent has proven highly successful in learning connection
weights for neural networks, the actual structure of these networks is usually
determined by hand, or by other optimization algorithms. Here we describe a
simple method to make network structure differentiable, and therefore
accessible to gradient descent. We test this method on recurrent neural
networks applied to simple sequence prediction problems. Starting with initial
networks containing only one node, the method automatically builds networks
that successfully solve the tasks. The number of nodes in the final network
correlates with task difficulty. The method can dynamically increase network
size in response to an abrupt complexification in the task; however, reduction
in network size in response to task simplification is not evident for
reasonable meta-parameters. The method does not penalize network performance
for these test tasks: variable-size networks actually reach better performance
than fixed-size networks of higher, lower or identical size. We conclude by
discussing how this method could be applied to more complex networks, such as
feedforward layered networks, or multiple-area networks of arbitrary shape.
The paper describes a procedure for topology pruning: an L1 penalty drives weights towards zero, and a threshold deletes those that become small enough altogether.
It is similar to Optimal Brain Damage (LeCun et al., 1989) and [Optimal Brain Surgeon](http://ee.caltech.edu/Babak/pubs/conferences/00298572.pdf).
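
To make the general pattern concrete, below is a minimal PyTorch sketch of "L1 penalty plus hard threshold" pruning as summarized above. The model, toy data, and the `gate`, `L1_COEF`, and `THRESHOLD` names are illustrative assumptions, not the paper's implementation; in particular, the paper works with recurrent networks and also grows new nodes, which this sketch omits.

```python
# Hypothetical sketch (not the authors' code): an L1 penalty on per-node
# multipliers, plus a hard threshold below which a node is treated as deleted.
import torch
import torch.nn as nn

torch.manual_seed(0)
HIDDEN, L1_COEF, THRESHOLD = 16, 1e-3, 0.05

class GatedNet(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)
        # One multiplier per hidden node; its magnitude decides whether
        # the node effectively participates in the network.
        self.gate = nn.Parameter(torch.ones(n_hidden))

    def forward(self, x):
        h = torch.tanh(self.fc1(x)) * self.gate  # gated hidden activity
        return self.fc2(h)

net = GatedNet(4, HIDDEN, 1)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
x, y = torch.randn(64, 4), torch.randn(64, 1)  # toy regression data

for step in range(200):
    # Task loss plus L1 penalty that pushes the multipliers towards zero.
    loss = ((net(x) - y) ** 2).mean() + L1_COEF * net.gate.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Delete" nodes whose multiplier fell below the threshold.
with torch.no_grad():
    keep = net.gate.abs() >= THRESHOLD
    net.gate[~keep] = 0.0
print(f"active hidden nodes: {int(keep.sum())} / {HIDDEN}")
```

Because the multipliers are ordinary parameters in the loss, the effective network size is shaped by the same gradient descent that trains the connection weights, which is the sense in which the structure becomes "differentiable".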