First published: 2016/05/23

Abstract: Deep residual networks were shown to be able to scale up to thousands of
layers and still have improving performance. However, each fraction of a
percent of improved accuracy costs nearly doubling the number of layers, and so
training very deep residual networks has a problem of diminishing feature
reuse, which makes these networks very slow to train. To tackle these problems,
in this paper we conduct a detailed experimental study on the architecture of
ResNet blocks, based on which we propose a novel architecture where we decrease
the depth and increase the width of residual networks. We call the resulting network
structures wide residual networks (WRNs) and show that they are far superior
to their commonly used thin and very deep counterparts. For example, we
demonstrate that even a simple 16-layer-deep wide residual network outperforms
all previous deep residual networks in accuracy and efficiency, including
thousand-layer-deep networks, achieving new state-of-the-art results on
CIFAR-10, CIFAR-100 and SVHN.
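
To make the "decrease depth, increase width" idea concrete, here is a minimal sketch of a widened residual block in PyTorch. It assumes a pre-activation block of two 3x3 convolutions whose channel count is multiplied by a widening factor k; the names `WideBasicBlock` and `widen_factor` are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WideBasicBlock(nn.Module):
    """Pre-activation residual block whose channel count is scaled by a widening factor."""
    def __init__(self, in_planes, planes, stride=1, dropout_rate=0.0):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1,
                               padding=1, bias=False)
        self.dropout = nn.Dropout(p=dropout_rate)
        # 1x1 projection on the shortcut when the shape changes, otherwise identity.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Conv2d(in_planes, planes, kernel_size=1,
                                      stride=stride, bias=False)

    def forward(self, x):
        out = self.conv1(F.relu(self.bn1(x)))
        out = self.dropout(out)
        out = self.conv2(F.relu(self.bn2(out)))
        return out + self.shortcut(x)

# Widening: instead of stacking more blocks, multiply the per-stage channel
# counts of a CIFAR-style ResNet (16, 32, 64) by a factor k, e.g. k = 8.
widen_factor = 8
block = WideBasicBlock(in_planes=16, planes=16 * widen_factor, stride=1)
y = block(torch.randn(2, 16, 32, 32))  # -> shape (2, 128, 32, 32)
```

The depth/width trade-off then amounts to using fewer such blocks per stage while giving each block k times more channels, rather than stacking hundreds of narrow blocks.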