Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution
Yunpeng Chen, Haoqi Fan, Bing Xu, Zhicheng Yan, Yannis Kalantidis, Marcus Rohrbach, Shuicheng Yan, Jiashi Feng
arXiv e-Print archive, 2019
Keywords: cs.CV
First published: 2019/04/10
Abstract: In natural images, information is conveyed at different frequencies, where
higher frequencies are usually encoded with fine details and lower frequencies
are usually encoded with global structures. Similarly, the output feature maps
of a convolution layer can also be seen as a mixture of information at
different frequencies. In this work, we propose to factorize the mixed feature
maps by their frequencies and design a novel Octave Convolution (OctConv)
operation to store and process feature maps that vary spatially "slower" at a
lower spatial resolution, reducing both memory and computation cost. Unlike
existing multi-scale methods, OctConv is formulated as a single, generic,
plug-and-play convolutional unit that can be used as a direct replacement of
(vanilla) convolutions without any adjustments in the network architecture. It
is also orthogonal and complementary to methods that suggest better topologies
or reduce channel-wise redundancy like group or depth-wise convolutions. We
experimentally show that by simply replacing convolutions with OctConv, we can
consistently boost accuracy for both image and video recognition tasks, while
reducing memory and computational cost. An OctConv-equipped ResNet-152 can
achieve 82.9% top-1 classification accuracy on ImageNet with merely 22.2
GFLOPs.
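The claimed savings can be roughly estimated. With a fraction α of channels stored at half spatial resolution, an OctConv layer splits into four convolution paths (high→high, high→low, low→high, low→low), and the three paths touching the low-frequency maps operate on quartered spatial area. The sketch below is a back-of-envelope estimate under that simplifying assumption (same channel count and kernel size as the vanilla convolution; the exact accounting in the paper may differ):

```python
def octconv_flops_ratio(alpha: float) -> float:
    """Approximate FLOPs of an OctConv layer relative to a vanilla
    convolution with the same channel count and kernel size.

    alpha: fraction of channels stored at half spatial resolution
    (alpha = 0 recovers a vanilla convolution).

    Simplifying assumption: the three paths that touch low-frequency
    maps (H->L, L->H, L->L) run at the halved resolution, so their
    spatial cost is 1/4 of the full-resolution cost.
    """
    high = 1.0 - alpha
    full_res = high * high                                   # H->H at full resolution
    quarter_res = (2 * high * alpha + alpha * alpha) / 4.0   # H->L, L->H, L->L
    return full_res + quarter_res

# With half the channels at low resolution, the layer needs roughly
# 44% of a vanilla convolution's multiply-adds:
print(round(octconv_flops_ratio(0.5), 4))  # 0.4375
print(octconv_flops_ratio(0.0))            # 1.0 (vanilla conv)
```

This monotone trade-off (larger α, cheaper layer) is what lets an OctConv-equipped network cut FLOPs while keeping the architecture otherwise unchanged; accuracy, of course, depends on how much low-frequency capacity the task tolerates.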