First published: 2017/11/29

Abstract: Deep convolutional networks have become a popular tool for image generation
and restoration. Generally, their excellent performance is attributed to their
ability to learn realistic image priors from a large number of example images.
In this paper, we show that, on the contrary, the structure of a generator
network is sufficient to capture a great deal of low-level image statistics
prior to any learning. To this end, we demonstrate that a randomly-initialized
neural network can be used as a handcrafted prior with excellent results in
standard inverse problems such as denoising, super-resolution, and inpainting.
Furthermore, the same prior can be used to invert deep neural representations
to diagnose them, and to restore images based on flash/no-flash input pairs.
Apart from its diverse applications, our approach highlights the inductive
bias captured by standard generator network architectures. It also bridges the
gap between two very popular families of image restoration methods:
learning-based methods using deep convolutional networks and learning-free
methods based on handcrafted image priors such as self-similarity. Code and
supplementary material are available at
https://dmitryulyanov.github.io/deep_image_prior.
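
To make the core idea concrete, here is a minimal denoising sketch, assuming PyTorch. The tiny network, input-code size, and step count are illustrative placeholders, not the authors' actual encoder-decoder architecture or training schedule.

```python
import torch
import torch.nn as nn

def dip_denoise(noisy, num_steps=1800, lr=0.01):
    """Fit a randomly-initialized ConvNet to a single noisy image.

    noisy: tensor of shape (1, C, H, W) with values in [0, 1].
    No training data is used: the only regularization is the network
    structure itself plus early stopping, so the output fits natural
    image structure before it starts to fit the noise.
    """
    c = noisy.shape[1]
    # Small stand-in generator; the paper uses a deeper
    # encoder-decoder with skip connections.
    net = nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(),
        nn.Conv2d(64, c, 3, padding=1), nn.Sigmoid(),
    )
    # Fixed random code z; only the network weights are optimized.
    z = torch.randn(1, 32, noisy.shape[2], noisy.shape[3])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(num_steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()  # data term only, no learned prior
        loss.backward()
        opt.step()
    return net(z).detach()
```

The same recipe covers the other inverse problems mentioned above by swapping the data term: for super-resolution, compare a downsampled network output against the low-resolution image; for inpainting, compute the loss only over the known pixels.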