Image Transformer
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Łukasz Kaiser, Noam Shazeer, Alexander Ku, Dustin Tran
arXiv e-Print archive, 2018
Keywords: cs.CV
First published: 2018/02/15

Abstract: Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the self-attention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.
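The central idea, restricting self-attention to local blocks of the flattened pixel sequence so that cost scales with sequence length times block size rather than the square of the sequence length, can be sketched as below. This is a minimal illustration, not the authors' implementation: the paper uses multi-head 1D and 2D local attention with query and memory blocks plus learned embeddings, whereas this sketch shows only single-head, block-local causal attention in PyTorch, and all names and sizes (local_causal_attention, block_size, d_model) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def local_causal_attention(x, wq, wk, wv, block_size=64):
    """x: (seq_len, d_model), an image flattened to a pixel sequence.
    Each position attends only to earlier positions inside its own
    contiguous block, so memory grows with seq_len * block_size
    instead of seq_len ** 2."""
    seq_len, d = x.shape
    q, k, v = x @ wq, x @ wk, x @ wv
    out = torch.empty_like(v)
    for start in range(0, seq_len, block_size):
        end = min(start + block_size, seq_len)
        # attention scores within this local block only
        scores = (q[start:end] @ k[start:end].T) / d ** 0.5
        # causal mask: a pixel may not attend to pixels generated after it
        mask = torch.triu(torch.ones(end - start, end - start, dtype=torch.bool), 1)
        scores = scores.masked_fill(mask, float("-inf"))
        out[start:end] = F.softmax(scores, dim=-1) @ v[start:end]
    return out

d_model = 32
x = torch.randn(256, d_model)              # e.g. a flattened 16x16 image
wq, wk, wv = (torch.randn(d_model, d_model) for _ in range(3))
y = local_causal_attention(x, wq, wk, wv)  # -> shape (256, 32)
```

Because each block's score matrix is block_size by block_size, enlarging the image lengthens the outer loop without growing any single attention computation, which is how the model processes larger images in practice while each layer still sees a wider neighborhood than a typical convolutional kernel.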