First published: 2016/12/06

Abstract: We develop a probabilistic framework for deep learning based on the Deep
Rendering Mixture Model (DRMM), a new generative probabilistic model that
explicitly captures variations in data due to latent task nuisance variables. We
demonstrate that max-sum inference in the DRMM yields an algorithm that exactly
reproduces the operations in deep convolutional neural networks (DCNs),
providing a first-principles derivation. Our framework provides new insights
into the successes and shortcomings of DCNs as well as a principled route to
their improvement. DRMM training via the Expectation-Maximization (EM)
algorithm is a powerful alternative to DCN back-propagation, and initial
training results are promising. Classification based on the DRMM and other
variants outperforms DCNs in supervised digit classification, training 2-3x
faster while achieving similar accuracy. Moreover, the DRMM is applicable to
semi-supervised and unsupervised learning tasks, achieving results that are
state-of-the-art in several categories on the MNIST benchmark and comparable to
the state of the art on the CIFAR10 benchmark.
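The abstract's central claim, that max-sum inference in a rendering mixture model reproduces DCN operations, can be illustrated with a toy single-layer sketch. The code below is an assumption-laden caricature, not the paper's full hierarchical DRMM: it only shows how, with a translation nuisance and an ON/OFF switching nuisance, the max-sum latent-variable scores become a correlation (one tap of a convolution), a ReLU, and a max-pool. All names, shapes, and the 2x2 pooling window are illustrative choices.

```python
import numpy as np

def drmm_max_sum_layer(x, templates, biases):
    """Toy max-sum inference in a simplified rendering mixture model.

    x         : (H, W) input image or feature map
    templates : (K, h, w) rendering templates (playing the role of conv filters)
    biases    : (K,) per-template log-prior terms

    Hypothetical sketch: the real DRMM stacks many such layers with
    additional latent structure.
    """
    H, W = x.shape
    K, h, w = templates.shape
    out = np.zeros((K, H - h + 1, W - w + 1))
    for k in range(K):
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                # Log-score for rendering template k at location (i, j):
                # an inner product, i.e. one tap of a correlation/convolution.
                score = np.sum(templates[k] * x[i:i+h, j:j+w]) + biases[k]
                # Max-sum over the ON/OFF switching nuisance a in {0, 1}:
                # max(score, 0) -- exactly a ReLU.
                out[k, i, j] = max(score, 0.0)
    # Max-sum over the local translation nuisance within 2x2 blocks:
    # exactly max-pooling.
    Ho, Wo = out.shape[1] // 2, out.shape[2] // 2
    pooled = out[:, :Ho * 2, :Wo * 2].reshape(K, Ho, 2, Wo, 2).max(axis=(2, 4))
    return pooled

# Example usage with arbitrary shapes:
# x = np.random.randn(28, 28); templates = np.random.randn(8, 5, 5)
# drmm_max_sum_layer(x, templates, np.zeros(8)).shape  # -> (8, 12, 12)
```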
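Likewise, the EM alternative to back-propagation mentioned above can be caricatured as hard EM on a template mixture: the E-step runs the same max-style inference to pick each sample's best latent configuration, and the M-step refits the templates to their assigned data. This is a loose sketch under simplified assumptions (one patch per sample, a single layer, hard assignments), not the paper's derived training algorithm.

```python
import numpy as np

def drmm_hard_em_step(X, templates, biases):
    """One hard-EM update for mixture templates (illustrative sketch only).

    X         : (N, h, w) data patches, one per sample
    templates : (K, h, w) current templates
    biases    : (K,) per-template log-prior terms
    """
    N = X.shape[0]
    K = templates.shape[0]
    # E-step: max-style inference assigns each patch its best template.
    scores = X.reshape(N, -1) @ templates.reshape(K, -1).T + biases  # (N, K)
    assign = scores.argmax(axis=1)
    # M-step: each template becomes the mean of its assigned patches.
    new_templates = templates.copy()
    for k in range(K):
        members = X[assign == k]
        if len(members) > 0:
            new_templates[k] = members.mean(axis=0)
    return new_templates
```

Because the E-step reuses the inference pass, each EM iteration costs roughly one forward pass plus an averaging step, which is one intuition (under these simplified assumptions) for why such training can be cheap relative to back-propagation.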