#### Problem addressed:
Training a specific generative model under milder unsupervised assumptions
#### Summary:
This paper implements an attention-based scheme for learning generative models, which makes unsupervised learning more applicable in practice. In the common unsupervised setting, one assumes that the data already comes in the desired canonical format and can be used directly, which can be a very strong assumption. This work demonstrates the idea on a specific application: training face models. A canonical low-resolution face model captures the object of interest, together with a search scheme resembling visual attention that locates the face region in a high-resolution image. The whole scheme is formalized as a full probabilistic model, so attention is implemented as inference in that model. The generative model itself is built from RBMs. For inference they employ hybrid (Hamiltonian) Monte Carlo; as with all MCMC methods, it is hard for the sampler to move between modes separated by low-density regions. To overcome this, they use a ConvNet to propose moves and then run HMC from the ConvNet-initialized states, so the full system remains probabilistic. The demonstrated results are quite interesting.
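To make the ConvNet-plus-HMC idea concrete, here is a minimal NumPy sketch of HMC refinement starting from a ConvNet-proposed state. This is an illustration, not the authors' code: `convnet_propose`, `log_p`, and `grad_log_p` are hypothetical stand-ins for the trained proposal network and the model's (unnormalized) log-density over the attention/transform variables.

```python
import numpy as np

def hmc_step(x, log_p, grad_log_p, step_size=0.05, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo step targeting log_p.

    x: current state (e.g., attention/transform parameters).
    Returns the next state (accepted proposal or the old state).
    """
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(x.shape)          # sample momentum
    x_new, p = x.copy(), p0.copy()

    # Leapfrog integration of the Hamiltonian dynamics.
    p = p + 0.5 * step_size * grad_log_p(x_new)
    for _ in range(n_leapfrog - 1):
        x_new = x_new + step_size * p
        p = p + step_size * grad_log_p(x_new)
    x_new = x_new + step_size * p
    p = p + 0.5 * step_size * grad_log_p(x_new)

    # Metropolis accept/reject keeps the chain exact,
    # so the overall system stays a valid sampler.
    h_old = -log_p(x) + 0.5 * p0 @ p0
    h_new = -log_p(x_new) + 0.5 * p @ p
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

def infer_attention(image, convnet_propose, log_p, grad_log_p, n_steps=100):
    """ConvNet proposes a state near the right mode; HMC refines it locally."""
    x = convnet_propose(image)                 # big jump across low-density gaps
    for _ in range(n_steps):
        x = hmc_step(x, log_p, grad_log_p)     # exact local sampling
    return x
```

The division of labor mirrors the summary above: the ConvNet handles the global jumps between modes that HMC alone cannot make, while the Metropolis correction preserves the probabilistic semantics of the inference.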
#### Novelty:
Implementing visual attention as probabilistic inference in an RBM-based generative model.
#### Drawbacks:
The inference requires a good ConvNet for initialization, and mixing of the Markov chain appears to be a significant problem.
#### Datasets:
Caltech and CMU face datasets
#### Resources:
Olshausen, Anderson & Van Essen (1993), "A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information" — the original visual attention paper this work builds on.
#### Presenter:
Yingbo Zhou