This paper presents a generative model for natural image patches that accounts for occlusions and the translation invariance of features. The model consists of a set of masks and a set of features, each of which can be translated throughout the patch. Given a set of translations for the masks and features, the patch is generated by sampling conditionally independent Gaussian noise. An inference framework for the parameters is proposed and is demonstrated on synthetic data with convincing results. Additionally, experiments on natural image patches show that the method learns a set of masks and features for natural images. When combined, the resulting receptive fields look mostly like Gabors, but some of them have globular structure.
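The generative process described above can be sketched roughly as follows. This is a minimal illustrative reconstruction, not the paper's actual model: the patch size, number of components, cyclic translations, front-to-back occlusion ordering, and all parameter values are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

P = 8   # patch side length (assumed)
K = 3   # number of mask/feature pairs (assumed)

# Hypothetical "learned" parameters: one binary occlusion mask
# and one feature (pixel-mean template) per component.
masks = rng.random((K, P, P)) > 0.5
features = rng.normal(size=(K, P, P))
sigma = 0.1  # observation noise standard deviation (assumed)

def translate(img, dx, dy):
    """Cyclic shift; a stand-in for the paper's translations."""
    return np.roll(np.roll(img, dx, axis=0), dy, axis=1)

def sample_patch(shifts):
    """Generate a patch: going front to back, each translated mask
    claims the pixels not yet occluded, the corresponding translated
    feature supplies those pixels' means, and conditionally
    independent Gaussian noise is added on top."""
    mean = np.zeros((P, P))
    assigned = np.zeros((P, P), dtype=bool)
    for k, (dx, dy) in enumerate(shifts):
        m = translate(masks[k], dx, dy) & ~assigned
        mean[m] = translate(features[k], dx, dy)[m]
        assigned |= m
    return mean + sigma * rng.normal(size=(P, P))

patch = sample_patch([(1, 0), (0, 2), (3, 3)])
print(patch.shape)  # (8, 8)
```

Inference would then invert this process, estimating the masks, features, and per-patch translations from data; the sketch only shows the forward sampling step.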