Optimizing the Latent Space of Generative Networks
Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam. arXiv e-Print archive, 2017.
This paper proposes an algorithm named GLO (Generative Latent Optimization). Its objective function is:
$$\min_{\theta}\frac{1}{N}\sum_{i=1}^N\left[\min_{z_i}\ell(g^{\theta}(z_i),x_i)\right]$$
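A minimal sketch of this joint optimization in PyTorch, assuming a toy MLP generator, random stand-in data, and full-batch SGD (the paper uses a convolutional generator, real images, and a Laplacian pyramid term in the loss; all names here are illustrative):

```python
import torch
import torch.nn as nn

# Toy MLP generator standing in for the paper's convolutional g_theta.
class Generator(nn.Module):
    def __init__(self, d=100, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

N, d = 1000, 100
x = torch.randn(N, 784)                    # stand-in dataset
z = torch.randn(N, d, requires_grad=True)  # one learnable code z_i per x_i
g = Generator(d)
opt = torch.optim.SGD(list(g.parameters()) + [z], lr=0.1)
loss_fn = nn.MSELoss()                     # the l2 loss l(g(z_i), x_i)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(g(z), x)  # minimize over theta and all z_i jointly
    loss.backward()
    opt.step()
    with torch.no_grad():    # project each z_i back onto the unit ball
        z.div_(z.norm(dim=1, keepdim=True).clamp(min=1.0))
```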
This idea dates back to [Dictionary Learning](https://en.wikipedia.org/wiki/Sparse_dictionary_learning), whose objective is
$$\min_{D,\{r_i\}}\sum_{i=1}^N \|x_i - D r_i\|_2^2 + \lambda \|r_i\|_0$$
GLO can be viewed as a nonlinear version of dictionary learning (see the sketch after this list), obtained by:
1. replacing the dictionary $D$ with the function $g^{\theta}$;
2. replacing the code $r$ with the latent vector $z$;
3. using an $\ell_2$ loss function.
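To make the correspondence concrete, here is a small NumPy illustration: with a (hypothetical) linear generator $g^{\theta}(z) = Dz$, GLO's reconstruction reduces exactly to dictionary learning's, with $r$ playing the role of $z$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(784)         # one data point
D = rng.standard_normal((784, 100))  # dictionary
r = rng.standard_normal(100)         # code for x

x_hat_dict = D @ r                   # dictionary learning: linear decoder

g_theta = lambda z: D @ z            # GLO with a linear generator
x_hat_glo = g_theta(r)               # same reconstruction, r acting as z

assert np.allclose(x_hat_dict, x_hat_glo)
```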
Learning the generator this way avoids the instabilities caused by the GAN objective, but it introduces problems of its own.
With this method, the space of the latent vectors $z$ is not constrained to follow any prior distribution. Although the authors project $z$ back onto the unit ball whenever it falls outside, there is no guarantee that the optimized codes remain distributed like the Gaussian they were initialized from.
As a consequence, not every sampled noise vector decodes to a valid image, and linear interpolation between codes can be problematic if the support of the marginal $p(z)$ is not a convex set.
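A small illustration of the convexity concern (an assumption-driven sketch, not an experiment from the paper): if two optimized codes end up on the unit sphere after projection, their linear interpolation midpoint lies well inside the ball, in a region that may contain no trained codes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two codes that ended up unit-norm after projection onto the unit ball.
z1 = rng.standard_normal(100); z1 /= np.linalg.norm(z1)
z2 = rng.standard_normal(100); z2 /= np.linalg.norm(z2)

# In high dimension, random unit vectors are nearly orthogonal, so the
# midpoint's norm is about sqrt(2)/2, far from where the codes live.
mid = 0.5 * (z1 + z2)
print(np.linalg.norm(mid))  # ~0.71
```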