Estimating or Propagating Gradients Through Stochastic Neurons
Yoshua Bengio
arXiv e-Print archive - 2013
Keywords:
cs.LG
First published: 2013/05/14
Abstract: Stochastic neurons can be useful for a number of reasons in deep learning
models, but in many cases they pose a challenging problem: how to estimate the
gradient of a loss function with respect to the input of such stochastic
neurons, i.e., can we "back-propagate" through these stochastic neurons? We
examine this question, existing approaches, and present two novel families of
solutions, applicable in different settings. In particular, it is demonstrated
that a simple biologically plausible formula gives rise to an unbiased (but
noisy) estimator of the gradient with respect to a binary stochastic neuron
firing probability. Unlike other estimators which view the noise as a small
perturbation in order to estimate gradients by finite differences, this
estimator is unbiased even without assuming that the stochastic perturbation is
small. This estimator is also interesting because it can be applied in very
general settings which do not allow gradient back-propagation, including the
estimation of the gradient with respect to future rewards, as required in
reinforcement learning setups. We also propose an approach to approximating
this unbiased but high-variance estimator by learning to predict it using a
biased estimator. The second approach we propose assumes that an estimator of
the gradient can be back-propagated and it provides an unbiased estimator of
the gradient, but it only works with non-linearities that, unlike the hard
threshold but like the rectifier, are not flat over their entire range. This is
similar to traditional sigmoidal units but has the advantage that for many
inputs, a hard decision (e.g., a 0 output) can be produced, which would be
convenient for conditional computation and achieving sparse representations and
sparse gradients.
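To make the unbiased estimator concrete, here is a minimal NumPy sketch for a single binary stochastic neuron. The per-sample form g = (h - p) * L(h) / (p * (1 - p)) is a standard REINFORCE-style estimator consistent with the abstract's description; the paper's exact formula and variance-reduction terms may differ, so treat the names and form below as illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def unbiased_grad_estimate(p, loss_fn, n_samples=100_000):
    """Monte-Carlo check of an unbiased gradient estimator for h ~ Bernoulli(p).

    Per-sample estimator: g = (h - p) * loss(h) / (p * (1 - p)).
    Since E[loss(h)] = p*loss(1) + (1-p)*loss(0), the true gradient is
    d E[loss] / dp = loss(1) - loss(0), and E[g] matches it exactly.
    """
    h = rng.binomial(1, p, size=n_samples).astype(float)
    g = (h - p) * loss_fn(h) / (p * (1.0 - p))
    return g.mean(), g.std()

loss = lambda h: (h - 1.0) ** 2  # toy loss: prefers the neuron to fire
mean, std = unbiased_grad_estimate(0.3, loss)
print(f"estimate {mean:.3f} vs exact {loss(1.0) - loss(0.0):.3f}; per-sample std {std:.3f}")
```

The per-sample standard deviation is large relative to the gradient itself, which illustrates the high variance noted under Drawbacks below.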
#### Problem addressed:
Gradient estimation for stochastic neurons
#### Summary:
This paper proposes an unbiased estimator of the gradient with respect to the firing probability of binary stochastic units, so that gradient-based learning can be applied to them. In addition, it proposes a simple, biased estimator called the straight-through estimator (see the sketch below).
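The straight-through idea is to sample a hard 0/1 value in the forward pass but back-propagate as if the sampling were the identity. A minimal sketch, assuming a modern autograd framework (PyTorch here; the helper name is illustrative, and the detach trick is a common implementation of the idea rather than code from the paper):

```python
import torch

def binary_stochastic_st(p):
    """Forward: sample h ~ Bernoulli(p). Backward: treat dh/dp as 1
    (straight-through), giving a biased but low-variance gradient."""
    h = torch.bernoulli(p)
    # Value equals h, but autograd only sees the "+ p" term, so the gradient
    # flows to p as if the sampling were the identity.
    return (h - p).detach() + p

logits = torch.randn(4, requires_grad=True)
p = torch.sigmoid(logits)        # firing probabilities
h = binary_stochastic_st(p)      # hard 0/1 samples
loss = ((h - 1.0) ** 2).mean()   # toy objective: encourage firing
loss.backward()                  # logits.grad is populated despite the hard threshold
print(h, logits.grad)
```

The output is exactly 0 or 1, which is what makes this convenient for conditional computation and sparse representations, at the cost of a biased gradient.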
#### Novelty:
New estimators of the gradient for stochastic units: an unbiased but high-variance one, and a simple biased straight-through one.
#### Drawbacks:
The proposed unbiased estimator appears to have large variance, and the biased straight-through estimator does not seem to perform very well in practice.
#### Presenter:
Yingbo Zhou