[link]
### What is BN:
* Batch Normalization (BN) is a normalization method/layer for neural networks.
* Usually, inputs to neural networks are normalized either to the range [0, 1] or [-1, 1], or to mean=0 and variance=1. The latter is called *Whitening*.
* BN essentially applies this whitening to the intermediate layers of the network.
### How it's calculated:
* The basic formula is $x^* = (x - E[x]) / \sqrt{\text{var}(x)}$, where $x^*$ is the new value of a single component, $E[x]$ is its mean within a batch, and $\text{var}(x)$ is its variance within a batch.
* BN extends that formula further to $x^{**} = \gamma \cdot x^* + \beta$, where $x^{**}$ is the final normalized value. $\gamma$ and $\beta$ are learned per component (per feature map for convolutions). They make sure that BN can learn the identity function, which is needed in a few cases.
* For convolutions, every filter/kernel (i.e. each feature map) is normalized on its own (for linear layers: each neuron/node/component). That means that every generated value ("pixel") of a feature map is treated as an example. If we have a batch size of N and the image generated by the convolution has width P and height Q, we would calculate the mean (E) over `N*P*Q` values (same for the variance).
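
To make this concrete, here is a minimal NumPy sketch of the training-time computation described above; the function name, the small `eps` constant added for numerical stability, and the exact axis handling are illustrative choices, not taken verbatim from the paper:

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Normalize over the batch, then scale and shift with the learned gamma/beta.

    x: shape (N, D) for a linear layer, or (N, C, H, W) for a convolution.
    gamma, beta: shape (D,) or (C,) -- one pair per component / feature map.
    """
    if x.ndim == 2:
        axes = (0,)            # linear layer: statistics per component over N examples
        shape = (1, -1)
    else:
        axes = (0, 2, 3)       # convolution: statistics per feature map over N*P*Q values
        shape = (1, -1, 1, 1)
    mean = x.mean(axis=axes, keepdims=True)          # E[x] over the batch
    var = x.var(axis=axes, keepdims=True)            # var(x) over the batch
    x_star = (x - mean) / np.sqrt(var + eps)         # x* = (x - E[x]) / sqrt(var(x))
    return gamma.reshape(shape) * x_star + beta.reshape(shape)  # x** = gamma * x* + beta
```

At inference time, the per-batch mean and variance are replaced by fixed estimates (see the chapter notes on section 3.1 below).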
### Theoretical effects:
* BN reduces *Covariate Shift*, i.e. the change in the distribution of a component's activations. With BN, each neuron's activation becomes (more or less) a Gaussian distribution, i.e. it is usually not active, sometimes a bit active, and rarely very active.
* Covariate Shift is undesirable, because the later layers have to keep adapting to the change of the type of distribution (instead of just to new distribution parameters, e.g. new mean and variance values for gaussian distributions).
* BN reduces the effects of exploding and vanishing gradients, because every activation becomes roughly normally distributed. Without BN, low activations in one layer can lead to lower activations in the next layer, and then even lower ones in the layer after that, and so on.
### Practical effects:
* BN reduces training times. (Because of less Covariate Shift, less exploding/vanishing gradients.)
* BN reduces demand for regularization, e.g. dropout or L2 norm. (Because the means and variances are calculated over batches and therefore every normalized value depends on the current batch. I.e. the network can no longer just memorize values and their correct answers.)
* BN allows higher learning rates. (Because of less danger of exploding/vanishing gradients.)
* BN enables training with saturating nonlinearities in deep networks, e.g. sigmoid. (Because the normalization prevents them from getting stuck in saturating ranges, e.g. very high/low values for sigmoid.)

*BN applied to MNIST (a), and activations of a randomly selected neuron over time (b, c), where the middle line is the median activation, the top line is the 85th percentile and the bottom line is the 15th percentile.*
-------------------------
### Rough chapter-wise notes
* (2) Towards Reducing Covariate Shift
* Batch Normalization (*BN*) is a special normalization method for neural networks.
* In neural networks, the inputs to each layer depend on the outputs of all previous layers.
* The distributions of these outputs can change during the training. Such a change is called a *covariate shift*.
* If the form of the distributions stayed the same, training would be simpler: the later layers would only have to readjust to new distribution parameters (e.g. new mean and variance values for normal distributions).
* If using sigmoid activations, it can happen that one unit saturates (very high/low values). That is undesired as it leads to vanishing gradients for all units below in the network.
* BN fixes the means and variances of layer inputs to specific values (zero mean, unit variance).
* That accomplishes:
* No more covariate shift.
* Fixes problems with vanishing gradients due to saturation.
* Effects:
* Networks learn faster. (As they don't have to adjust to covariate shift any more.)
* Optimizes gradient flow in the network. (As the gradient becomes less dependent on the scale of the parameters and their initial values.)
* Higher learning rates are possible. (Optimized gradient flow reduces risk of divergence.)
* Saturating nonlinearities can be safely used. (Optimized gradient flow prevents the network from getting stuck in saturated modes.)
* BN reduces the need for dropout. (As it has a regularizing effect.)
* How BN works:
* BN normalizes layer inputs to zero mean and unit variance. That is called *whitening*.
* Naive method: Train on a batch. Update model parameters. Then normalize. Doesn't work: Leads to exploding biases while distribution parameters (mean, variance) don't change.
* A proper method has to include the current example *and* all previous examples in the normalization step.
* This leads to calculating the covariance matrix and its inverse square root. That's expensive. The authors found a faster way.
* (3) Normalization via Mini-Batch Statistics
* Each feature (component) is normalized individually. (Due to cost, differentiability.)
* Normalization according to: `componentNormalizedValue = (componentOldValue - E[component]) / sqrt(Var(component))`
* Normalizing each component can reduce the expressive power of the nonlinearities. Hence the formula is extended so that it can also learn the identity function.
* Full formula: `newValue = gamma * componentNormalizedValue + beta` (gamma and beta learned per component)
* E and Var are estimated for each mini batch.
* BN is fully differentiable. Formulas for gradients/backpropagation are at the end of chapter 3 (page 4, left).
* (3.1) Training and Inference with Batch-Normalized Networks
* During test time, E and Var of each component can be estimated using all examples or alternatively with moving averages estimated during training.
* During test time, the BN formulas can be simplified to a single linear transformation (a small sketch of this folding follows after these notes).
* (3.2) Batch-Normalized Convolutional Networks
* The authors recommend placing BN layers after linear/fully-connected layers and before the nonlinearities.
* They argue that the outputs of linear layers are more likely to have a symmetric, non-sparse distribution, i.e. one that is closer to a Gaussian.
* Placing BN after the nonlinearity would be less useful, because the output distribution of a nonlinearity is likely to change its shape during training, so constraining only its mean and variance would not eliminate the covariate shift.
* Learning a separate bias isn't necessary as BN's formula already contains a bias-like term (beta).
* For convolutions, they apply BN jointly to all activations of a feature map. That creates an effective batch size of `m*p*q`, where m is the number of examples in the batch and p, q are the feature map dimensions (height, width). For linear layers, the effective batch size is m.
* gamma and beta are then learned per feature map, not per single pixel. (Linear layers: Per neuron.)
* (3.3) Batch Normalization enables higher learning rates
* BN normalizes activations.
* Result: Changes to early layers don't amplify towards the end.
* BN makes it less likely to get stuck in the saturating parts of nonlinearities.
* BN makes training more resilient to parameter scales.
* Usually, large learning rates cannot be used as they tend to scale up parameters. Then any change to a parameter amplifies through the network and can lead to gradient explosions.
* With BN gradients actually go down as parameters increase. Therefore, higher learning rates can be used.
* The paper further conjectures that BN leads the layer Jacobians to have singular values close to 1, which preserves gradient magnitudes during backpropagation.
* (3.4) Batch Normalization regularizes the model
* Usually: Examples are seen on their own by the network.
* With BN: Examples are seen in conjunction with other examples (mean, variance).
* Result: Network can't easily memorize the examples any more.
* Effect: BN has a regularizing effect. Dropout can be removed or decreased in strength.
* (4) Experiments
* (4.1) Activations over time
* They tested BN on MNIST with a 100x100x10 network. (One network with BN before each nonlinearity, another network without BN for comparison.)
* Batch size was 60.
* The network with BN learned faster. Activations of neurons (their means and variances over several examples) seemed to be more consistent during training.
* Generalization of the BN network seemed to be better.
* (4.2) ImageNet classification
* They applied BN to the Inception network.
* Batch size was 32.
* During training they used (compared to the original Inception training) a higher learning rate with more decay, no dropout, less L2, no local response normalization and less distortion/augmentation.
* They shuffle the data during training (i.e. each batch contains different examples).
* Depending on the learning rate, they either achieve the same accuracy (as in the non-BN network) in 14 times fewer steps (5x learning rate) or a higher accuracy in 5 times fewer steps (30x learning rate).
* BN enables training of Inception networks with sigmoid units (still a bit lower accuracy than ReLU).
* An ensemble of 6 Inception networks with BN achieved better accuracy than the previously best network for ImageNet.
* (5) Conclusion
* BN is similar to a normalization layer suggested by Gülcehre and Bengio. However, they applied it to the outputs of nonlinearities.
* They also didn't have the beta and gamma parameters (i.e. their normalization could not learn the identity function).
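
As a small addendum to the note on (3.1) above: once the statistics are fixed at inference time, BN collapses into a single affine transformation. A minimal NumPy sketch (the names `running_mean`/`running_var` are illustrative):

```python
import numpy as np

def fold_batch_norm(gamma, beta, running_mean, running_var, eps=1e-5):
    """Collapse inference-time BN into a single affine transform y = a * x + b.

    gamma * (x - mean) / sqrt(var + eps) + beta  ==  a * x + b
    with a = gamma / sqrt(var + eps) and b = beta - a * mean.
    """
    a = gamma / np.sqrt(running_var + eps)
    b = beta - a * running_mean
    return a, b

# usage: y = a * x + b now reproduces the BN output with fixed statistics
```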
[link]
#### Introduction

The [paper](http://arxiv.org/pdf/1502.05698v10) presents a framework and a set of synthetic toy tasks (classified into skill sets) for analyzing the performance of different machine learning algorithms.

#### Tasks

* **Single/Two/Three Supporting Facts**: Questions where a single (or multiple) supporting fact(s) provide the answer. The more supporting facts are needed, the harder the task.
* **Two/Three Argument Relations**: Requires differentiation between objects and subjects.
* **Yes/No Questions**: True/False questions.
* **Counting/List/Set Questions**: Requires the ability to count or list objects having a certain property.
* **Simple Negation and Indefinite Knowledge**: Tests the ability to handle negation constructs and to model sentences that describe a possibility rather than a certainty.
* **Basic Coreference, Conjunctions, and Compound Coreference**: Requires the ability to handle different levels of coreference.
* **Time Reasoning**: Requires understanding the use of time expressions in sentences.
* **Basic Deduction and Induction**: Tests basic deduction and induction via inheritance of properties.
* **Position and Size Reasoning**
* **Path Finding**: Find the path between locations.
* **Agent's Motivation**: Why an agent performs an action, i.e. what is the state of the agent.

#### Dataset

* The dataset is available [here](https://research.facebook.com/research/-babi/) and the source code to generate the tasks is available [here](https://github.com/facebook/bAbI-tasks).
* The different tasks are independent of each other.
* For supervised training, the set of relevant statements is provided along with the questions and answers.
* The tasks are available in English, Hindi and shuffled English words.

#### Data Simulation

* The simulated world consists of entities of various types (locations, objects, persons, etc.) and of various actions that operate on these entities.
* These entities have their own internal state and follow certain rules as to how they interact with other entities.
* Basic simulations are of the form `<actor> <action> <object>`, e.g. "Bob go school".
* To add variation, synonyms are used for entities and actions.

#### Experiments

##### Methods

* N-gram classifier baseline
* LSTMs
* Memory Networks (MemNNs)
* Structured SVM incorporating externally labeled data

##### Extensions to Memory Networks

* **Adaptive Memories** - learn the number of hops to be performed instead of using the fixed value of 2 hops.
* **N-grams** - use a bag of 3-grams instead of a bag-of-words.
* **Nonlinearity** - apply a 2-layer neural network with *tanh* nonlinearity in the matching function.

##### Structured SVM

* Uses coreference resolution and semantic role labeling (SRL), which are themselves trained on a large amount of data.
* First trained with strong supervision to find the supporting statements; then a similar SVM is used to find the response.

##### Results

* Standard MemNNs outperform the N-gram and LSTM baselines but still fail on a number of tasks.
* MemNNs with adaptive memory improve the performance on the multiple-supporting-facts tasks and the basic induction task.
* MemNNs with N-gram modeling improve results where word order matters.
* MemNNs with nonlinearity perform well on the yes/no tasks and the indefinite knowledge task.
* The structured SVM outperforms vanilla MemNNs but is not as good as MemNNs with the modifications above.
* The structured SVM performs very well on the path finding task due to its non-greedy search approach.
[link]
FaceNet directly maps face images to $\mathbb{R}^{128}$, where distances correspond to a measure of face similarity. It is trained with a triplet loss. A triplet consists of (face of person A, other face of person A, face of a person who is not A), later called (anchor, positive, negative).
The loss function is inspired by LMNN. The idea is to minimize the distance between the two images of the same person and to maximize the distance to the other person's image.
## LMNN
Large Margin Nearest Neighbor (LMNN) is learning a pseudo-metric
$$d(x, y) = (x -y) M (x -y)^T$$
where $M$ is a positive semi-definite matrix. The only difference between a pseudo-metric and a metric is that $d(x, y) = 0 \Rightarrow x = y$ is not required, i.e. a pseudo-metric may assign distance 0 to distinct points.
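As a small illustration of such a pseudo-metric, here is a sketch that parameterizes $M = L^\top L$, which guarantees positive semi-definiteness (the names and toy values are made up for illustration):

```python
import numpy as np

def pseudo_metric(x, y, L):
    """d(x, y) = (x - y) M (x - y)^T with M = L^T L (positive semi-definite)."""
    diff = x - y
    M = L.T @ L
    return diff @ M @ diff

# toy usage: two 3-d points and a random linear map L
rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))
x, y = rng.normal(size=3), rng.normal(size=3)
print(pseudo_metric(x, y, L))   # always >= 0
print(pseudo_metric(x, x, L))   # 0 for identical points
```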
## Curriculum Learning: Triplet selection
Show simple examples first, then increase the difficulty. This is done by selecting the triplets.
They use the triplets which are *hard*. For the positive example, this means the distance between the anchor and the positive example is high. For the negative example this means the distance between the anchor and the negative example is low.
They want to have
$$||f(x_i^a) - f(x_i^p)||_2^2 + \alpha < ||f(x_i^a) - f(x_i^n)||_2^2$$
where $\alpha$ is a margin, $x_i^a$ is the anchor, $x_i^p$ is the positive face example and $x_i^n$ is the negative example. They increase $\alpha$ over time. It is crucial that $f$ does not map the images into all of $\mathbb{R}^{128}$, but onto the unit sphere; otherwise the margin would lose its meaning, since scaling $f$ (e.g. $f' = 2 \cdot f$) would scale all distances and make any fixed $\alpha$ easy to satisfy.
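Putting the pieces together, a minimal NumPy sketch of the triplet loss for a single (anchor, positive, negative) triple, with embeddings projected onto the unit sphere, could look as follows (the margin value and function names are illustrative):

```python
import numpy as np

def l2_normalize(v):
    """Project an embedding onto the unit sphere."""
    return v / np.linalg.norm(v)

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Hinge-style triplet loss: the positive pair should be closer than the
    negative pair by at least the margin alpha (0.2 is an illustrative value)."""
    a, p, n = map(l2_normalize, (anchor, positive, negative))
    d_pos = np.sum((a - p) ** 2)   # ||f(x_a) - f(x_p)||^2
    d_neg = np.sum((a - n) ** 2)   # ||f(x_a) - f(x_n)||^2
    return max(d_pos - d_neg + alpha, 0.0)
```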
## Tasks
* **Face verification**: Is this the same person?
* **Face recognition**: Who is this person?
## Datasets
* 99.63% accuracy on Labeled Faces in the Wild (LFW)
* 95.12% accuracy on YouTube Faces DB
## Network
Two models are evaluated: The [Zeiler & Fergus model](http://www.shortscience.org/paper?bibtexKey=journals/corr/ZeilerF13) and an architecture based on the [Inception model](http://www.shortscience.org/paper?bibtexKey=journals/corr/SzegedyLJSRAEVR14).
## See also
* [DeepFace](http://www.shortscience.org/paper?bibtexKey=conf/cvpr/TaigmanYRW14#martinthoma)
[link]
This paper argues for the use of normalizing flows - a way of building up new probability distributions by applying multiple sets of invertible transformations to existing distributions - as a way of building more flexible variational inference models.

The central premise of a variational autoencoder is that of learning an approximation to the posterior distribution of latent variables - p(z|x) - and parameterizing that distribution according to values produced by a neural network. In typical practice, this has meant that VAEs are limited in terms of the complexity of latent variable distributions they can encode, since using an analytically specified distribution tends to limit you to simpler distributional shapes - Gaussians, uniform, and the like. Normalizing flows are here proposed as a way to allow the model to learn more complex forms of posterior distribution.

Normalizing flows work off of a fairly simple intuition: if you take samples from a distribution p(x), and then apply a function f(x) to each x in that sample, you can calculate the expected value of your new distribution f(x) by calculating the expectation of f(x) under the old distribution p(x). That is to say:

https://i.imgur.com/NStm7zN.png

This mathematical transformation has a pretty delightful name - The Law of the Unconscious Statistician - that came from the fact that so many statisticians just treated this identity as a definitional fact, rather than something actually in need of proving (I very much fall into this bucket as well). The implication of this is that if you apply many transformations in sequence to the draws from some simple distribution, you can work with that distribution without explicitly knowing its analytical formulation, just by being able to evaluate - and, importantly, invert - the function. The ability to invert the function is key because of the way you calculate the new density: by multiplying the original density by the inverse of the absolute determinant of the Jacobian of your function f(z) with respect to z. (Note here that q(z) is the original distribution you sampled under, and q'(z) is the implicit density you're trying to estimate, after your function has been applied.)

https://i.imgur.com/8LmA0rc.png

Combining these ideas together: a variational flow autoencoder works by having an encoder network define the parameters of a simple distribution (Gaussian or uniform), and then running the samples from that distribution through a series of k transformation layers. This final transformed density over z is then given to the decoder to work with.

Some important limitations are in place here, the most salient of which is that in order to calculate derivatives, you have to be able to calculate the determinant of the derivative of a given transformation. Due to this constraint, the paper only tests a few transformations where this is easy to calculate analytically - the planar transformation and the radial transformation. If you think about transformations of density functions as fundamentally stretching or compressing regions of density, the planar transformation works by stretching along an axis perpendicular to some parametrically defined plane, and the radial transformation works by stretching outward in a radial way around some parametrically defined point. Even though these transformations are individually fairly simple, when combined, they can give you a lot more flexibility in distributional space than a simple Gaussian or uniform could.

https://i.imgur.com/Xf8HgHl.png
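
To make the planar transformation concrete, here is a minimal NumPy sketch of a single planar flow layer together with the log-determinant term used to track the transformed density. The parameterization $f(z) = z + u \tanh(w^\top z + b)$ follows the paper; the toy usage, the random parameter values, and the omission of the constraint that keeps the map invertible are simplifications for illustration:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar flow layer: f(z) = z + u * tanh(w·z + b).

    Returns the transformed sample and log|det Jacobian|, which is needed to
    track the density of the transformed distribution.
    """
    a = w @ z + b                        # scalar pre-activation
    f_z = z + u * np.tanh(a)             # stretch along direction u
    psi = (1 - np.tanh(a) ** 2) * w      # h'(a) * w
    log_det = np.log(np.abs(1 + u @ psi))
    return f_z, log_det

# toy usage: push one 2-d sample through two planar layers and
# accumulate the log-density correction
rng = np.random.default_rng(0)
z = rng.normal(size=2)                         # sample from the base Gaussian
log_q = -0.5 * (z @ z) - np.log(2 * np.pi)     # log density of a standard 2-d Gaussian
for _ in range(2):
    u, w, b = rng.normal(size=2), rng.normal(size=2), rng.normal()
    z, log_det = planar_flow(z, u, w, b)
    log_q -= log_det                           # q'(z') = q(z) / |det Jacobian|
print(z, log_q)
```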
[link]
Miyato et al. propose distributional smoothing (or virtual adversarial training) as defense against adversarial examples. However, I think that both terms do not give a good intuition of what is actually done. Essentially, a regularization term is introduced. Letting $p(y|x,\theta)$ be the learned model, the regularizer is expressed as
$\text{KL}(p(y|x,\theta) \,\|\, p(y|x+r,\theta))$
where $r$ is the perturbation that maximizes the Kullback-Leibler divergence above, i.e.
$r = \arg\max_r \{\text{KL}(p(y|x,\theta) \,\|\, p(y|x+r,\theta)) : \|r\|_2 \leq \epsilon\}$
with hyper-parameter $\epsilon$. Essentially, the regularizer is supposed to “simulate” adversarial training – thus, the method is also called virtual adversarial training.
The discussed implementation, however, is somewhat cumbersome. In particular, $r$ cannot be computed using first-order methods as the gradient of $\text{KL}$ is $0$ for $r = 0$. So a second-order method is used – for which the Hessian needs to be approximated and the corresponding eigenvectors need to be computed. For me it is unclear why $r$ cannot be initialized randomly to solve this issue … Then, the derivative of the regularizer needs to be computed during training. Here, the authors make several simplifications (such as fixing $\theta$ in the first part of the Kullback-Leibler divergence and ignoring the derivative of $r$ w.r.t. $\theta$).
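For intuition, here is a minimal PyTorch sketch of how such a regularizer can be computed, approximating the most sensitive direction $r$ with power iteration on a finite-difference gradient; the function name, the `xi` step size, and the other details are illustrative choices, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def virtual_adversarial_regularizer(model, x, epsilon=1.0, xi=1e-6, n_iter=1):
    """KL(p(y|x) || p(y|x+r)) with r approximating the most sensitive direction.

    The first distribution is treated as a constant (no gradient w.r.t. theta),
    matching the simplification described above.
    """
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)              # p(y|x, theta), held fixed

    # start from a random unit direction and refine it by power iteration
    d = torch.randn_like(x)
    d = d / d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
    for _ in range(n_iter):
        d.requires_grad_(True)
        log_q = F.log_softmax(model(x + xi * d), dim=1)
        kl = F.kl_div(log_q, p, reduction="batchmean")
        grad = torch.autograd.grad(kl, d)[0]
        d = grad / grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
        d = d.detach()

    r = epsilon * d                                  # perturbation with ||r||_2 = epsilon
    log_q = F.log_softmax(model(x + r), dim=1)
    return F.kl_div(log_q, p, reduction="batchmean")
```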
Overall, however, I like the idea of “virtual” adversarial training, as it avoids the need to explicitly craft adversarial examples with specific attacks during training. With explicit adversarial training, the trained model is often robust only against the chosen attacks, and new adversarial examples can easily be found through novel attacks.
Also view this summary at [davidstutz.de](https://davidstutz.de/category/reading/).