[link]
This paper aims to mitigate scene bias in the action recognition task. Scene bias means the model relies only on scene or object information without attending to the actual activity. To mitigate this, the authors propose two additional losses: (1) a scene adversarial loss that pushes the network to learn features that are useful for action recognition but invariant to the scene type, thereby reducing scene bias; and (2) a human mask confusion loss that prevents the model from predicting the correct action label for a video in which the person has been masked out, so the model cannot predict the action from the surrounding scene alone.

https://i.imgur.com/BBfWE17.png

To mask out the person in a video, they run a human detector and mask the detected person. The diagram above contains a gradient reversal layer, which works as follows: in the forward pass, the output equals the input; in the backward pass, the gradient is multiplied by -1.

https://i.imgur.com/hif9ZL9.png

This layer comes from domain adaptation, where the goal is to make the feature distributions of the source and target domains indistinguishable. Analogously, this work wants the learned features to carry action information while revealing as little scene information as possible, which is why the action classifier and the scene classifier are trained in an adversarial way.

https://i.imgur.com/trNJGlm.png

With the gradient reversal layer, the action predictor is trained to predict the action labels of the training instances. The feature extractor is therefore trained to minimize the classification loss of the action predictor while maximizing the classification loss of the scene predictor. As a result, the action features become scene-agnostic.
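For concreteness, here is a minimal PyTorch sketch of a gradient reversal layer and of how it could sit between the feature extractor and the scene classifier; the wiring and the names in the comments are illustrative, not the authors' code.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Illustrative wiring (module names are made up):
# feats      = feature_extractor(clip)                 # video features
# action_out = action_classifier(feats)                # trained to minimize the action loss
# scene_out  = scene_classifier(grad_reverse(feats))   # scene loss is maximized w.r.t. the features
```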
[link]
Open-vocabulary semantic segmentation generates semantic segmentation regions from text descriptions. Because the classes are specified by text, the model can recognize objects that were never seen during training. Some works take a two-stage approach: first generate class-agnostic segments, and then use CLIP to assign each segment to a phrase.

https://i.imgur.com/eyME6i1.png

To compute the prediction for an image, they ensemble two types of prediction scores. (1) To classify a mask into $K$ classes, the $K$ class names are first encoded into phrase embeddings $t_{k}$, and the mask is encoded into a visual embedding $v_{i}$; the score $p_{k}$ is then a temperature-scaled softmax over the similarities: $p_{k} = \frac{\exp(s(v_{i}, t_{k})/\tau)}{\sum_{k'}\exp(s(v_{i}, t_{k'})/\tau)}$, where $s(\cdot,\cdot)$ is the cosine similarity and $\tau$ is the temperature. (2) The second way to classify a mask is to feed the masked image into the CLIP vision encoder and score it against the $K$ class embeddings, giving $p^{'}_{k}$. The final prediction is the ensemble of these two scores, $p_{k}^{1-\lambda} \cdot (p^{'}_{k})^{\lambda}$ with $\lambda \in [0,1]$.

But CLIP does not work well on masked images (segments), because CLIP was trained on full natural images. A critical problem with masked images is that they contain blank areas; when these areas are fed into CLIP they become zero tokens, and according to the paper, these tokens not only carry no information but also introduce a domain distribution shift. In this work, they make CLIP work well on masked images by replacing these zero tokens with learnable tokens, which they call a mask prompt.

https://i.imgur.com/muhdGxP.png
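A small sketch of the scoring and ensembling described above, assuming $s(\cdot,\cdot)$ is cosine similarity; the function names and the default temperature are placeholders, not taken from the paper.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def class_scores(v, T, temperature=0.07):
    """p_k: temperature-scaled softmax over cosine similarities between the
    mask embedding v (D,) and the K class-name embeddings T (K, D)."""
    v = v / np.linalg.norm(v)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    return softmax(T @ v / temperature)

def ensemble(p_k, p_clip_k, lam=0.5):
    """Final score p = p_k**(1 - lam) * (p'_k)**lam, with lam in [0, 1]."""
    fused = p_k ** (1 - lam) * p_clip_k ** lam
    return fused / fused.sum()
```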
[link]
Visual Question Answering models struggle with counting questions. This paper traces the problem to the soft attention module and proposes a component that produces reliable counts from object proposals. There are two challenges in VQA counting: (1) there is no ground-truth label for the objects to be counted, and (2) the additional module should not hurt performance on non-counting questions.

Why soft attention is bad for counting: consider counting cats in two images, an image of a single cat, and an image made of two side-by-side copies of the first image. For image 1, after the softmax normalization in the attention, the cat receives a normalized weight of 1. For image 2, each cat receives a weight of 0.5. The attention module then takes the weighted sum to produce an attention feature vector, and that weighted sum averages the two cats in the second image back to a single cat, so the two images yield the same attention feature vector. As a result, the information about possible counts is lost by using the attention map.

Counting component: this component is in charge of counting the objects in an image. It has to provide two things: (1) a differentiable mechanism for counting from attention weights, and (2) handling of overlapping object proposals to reduce double-counting. The counting component is as follows:

https://i.imgur.com/xVGcaov.png

Note that intra-object edges connect duplicate proposals of the same object, while inter-object edges connect proposals of different objects of the same class. The figure has three main parts: (1) object proposals (four vertices), where the black vertices are relevant objects and the white ones are irrelevant; (2) intra-object edges between duplicate proposals; and (3) blue edges marking the inter-object duplicates. In the end, one edge and two vertices (two relevant objects) remain.

In more detail, there are four main steps.

(1) Input: the component takes $n$ attention weights $a = [a_{1}, a_{2},...,a_{n}]^{T}$ and their corresponding boxes $b = [b_{1}, ..., b_{n}]^{T}$.

(2) Deduplication: the goal of this step is to build a graph with adjacency matrix $A = aa^{T}$, where each vertex is a bounding-box proposal; ideally $a_{i} = 1$ if the $i$-th box is a relevant object and $a_{i} = 0$ otherwise. The counting component then deletes duplicate edges until the subgraph over the relevant proposals becomes a complete directed graph with self-loops. For example, if $[a_1, a_2, a_3, a_4, a_5] = [1,0,1,0,1]$, the subgraph containing $a_1$, $a_3$ and $a_5$ is such a complete directed graph:

https://i.imgur.com/cCKIQ0K.png

The illustration of this graph is as follows:

https://i.imgur.com/x93gk8c.png

Two kinds of duplicate edges are then eliminated: (1) intra-object edges and (2) inter-object edges.

1. Intra-object edges. First, we eliminate intra-object edges. To do this, we compute a distance matrix $D$ with $D_{ij} = 1 - \mathrm{IoU}(b_{i}, b_{j})$. If $D_{ij}$ is close to 0, the two bounding boxes overlap heavily and are likely duplicate proposals of the same object, so the edge between them should be removed. To remove them, the attention matrix $A$ computed before is multiplied elementwise by $D$, which suppresses the connections between duplicate proposals of a single object.

https://i.imgur.com/TQAvAnW.png

2. Inter-object edges. Second, we eliminate inter-object edges. The main idea is to merge the duplicate proposals of the same object into one. To do this, the weights of the edges associated with those proposals are scaled down: for example, if an object has two proposals, the edges involving those proposals are scaled by 0.5. This effectively averages the proposals within each underlying object, since only the sum of edge weights is used to compute the final count (see the sketch below).

https://i.imgur.com/4An0BAj.png
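Below is a toy sketch of the intra-object deduplication step only (the inter-object step and the learned activation functions are omitted): build $A = aa^{T}$, damp edges between heavily overlapping proposals with $D$, and recover a count from the summed edge weights. This is a simplification for intuition, not the paper's full component.

```python
import torch

def pairwise_iou(boxes):
    """Pairwise IoU for boxes given as an (n, 4) tensor in (x1, y1, x2, y2) format."""
    x1 = torch.max(boxes[:, None, 0], boxes[None, :, 0])
    y1 = torch.max(boxes[:, None, 1], boxes[None, :, 1])
    x2 = torch.min(boxes[:, None, 2], boxes[None, :, 2])
    y2 = torch.min(boxes[:, None, 3], boxes[None, :, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area[:, None] + area[None, :] - inter)

def toy_count(a, boxes):
    """Intra-object deduplication only: A = a a^T, edges between near-duplicate
    boxes are damped by D = 1 - IoU, self-loops are kept, and the count is read
    off the total edge weight of the (ideally complete) resulting graph."""
    A = torch.outer(a, a)                    # edge weights between proposals
    D = 1.0 - pairwise_iou(boxes)            # ~0 for near-duplicate boxes of one object
    A_dedup = A * D + torch.diag(a * a)      # remove intra-object edges, keep self-loops
    # A complete digraph with self-loops on n vertices has n**2 edges,
    # so the count is roughly the square root of the summed edge weights.
    return A_dedup.sum().sqrt()

# Example: three relevant, non-overlapping proposals and two duplicates -> count ~ 3
a = torch.tensor([1., 0., 1., 0., 1.])
boxes = torch.tensor([[0., 0., 1., 1.], [0., 0., 1., 1.],
                      [2., 0., 3., 1.], [2., 0., 3., 1.],
                      [4., 0., 5., 1.]])
print(toy_count(a, boxes))   # ~3.0
```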
[link]
The Transformer was proposed to capture long-range information with the self-attention mechanism, but it comes with quadratic computation cost and lacks multi-resolution information. The Swin Transformer then introduced local window self-attention to reduce the cost to linear in the image size, shifted window attention to capture cross-window information, and a hierarchical architecture to exploit multi-resolution information. But shifted window attention still struggles to capture long-range information because each shifted window covers only a small area, and Swin, like ViT, lacks inductive bias. Global Context ViT is proposed to address these limitations of the Swin Transformer.

Improvements: (1) Unlike the Swin Transformer, this paper pairs local self-attention with global context self-attention, rather than shifted window self-attention, to model both long- and short-range dependencies. (2) Even though global window attention is still a window attention, it leverages global query tokens that carry global information and hence captures long-range dependencies. (3) In addition, the paper compensates for the lack of inductive bias in both ViT and Swin Transformers by using a CNN-based module.

Key components:
Stem/PatchEmbed: a stem/patchify layer that processes the image at the beginning of the network; it creates patches/tokens and converts them into embeddings.
Level: the repetitive building block that extracts features using the different blocks below.
Global Token Gen./FeatExtract: generates global tokens/patches with a depthwise CNN, SE (squeeze-and-excitation), a CNN, and max pooling; essentially a feature extractor.
Block: the repetitive module that applies attention to the features and projects them to a certain dimension.
Local-MSA: local multi-head self-attention.
Global-MSA: global multi-head self-attention.
MLP: a linear layer that projects a vector to another dimension.
Downsample/ReduceSize: very similar to the Global Token Gen. module, except it uses a CNN instead of max pooling to downsample, with additional layer normalization.
Head: the module responsible for the classification task.
Pooling: converts N×2D features to N×1D features.
Classifier: processes the N×1D features to decide the class.

I annotated it like this to make it easier to digest:
https://i.imgur.com/bTqIUH2.png
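As a rough illustration of how Global-MSA differs from ordinary window attention, here is a simplified PyTorch sketch in which the queries come from shared global tokens while the keys and values stay local to each window; the shapes and names are assumptions for the sketch, not the official implementation.

```python
import torch
import torch.nn as nn

def window_partition(x, ws):
    """(B, H, W, C) -> (B * num_windows, ws*ws, C), non-overlapping windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

class GlobalQueryWindowAttention(nn.Module):
    """Window attention whose queries come from shared global tokens, so each
    local window is attended with globally informed queries (simplified Global-MSA)."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, windows, q_global):
        # windows:  (B*nW, N, C) local window tokens -> keys and values
        # q_global: (B, N, C) global tokens -> queries, shared across windows
        BnW, N, C = windows.shape
        nW = BnW // q_global.shape[0]
        h, d = self.num_heads, C // self.num_heads
        k, v = self.kv(windows).chunk(2, dim=-1)
        q = q_global.repeat_interleave(nW, dim=0)        # broadcast global queries
        q = q.view(BnW, N, h, d).transpose(1, 2)
        k = k.view(BnW, N, h, d).transpose(1, 2)
        v = v.view(BnW, N, h, d).transpose(1, 2)
        attn = (q * self.scale) @ k.transpose(-2, -1)    # (B*nW, h, N, N)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(BnW, N, C)
        return self.proj(out)

# Example: 8x8 feature map, 4x4 windows, one set of global tokens per image.
x = torch.randn(2, 8, 8, 96)
q_g = torch.randn(2, 16, 96)
attn = GlobalQueryWindowAttention(dim=96, num_heads=3)
y = attn(window_partition(x, 4), q_g)                    # (8, 16, 96)
```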
[link]
This paper aims to learn a sparse semantic representation of texts and images, instead of the dense representation trained by CLIP or ALIGN. The sparse embeddings are obtained as follows: (1) for an input (image or text), a Transformer extracts features $h$, where $h_{j}$ corresponds to the $j$-th token of the input; (2) each token embedding $h_{j}$ is mapped to a weight vector $p(h_{j})$ over the vocabulary space $V$ using a mapping function (in this paper, the BERT masked-language-model (MLM) head), so $p(h_{j})$ assigns a weight to every token in the vocabulary $V$; (3) max pooling over the tokens is applied to the $p(h_{j})$ to get a single value per vocabulary entry. In the end, we have a sparse vector living in the $V$-dimensional space.

https://i.imgur.com/BTvndLR.png

Training: to achieve the two goals of (1) aligning text and images in the sparse embedding space and (2) grounding the sparse vector in human-understandable words of the vocabulary, they propose a 3-stage training procedure.

Stage 1: training the image embedding with masked tokens. In the first stage, they co-train both the image and text encoders and apply a binary mask on the text embedding. By matching against the masked text embedding, the image encoder learns to ground its image embedding on the tokens of the paired text. After stage 1, the image embedding therefore lives in the vocabulary's interpretable space.

Stage 2: training with a frozen image encoder. In this stage, they focus on grounding the text embedding in the same interpretable space that the image embedding was trained to reside in during stage 1. The key idea is to let the image encoder teach the text encoder as a teacher model. After stage 2, both the image and text embeddings live in the same human-interpretable embedding space constructed from the vocabulary.

Stage 3: fine-tuning both encoders. They boost the image-text matching performance by fine-tuning both encoders jointly.

https://i.imgur.com/PWrEbkk.png

To further encourage sparsity, they propose a FLOPs regularization loss so that only a small number of token embeddings in $V$ are non-zero.
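A hypothetical sketch of steps (2)-(3) and a FLOPs-style sparsity loss; the non-negative log-saturation of the weights and the plain linear stand-in for the MLM head are my assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SparseVocabHead(nn.Module):
    """Project each token feature to vocabulary weights and max-pool over tokens
    to get one V-dimensional sparse embedding per input."""
    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)   # stand-in for the BERT MLM head

    def forward(self, h):                            # h: (B, L, hidden_dim) token features
        logits = self.to_vocab(h)                    # (B, L, V) per-token vocabulary scores
        weights = torch.log1p(torch.relu(logits))    # non-negative, log-saturated weights (assumed)
        return weights.max(dim=1).values             # (B, V) sparse embedding via max pooling

def flops_regularizer(emb):
    """FLOPs-style sparsity loss: the squared mean activation per vocabulary entry,
    summed over the vocabulary, pushes most dimensions toward zero."""
    return (emb.mean(dim=0) ** 2).sum()
```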