[link]
This paper proposes a method to locate an object given an image and a sentence describing the objects in it, and then to learn an embedding for a novel visual concept from two graphs: (1) a graph describing the relationships between the objects mentioned in a supplemental sentence (several known concepts plus one novel concept), and (2) a graph describing the relationships between the detected object in the image and example images related to the objects in the supplemental sentence. This embedding can then be used for downstream tasks such as visual entailment and visual reasoning. For example, in the Domain 1 image the novel visual concept is "red": the model can locate the red cube in the image (1a), and in (1b) it can interpret supplemental sentences that relate the novel concept to other concepts.

https://i.imgur.com/yBIteYT.png

To locate the box for the referred object, this work uses Mask R-CNN to detect the objects in the scene, then a neuro-symbolic program to match the objects mentioned in the input sentence with the objects detected by Mask R-CNN.

https://i.imgur.com/2cG9IUX.png

To learn the concept embedding for that object, the work needs a supplemental sentence describing several objects, all of them known concepts except for one novel concept. Two graph networks are then built: $GNN_{concept}$ and $GNN_{example}$. $GNN_{concept}$ operates on a graph representing the relationships between the known concepts and the novel concept; in the example graph, "White-eyed Vireo" is the novel concept.

https://i.imgur.com/1LjirJz.png

$GNN_{example}$ operates on a graph representing the relationships between the detected object corresponding to the novel concept and example images of that concept.

https://i.imgur.com/SdR74Vu.png

The concept embedding for the novel concept is then learned from these two graphs.

https://i.imgur.com/YGYEPvc.png
https://i.imgur.com/VXOBn6n.png
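As a rough illustration of the graph side, here is a minimal, hypothetical sketch of one round of message passing over a concept graph, where the novel concept's node aggregates information from its neighbors. This is not the paper's actual architecture; the layer name, dimensions, and graph are all made up.

```python
import torch
import torch.nn as nn

class ConceptGNNLayer(nn.Module):
    """One round of message passing that refines node embeddings in a
    concept graph; the novel concept's node aggregates from its neighbors."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message from a (source, target) pair
        self.upd = nn.GRUCell(dim, dim)      # node update from aggregated messages

    def forward(self, node_feats, edges):
        # node_feats: (num_nodes, dim); edges: list of (src, dst) index pairs
        src = torch.tensor([s for s, _ in edges])
        dst = torch.tensor([d for _, d in edges])
        msgs = self.msg(torch.cat([node_feats[src], node_feats[dst]], dim=-1))
        agg = torch.zeros_like(node_feats).index_add(0, dst, msgs)  # sum per target node
        return self.upd(agg, node_feats)

# Toy usage: node 0 stands for the novel concept; nodes 1-4 are known concepts
# or example-image features, all connected to it.
layer = ConceptGNNLayer(dim=128)
feats = torch.randn(5, 128)
edges = [(1, 0), (2, 0), (3, 0), (4, 0)]
novel_embedding = layer(feats, edges)[0]   # refined embedding of the novel concept
```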
[link]
What is the paper doing? This paper proposes a way to explain a model's decisions through human-readable concepts. For example, if the model thinks the following image is a black-throated sparrow, a human can understand this decision via the input descriptors.

https://i.imgur.com/xVleDhp.png

The descriptors are obtained from GPT-3: they collect 500 descriptors for each class and then remove the class name from each descriptor. Then, for each class, they choose $k$ concepts so that every class has an equal number of concepts. After that, they feed these concepts into a concept selection module to select a more fine-grained subset of concepts for each class. The selected concepts and the image are put into CLIP to score each concept. Finally, a class-concept weight matrix on top of CLIP fine-tunes these scores and outputs the predicted class name. Note that this weight matrix is initialized with language priors.

https://i.imgur.com/r9Op5Lm.png
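A minimal sketch of the scoring step, assuming the OpenAI `clip` package, a handful of made-up descriptors, and a placeholder image path. The real system uses GPT-3-generated descriptors, a concept selection module, and a weight matrix initialized from language priors rather than randomly.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical descriptors; the paper generates ~500 per class with GPT-3
# and selects a subset via the concept selection module.
descriptors = ["black throat patch", "white belly", "conical beak"]
image = preprocess(Image.open("bird.jpg")).unsqueeze(0).to(device)  # placeholder path

with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize(descriptors).to(device))
    image_feat = model.encode_image(image)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    concept_scores = (image_feat @ text_feat.T).float()  # (1, num_concepts)

# Class-concept weight matrix (num_classes x num_concepts). Randomly initialized
# here; the paper initializes it from language priors and fine-tunes it.
num_classes = 200
W = torch.nn.Parameter(torch.randn(num_classes, concept_scores.shape[1], device=device))
logits = concept_scores @ W.T   # (1, num_classes); predicted class = logits.argmax(-1)
```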
[link]
The paper proposes a new object detection method that detects novel classes using conditional matching. The detector can be conditioned on either an image or a text query, meaning a user can provide an image or a piece of text and the model will detect the corresponding bounding boxes in the picture. The model makes two changes compared with other open-vocabulary detectors:

1) Other detectors rely on a Region Proposal Network (RPN), which cannot cover all the objects in a picture and therefore hurts performance on novel objects. In this work, CLIP is used instead: the queries act as readers over the whole picture, so they can cover many more objects, including novel ones.

https://i.imgur.com/GqvvSVs.png

2) Other detectors rely on bipartite matching to match class label names with detected bounding boxes. The downside of bipartite matching is that it cannot match novel objects to any label name, because novel objects have no labels. This work therefore proposes conditional matching, which turns the matching problem into a binary one: each object is simply assigned a "matched" or "not matched" label with respect to the conditioning query (a toy sketch follows below).

https://i.imgur.com/FjI2iub.png
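A toy sketch of the "matched / not matched" idea, with hypothetical tensors standing in for the per-query box embeddings and the conditioning (text or image exemplar) embedding; the actual detector builds this decision into its query-based matching rather than a standalone loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical tensors: 100 detected-query embeddings and one conditioning
# embedding (from CLIP's text or image encoder), all made up for illustration.
box_embeds = torch.randn(100, 256)       # per-query box embeddings
cond_embed = torch.randn(256)            # text query or image exemplar embedding

match_logits = box_embeds @ cond_embed   # one "does this box match?" logit per query

# Binary targets: 1 if the query's box should match the conditioned concept.
targets = torch.zeros(100)
targets[3] = 1.0                         # suppose query 3 covers the queried object
loss = F.binary_cross_entropy_with_logits(match_logits, targets)
```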
[link]
This paper proposes a way to do classification using primitive concepts such as color, shape, and texture. The framework is simple and consists of two sub-models: (1) a pretrained VL model such as CLIP, ViLT, or ALBEF, which takes the primitive (attribute) concepts and an image as input and outputs a score for each concept; (2) a linear model that uses the concepts and their scores to perform the classification, trained in a supervised manner.

https://i.imgur.com/7WMmGyv.png
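A minimal sketch of stage (2), the supervised linear model over concept scores; the scores and labels below are random placeholders for whatever the frozen VL model would produce.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: in the paper, concept_scores would come from a frozen VL
# model (CLIP / ViLT / ALBEF) scoring each primitive concept against each image.
rng = np.random.default_rng(0)
concept_scores = rng.random((1000, 48))      # (num_images, num_concepts)
labels = rng.integers(0, 10, size=1000)      # (num_images,) class ids

# Stage (2): a supervised linear model over the concept scores.
clf = LogisticRegression(max_iter=1000).fit(concept_scores, labels)
predictions = clf.predict(concept_scores[:5])
```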
[link]
Open-vocabulary semantic segmentation generates semantic segmentation regions from text descriptions. Thanks to the text descriptions, the model can segment unseen objects that never appeared in the training phase. Some works use two-stage methods that first create class-agnostic segments and then use CLIP to assign each segment to a phrase.

https://i.imgur.com/eyME6i1.png

To compute the prediction for an image, they ensemble two types of prediction scores. (1) To classify a mask into $K$ classes, the $K$ class names are first encoded into $K$ phrase embeddings $t_{k}$, and the mask is encoded into a visual embedding $v_{i}$; the score for class $k$ is the temperature-scaled softmax over the similarities:

$$p_{k} = \frac{\exp\big(s(v_{i}, t_{k}) / \tau\big)}{\sum_{k'} \exp\big(s(v_{i}, t_{k'}) / \tau\big)}$$

where $s(\cdot,\cdot)$ is the similarity between the visual and phrase embeddings and $\tau$ is the temperature. (2) Alternatively, the masked image is fed into the CLIP vision encoder and classified against the $K$ classes to obtain a score $p^{'}_{k}$. The final prediction is the ensemble of the two scores,

$$p = p_{k}^{1-\lambda} \cdot \big(p^{'}_{k}\big)^{\lambda}, \quad \lambda \in [0, 1].$$

But CLIP does not work well on masked images (segments), because CLIP was trained on full images. A critical problem with masked images is that they contain blank areas: when fed into CLIP, these areas become zero tokens which, according to the paper, not only carry no information but also introduce a domain distribution shift. In this work, CLIP is made to work well on masked images by replacing these zero tokens with learnable tokens, called mask prompts.

https://i.imgur.com/muhdGxP.png
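A small sketch of the score ensemble described above, with made-up softmax scores; $\lambda$ controls how much the CLIP branch contributes.

```python
import torch

def ensemble(p_seg, p_clip, lam=0.5):
    """Geometric ensemble p = p_seg^(1 - lambda) * p_clip^lambda."""
    return p_seg ** (1 - lam) * p_clip ** lam

# Made-up softmax scores for 4 masks over K = 10 classes.
p_seg = torch.softmax(torch.randn(4, 10), dim=-1)    # score (1), from mask/phrase embeddings
p_clip = torch.softmax(torch.randn(4, 10), dim=-1)   # score (2), from CLIP on the masked image
p = ensemble(p_seg, p_clip, lam=0.7)
pred_class = p.argmax(dim=-1)                         # predicted class per mask
```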
[link]
The Transformer was proposed to capture long-range information with the self-attention mechanism, but it comes with quadratic computational cost and lacks multi-resolution information. The Swin Transformer then introduced local window self-attention to reduce the cost to linear in the image size, shifted-window attention to capture cross-window information, and a hierarchical architecture to exploit multi-resolution information. However, shifted-window attention struggles to capture long-range information due to its small coverage area, and, like ViT, it lacks inductive bias. Global Context ViT is proposed to address these limitations of the Swin Transformer.

Improvements: (1) Unlike the Swin Transformer, this paper combines global context self-attention with local self-attention, rather than using shifted-window self-attention, to model both long- and short-range dependencies. (2) Although global window attention is still a window attention, it leverages a global query that carries global information and hence captures long-range dependencies (see the sketch below). (3) In addition, the paper compensates for the inductive bias that both ViT and the Swin Transformer lack by using a CNN-based module.

Key components:
- Stem/PatchEmbed: a stem/patchify layer that processes the image at the beginning of the network; it creates patches/tokens and converts them into embeddings.
- Level: the repetitive building block that extracts features using the blocks below.
- Global Token Gen./FeatExtract: generates global tokens/patches with a depthwise CNN, SE (Squeeze-and-Excitation), a CNN, and MaxPooling; essentially a feature extractor.
- Block: the repetitive module that applies attention to the features and projects them to a certain dimension.
- Local-MSA: local multi-head self-attention.
- Global-MSA: global multi-head self-attention.
- MLP: linear layers that project a vector to another dimension.
- Downsample/ReduceSize: very similar to the Global Token Gen. module, except it uses a CNN instead of MaxPooling to downsample, plus additional LayerNorm modules.
- Head: the module responsible for the classification task. Pooling converts N×2D features to N×1D features; the Classifier processes the N×1D features to decide the class.

I annotated it like this to make it easier to digest: https://i.imgur.com/bTqIUH2.png
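A hedged sketch of what global-query window attention could look like, assuming the global query tokens have already been generated, projected, and broadcast to every window with the same token count; the real GC ViT block differs in details (e.g., relative position bias).

```python
import torch
import torch.nn as nn

class GlobalWindowAttention(nn.Module):
    """Window attention whose query comes from shared global tokens
    rather than from the local window itself (sketch only)."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.kv = nn.Linear(dim, dim * 2)   # keys/values from local window tokens
        self.proj = nn.Linear(dim, dim)

    def forward(self, x_windows, q_global):
        # x_windows: (B*num_windows, N, C) local window tokens
        # q_global:  (B*num_windows, N, C) global query tokens, already projected
        B_, N, C = x_windows.shape
        h = self.num_heads
        kv = self.kv(x_windows).reshape(B_, N, 2, h, C // h).permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]                              # each (B_, h, N, C//h)
        q = q_global.reshape(B_, N, h, C // h).permute(0, 2, 1, 3)
        attn = (q * self.scale) @ k.transpose(-2, -1)    # (B_, h, N, N)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        return self.proj(out)

# Toy usage: 8 windows of 49 tokens each, 96-dim features, 4 heads.
attn = GlobalWindowAttention(dim=96, num_heads=4)
x_windows = torch.randn(8, 49, 96)
q_global = torch.randn(8, 49, 96)
out = attn(x_windows, q_global)          # (8, 49, 96)
```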
[link]
This work enforces vision-language pretraining models to comprehend events and their associated argument (participant) roles.

https://i.imgur.com/TH7cOfZ.png

To achieve this, they create a framework with 3 steps:

https://i.imgur.com/8fpOA1r.png

(1) Event structural knowledge extraction, including (a) text extraction: a SOTA text information extraction system extracts events and their arguments (e.g., agent, entity, instrument); (b) image extraction: a Faster R-CNN trained on Open Images detects objects; (c) primary event detection: the primary event is the event that is closest to the root of the dependency parse tree, has the largest number of arguments, the highest event-type frequency, and the highest CLIP similarity between its trigger word and the image.

(2) Event-structure-driven negative sampling: the negatives and positives help the text and vision encoders learn robust features (the encoders can learn why they are wrong and why they are correct). There are 3 types of negatives: (a) negative event sampling: compute the confusion matrix over event types and select the top one as the predicted event type; event types whose visual features are ambiguous with the primary event type become the negative events. (b) Negative argument sampling: if there are multiple roles, they perform a right-rotation of the argument-role sequence to obtain negative argument samples; if the event has only one argument, they use the confusion matrix of the text argument extraction system. (c) Description generation: to encode positive and negative event structures, they use several prompt functions (single template-based prompt, composed template-based prompt, continuous prompt, caption editing), then feed 5 manually written event description examples to GPT-3, whose output is a fine-grained event description.

https://i.imgur.com/fPo0UpH.png
https://i.imgur.com/vIWv4lc.png

(3) Event graph alignment via optimal transport: each event and its arguments can be organized as a graph, and encoding event graph structures enables the model to capture the interactions between events and arguments. For example, the injured man should be aligned with the ENTITY being transported rather than the AGENT.

https://i.imgur.com/NiWfNe4.png

There are 3 types of alignment: (a) image-level alignment: compute the cosine similarity $s(t, i)$ and distance $d(t, i)$ between the text $t$ and image $i$. (b) Entity-level alignment: compute the cosine similarity between a text entity $t_{e}$ and an image object $i_{o}$, where $t_{e}$ is the text mention of entity $e$ and its embedding is contextualized on the sentence, encoded with the text Transformer and average-pooled over the tokens of the entity mention; similarly, $i_{o}$ is the bounding box of object $o$ and its embedding is contextualized on the image, obtained by average pooling the vision Transformer representations of the patches covered by the bounding box. (c) Event-level alignment: to obtain a global alignment score based on the structures of the two graphs, optimal transport (OT) is used to get the minimal distance $d(G_{t}, G_{i})$ between the text event graph $G_{t}$ and the image event graph $G_{i}$ (a minimal Sinkhorn sketch follows below).

Finally, the whole framework is trained with contrastive learning.
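For the event-level alignment, here is a minimal Sinkhorn sketch of entropic optimal transport between two node sets with uniform marginals; the cost matrix (e.g., one minus cosine similarity between text-graph and image-graph node embeddings) and all shapes are hypothetical, not taken from the paper.

```python
import torch

def sinkhorn(cost, eps=0.1, n_iters=50):
    """Entropic-regularized optimal transport between two uniform marginals.
    Returns the soft alignment plan and the resulting transport cost."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)
    nu = torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)            # Gibbs kernel
    u = torch.ones(n)
    for _ in range(n_iters):
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    plan = torch.diag(u) @ K @ torch.diag(v)
    return plan, (plan * cost).sum()

# Hypothetical cost: 1 - cosine similarity between 5 text-graph nodes
# and 7 image-graph nodes (random embeddings for illustration).
t = torch.nn.functional.normalize(torch.randn(5, 64), dim=-1)
i = torch.nn.functional.normalize(torch.randn(7, 64), dim=-1)
cost = 1.0 - t @ i.T
plan, ot_distance = sinkhorn(cost)        # analogue of d(G_t, G_i)
```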