CLIP-Event: Connecting Text and Images with Event Structures
Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, Shih-Fu Chang
arXiv e-Print archive, 2022 (via Local arXiv)
Keywords: cs.CV, cs.AI
Abstract: Vision-language (V+L) pretraining models have achieved great success in
supporting multimedia applications by understanding the alignments between
images and text. While existing vision-language pretraining models primarily
focus on understanding objects in images or entities in text, they often ignore
the alignment at the level of events and their argument structures. In this
work, we propose a contrastive learning framework to enforce vision-language
pretraining models to comprehend events and associated argument (participant)
roles. To achieve this, we take advantage of text information extraction
technologies to obtain event structural knowledge, and utilize multiple prompt
functions to contrast difficult negative descriptions by manipulating event
structures. We also design an event graph alignment loss based on optimal
transport to capture event argument structures. In addition, we collect a large
event-rich dataset (106,875 images) for pretraining, which provides a more
challenging image retrieval benchmark to assess the understanding of
complicated lengthy sentences. Experiments show that our zero-shot CLIP-Event
outperforms the state-of-the-art supervised model in argument extraction on
Multimedia Event Extraction, achieving more than 5% absolute F-score gain in
event extraction, as well as significant improvements on a variety of
downstream tasks under zero-shot settings.
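
The abstract describes two training signals: a contrastive loss that pits an image against hard negative captions produced by manipulating event types and argument roles, and an event graph alignment loss based on optimal transport between image regions and textual argument nodes. The sketch below illustrates how such losses could look, assuming pre-computed CLIP-style embeddings; the function names, the Sinkhorn solver, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two losses described in the abstract.
# Assumes pre-computed CLIP-style embeddings; names and hyperparameters
# are illustrative, not taken from the CLIP-Event codebase.
import torch
import torch.nn.functional as F


def event_contrastive_loss(image_emb, pos_text_emb, neg_text_embs, temperature=0.07):
    """Contrast an image against its correct event description and hard
    negatives generated by manipulating event types / argument roles.

    image_emb:     (d,)   image embedding
    pos_text_emb:  (d,)   embedding of the correct event description
    neg_text_embs: (k, d) embeddings of manipulated (negative) descriptions
    """
    image_emb = F.normalize(image_emb, dim=-1)
    texts = F.normalize(
        torch.cat([pos_text_emb.unsqueeze(0), neg_text_embs], dim=0), dim=-1
    )
    logits = texts @ image_emb / temperature      # (1 + k,) similarity scores
    target = torch.zeros(1, dtype=torch.long)     # index 0 is the positive caption
    return F.cross_entropy(logits.unsqueeze(0), target)


def sinkhorn(cost, eps=0.1, n_iters=50):
    """Entropy-regularized optimal transport plan between uniform marginals."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)
    nu = torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)
    u = torch.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return torch.diag(u) @ K @ torch.diag(v)      # transport plan, shape (n, m)


def event_graph_alignment_loss(region_embs, arg_embs):
    """Align image object regions with textual argument-role nodes via OT."""
    region_embs = F.normalize(region_embs, dim=-1)
    arg_embs = F.normalize(arg_embs, dim=-1)
    cost = 1.0 - region_embs @ arg_embs.t()       # cosine distance matrix
    with torch.no_grad():                         # plan treated as a fixed coupling here
        plan = sinkhorn(cost)
    return (plan * cost).sum()                    # expected alignment cost
```

In this simplified version the transport plan is computed without gradients and only the expected cosine cost is backpropagated; the paper's actual event graph alignment loss may differ in how the coupling is parameterized and combined with the contrastive objective.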