FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic Descriptions, and Conceptual Relations
Lingjie Mei, Jiayuan Mao, Ziqi Wang, Chuang Gan, and Joshua B. Tenenbaum
arXiv e-Print archive, 2022
Keywords:
cs.CV, cs.AI, cs.CL, cs.LG
Abstract: We present a meta-learning framework for learning new visual concepts
quickly, from just one or a few examples, guided by multiple naturally
occurring data streams: simultaneously looking at images, reading sentences
that describe the objects in the scene, and interpreting supplemental sentences
that relate the novel concept to other concepts. The learned concepts support
downstream applications, such as answering questions by reasoning about unseen
images. Our model, FALCON, represents individual visual concepts, such
as colors and shapes, as axis-aligned boxes in a high-dimensional space (the
"box embedding space"). Given an input image and its paired sentence, our model
first resolves the referential expression in the sentence and associates the
novel concept with particular objects in the scene. Next, our model interprets
supplemental sentences to relate the novel concept to other known concepts,
such as "X has property Y" or "X is a kind of Y". Finally, it infers an optimal
box embedding for the novel concept that jointly 1) maximizes the likelihood of
the observed instances in the image, and 2) satisfies the relationships between
the novel concept and the known ones. We demonstrate the effectiveness of our
model on both synthetic and real-world datasets.
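To make the box formulation concrete, below is a minimal Python sketch of the two ingredients the abstract names: a soft membership score for "this object is an instance of concept X" and box containment for "X is a kind of Y", plus a simple fit-then-intersect heuristic standing in for FALCON's actual probabilistic inference. The class name, the sigmoid membership, and all numbers are illustrative assumptions, not the paper's parameterization.

```python
# Minimal sketch of concept-as-box embeddings and the two-part inference the
# abstract describes. The sigmoid soft-membership and the fit-then-intersect
# heuristic are illustrative assumptions, not FALCON's actual procedure.
import numpy as np

class BoxEmbedding:
    """A concept represented as an axis-aligned box [lo, hi] in R^d."""
    def __init__(self, lo, hi):
        self.lo = np.asarray(lo, dtype=float)
        self.hi = np.asarray(hi, dtype=float)

    def membership(self, point, temperature=0.1):
        # Soft probability that `point` lies inside the box: per-dimension
        # sigmoids against both faces, multiplied across dimensions.
        point = np.asarray(point, dtype=float)
        above_lo = 1.0 / (1.0 + np.exp(-(point - self.lo) / temperature))
        below_hi = 1.0 / (1.0 + np.exp(-(self.hi - point) / temperature))
        return float(np.prod(above_lo * below_hi))

    def is_kind_of(self, other):
        # "X is a kind of Y" as box containment: X's box nested inside Y's.
        return bool(np.all(self.lo >= other.lo) and np.all(self.hi <= other.hi))

def infer_novel_box(example_embeddings, parent=None, margin=0.1):
    """Heuristic stand-in for FALCON's inference: fit a box that covers the
    observed instances, then intersect it with the parent concept's box so
    the stated "kind of" relation holds."""
    pts = np.asarray(example_embeddings, dtype=float)
    lo = pts.min(axis=0) - margin   # 1) cover the observed instances
    hi = pts.max(axis=0) + margin
    if parent is not None:          # 2) satisfy the relation to a known concept
        lo = np.maximum(lo, parent.lo)
        hi = np.minimum(hi, parent.hi)
    return BoxEmbedding(lo, hi)

# Usage: learn a hypothetical novel color "crimson" from two example object
# embeddings, given the supplemental fact that crimson is a kind of "red".
red = BoxEmbedding(lo=[0.0, 0.2], hi=[0.6, 0.9])
crimson = infer_novel_box([[0.15, 0.40], [0.25, 0.55]], parent=red)
print(crimson.membership([0.2, 0.5]))   # soft score for a point inside the box
print(crimson.is_kind_of(red))          # True: containment encodes the relation
```

In the full model, the novel concept's box is inferred to jointly maximize the membership likelihood of the observed instances and satisfy the stated relations; the hard intersection above is just the containment constraint made literal.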