Improving Intra- and Inter-Modality Visual Relation for Image Captioning

Yong Wang, Wen Kai Zhang, Qing Liu, Zhengyuan Zhang, Xin Gao, Xian Sun

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

20 Scopus citations

Abstract

It is widely acknowledged that capturing relationships among multi-modality features helps represent and ultimately describe an image. In this paper, we present a novel Intra- and Inter-modality visual Relation Transformer, termed I2RT, to improve connections among visual features. First, we propose a Relation Enhanced Transformer Block (RETB) for image feature learning, which strengthens intra-modality visual relations among objects. Moreover, to bridge the gap between inter-modality feature representations, we align them explicitly via a Visual Guided Alignment (VGA) module. Finally, an end-to-end formulation is adopted to train the whole model jointly. Experiments on the MS-COCO dataset show the effectiveness of our model, which yields improvements on all commonly used metrics on the "Karpathy" test split. Extensive ablation experiments are conducted for a comprehensive analysis of the proposed method.
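The abstract describes the intra-modality relation idea only at a high level, and no implementation details are given in this record. The sketch below is not the authors' code; it is a minimal illustration, assuming standard multi-head self-attention over detected object features, of how an intra-modality relation-enhancing block could look. All names (RelationEnhancedBlock, d_model, etc.) and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): intra-modality relation
# enhancement over detected object features via multi-head self-attention,
# approximating the role the abstract assigns to the RETB. Names and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class RelationEnhancedBlock(nn.Module):
    """Self-attention block that lets each object feature attend to all others,
    strengthening intra-modality visual relations (hypothetical stand-in for RETB)."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, regions):  # regions: (batch, n_objects, d_model)
        attended, _ = self.attn(regions, regions, regions)
        x = self.norm1(regions + self.drop(attended))    # residual + layer norm
        return self.norm2(x + self.drop(self.ff(x)))     # position-wise feed-forward

# Usage: enhance per-image region features before feeding a captioning decoder.
block = RelationEnhancedBlock()
features = torch.randn(2, 36, 512)   # 2 images, 36 detected regions each
enhanced = block(features)           # same shape: (2, 36, 512)
```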
Original language: English (US)
Title of host publication: MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc
Pages: 4190-4198
Number of pages: 9
ISBN (Print): 9781450379885
DOIs
State: Published - Oct 12 2020
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-09-21
