Abstract
The transformer architecture has become the dominant framework for image captioning because of its superior performance. However, existing transformer-based methods often fail to make integrated use of multi-level semantic information and are weak at keeping captions relevant to the image. In this paper, a semantic-meshed and content-guided transformer network is introduced for image captioning to address these problems. The semantic-meshed mechanism allows the model to generate words by adaptively selecting semantic information from multiple interaction levels through attention-based reconstruction. The content-guided module steers word generation using attribute features that represent the image content, with the aim of keeping the generated caption consistent with the main content of the image. Experiments on the MSCOCO captioning dataset validate the authors' model, which achieves superior results compared to other state-of-the-art approaches.
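The adaptive selection across interaction levels described in the abstract can be pictured as a softmax-weighted combination of per-level features, scored against the current decoder state. The sketch below is illustrative only, assuming dot-product scoring and simple vector features; the function names and shapes are hypothetical and not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def meshed_fusion(level_features, query):
    """Adaptively fuse features from multiple encoder levels.

    level_features: list of L vectors, one per interaction level, each shape (d,)
    query: current decoder state, shape (d,)
    Returns the fused feature vector and the per-level attention weights.
    """
    # Score each level against the query (dot-product attention as a stand-in)
    scores = np.array([f @ query for f in level_features])
    # Softmax turns scores into adaptive selection weights over levels
    weights = softmax(scores)
    # Weighted sum reconstructs a single fused representation
    fused = sum(w * f for w, f in zip(weights, level_features))
    return fused, weights
```

In this toy form, a level whose features align more strongly with the decoder state receives a larger weight, which is the intuition behind selecting semantic information of different levels per generated word.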
| Original language | English (US) |
|---|---|
| Pages (from-to) | 431-444 |
| Number of pages | 14 |
| Journal | IET Computer Vision |
| Volume | 16 |
| Issue number | 5 |
| DOIs | |
| State | Published - Aug 1 2022 |
| Externally published | Yes |
Bibliographical note
Generated from Scopus record by KAUST IRTS on 2023-09-21

ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition