VAA: Visual aligning attention model for remote sensing image captioning

Zhengyuan Zhang, Wenkai Zhang, Wenhui Diao, Menglong Yan, Xin Gao, Xian Sun

Research output: Contribution to journal › Article › peer-review

28 Scopus citations


Owing to its effectiveness in selectively focusing on regions of interest in images, the attention mechanism has been widely used in image captioning tasks, where it provides more accurate image information for training deep sequential models. Existing attention-based models typically rely on a top-down attention mechanism. While somewhat effective, the attention masks in these models are queried from image features by the hidden states of an LSTM rather than optimized by the objective function. This indirectly supervised training approach cannot ensure that the attention layers accurately focus on regions of interest. To address this issue, this paper proposes a novel attention model, the Visual Aligning Attention model (VAA), in which the attention layer is optimized by a well-designed visual aligning loss during training. The visual aligning loss is obtained by explicitly calculating the feature similarity between attended image features and the corresponding word embedding vectors. In addition, to eliminate the influence of non-visual words on training the attention layer, a visual vocab is proposed for filtering out non-visual words in sentences, so that such words are ignored when calculating the visual aligning loss. Experiments on the UCM-Captions and Sydney-Captions datasets show that the proposed method is more effective for remote sensing image captioning.
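The abstract describes the visual aligning loss as a feature similarity between attended image features and word embedding vectors, with non-visual words filtered out by a visual vocab. A minimal sketch of that idea is below, assuming cosine similarity as the feature-similarity measure and a per-timestep 0/1 mask as the visual-vocab filter; the function name, shapes, and the exact similarity measure are assumptions for illustration, not the paper's verbatim formulation.

```python
import numpy as np

def visual_aligning_loss(attended_feats, word_embs, visual_mask):
    """Sketch of a visual aligning loss (assumed form, not the paper's exact one).

    attended_feats: (T, d) attended image feature per decoding step
    word_embs:      (T, d) embedding of the word generated at each step
    visual_mask:    (T,)   1 for visual words (kept), 0 for non-visual words
    Returns mean (1 - cosine similarity) over visual words only.
    """
    eps = 1e-8
    # L2-normalize both sets of vectors so their dot product is cosine similarity
    a = attended_feats / (np.linalg.norm(attended_feats, axis=1, keepdims=True) + eps)
    w = word_embs / (np.linalg.norm(word_embs, axis=1, keepdims=True) + eps)
    cos = np.sum(a * w, axis=1)  # (T,) per-step similarity
    m = visual_mask.astype(float)
    # Non-visual words contribute nothing to the loss (the visual-vocab filter)
    return float(np.sum((1.0 - cos) * m) / (np.sum(m) + eps))
```

Aligned feature/embedding pairs drive the loss toward 0, while masked (non-visual) steps, such as function words, are simply excluded from the average rather than pulling the attention layer toward arbitrary regions.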
Original language: English (US)
Pages (from-to): 137355-137364
Number of pages: 10
Journal: IEEE Access
State: Published - Jan 1 2019
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2023-09-21

ASJC Scopus subject areas

  • General Engineering
  • General Computer Science
  • General Materials Science
