Deep Context-Encoding Network For Retinal Image Captioning

Jia-Hong Huang, Ting-Wei Wu, Chao-Han Huck Yang, Marcel Worring

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Automatically generating medical reports for retinal images is a promising way to help ophthalmologists reduce their workload and improve their efficiency. In this work, we propose a new context-driven encoding network that automatically generates medical reports for retinal images. The proposed model consists of a multi-modal input encoder and a fused-feature decoder. Our experimental results show that the proposed method effectively leverages the interactive information between the input image and its context, i.e., keywords in our case. It produces more accurate and meaningful reports for retinal images than baseline models and achieves state-of-the-art performance on several metrics commonly used for the medical report generation task: BLEU-avg (+16%), CIDEr (+10.2%), and ROUGE (+8.6%).
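
To make the described architecture concrete, below is a minimal PyTorch sketch of the general pattern the abstract outlines: an image encoder and a keyword (context) encoder whose features are fused and passed to a report decoder. All module names, dimensions, and the fusion strategy (concatenation followed by a projection) are illustrative assumptions for exposition, not the authors' published design.

```python
# Hypothetical sketch of a context-encoding captioner; not the paper's exact model.
import torch
import torch.nn as nn

class ContextEncodingCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Image encoder: a small CNN standing in for a pretrained backbone.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        # Keyword (context) encoder: embed keyword ids and average them.
        self.keyword_embed = nn.Embedding(vocab_size, embed_dim)
        self.keyword_proj = nn.Linear(embed_dim, hidden_dim)
        # Fusion: concatenate image and keyword features, project back down.
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)
        # Decoder: an LSTM that generates the report token by token,
        # initialized from the fused feature.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, keyword_ids, report_ids):
        img_feat = self.image_encoder(images)                    # (B, H)
        kw_feat = self.keyword_proj(
            self.keyword_embed(keyword_ids).mean(dim=1))         # (B, H)
        fused = torch.tanh(
            self.fuse(torch.cat([img_feat, kw_feat], dim=-1)))   # (B, H)
        h0 = fused.unsqueeze(0)                                  # (1, B, H)
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(self.token_embed(report_ids), (h0, c0))
        return self.out(dec_out)                                 # (B, T, vocab)

# Smoke test with random tensors.
model = ContextEncodingCaptioner(vocab_size=1000)
logits = model(torch.randn(2, 3, 64, 64),
               torch.randint(0, 1000, (2, 5)),    # 5 keywords per image
               torch.randint(0, 1000, (2, 20)))   # 20 report tokens
print(logits.shape)  # torch.Size([2, 20, 1000])
```

In this sketch, conditioning the decoder's initial state on the fused image-plus-keyword feature is one simple way to let the generated report depend on both modalities; attention-based fusion would be a natural alternative.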
Original language: English (US)
Title of host publication: 2021 IEEE International Conference on Image Processing (ICIP)
Publisher: IEEE
DOIs
State: Published - Aug 23 2021
Externally published: Yes

Bibliographical note

KAUST Repository Item: Exported on 2021-08-28
Acknowledgements: This work is supported by competitive research funding from King Abdullah University of Science and Technology (KAUST) and the University of Amsterdam.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
