Skin Lesion Segmentation Based on Vision Transformers and Convolutional Neural Networks—A Comparative Study

Yonis Gulzar, Sumeer Ahmad Khan

Research output: Contribution to journal › Article › peer-review

58 Scopus citations


Melanoma skin cancer is considered one of the most common diseases in the world. Detecting such diseases at an early stage is important for saving lives. During medical examinations, visually inspecting such lesions is not an easy task, as there are strong similarities between lesions. Technological advances in the form of deep learning methods have been used for diagnosing skin lesions. Over the last decade, deep learning, especially convolutional neural networks (CNNs), has been found to be one of the most promising methods for achieving state-of-the-art results in a variety of medical imaging applications. However, ConvNets' capabilities are considered limited due to their lack of understanding of long-range spatial relations in images. The recently proposed Vision Transformer (ViT) for image classification employs a purely self-attention-based model that learns long-range spatial relations to focus on the relevant parts of an image. To achieve good performance, however, existing transformer-based network architectures require large-scale datasets; because medical imaging datasets are small, applying pure transformers to medical image analysis is difficult. Moreover, ViT emphasizes low-resolution features, and its successive downsampling results in a loss of detailed localization information, rendering it unsuitable for skin lesion image segmentation. To improve the recovery of detailed localization information, several ViT-based image segmentation methods have recently been combined with ConvNets in the natural image domain. This study provides a comprehensive comparative study of U-Net and attention-based methods for skin lesion image segmentation, which will assist in the diagnosis of skin lesions. The results show that the hybrid TransUNet, with an accuracy of 92.11% and a Dice coefficient of 89.84%, outperforms the other benchmarked methods.
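For readers unfamiliar with the Dice coefficient reported above, it measures the overlap between a predicted segmentation mask and the ground-truth mask. A minimal NumPy sketch for binary masks (the function name, toy arrays, and smoothing term are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks:
    2 * |pred ∩ target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: predicted vs. ground-truth lesion region
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 0, 0, 0],
                   [0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 4))  # → 0.8571 (i.e., 2*3 / (4+3))
```

A value of 1.0 indicates perfect overlap; the paper's 89.84% corresponds to a Dice coefficient of roughly 0.8984 averaged over the test set.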
Original language: English (US)
Pages (from-to): 5990
Journal: Applied Sciences
Issue number: 12
State: Published - Jun 12 2022

Bibliographical note

KAUST Repository Item: Exported on 2022-06-20
Acknowledgements: This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia, Project No. GRANT382.
