Unsupervised Domain Alignment Based Open Set Structural Recognition of Macromolecules Captured By Cryo-Electron Tomography

Yuchen Zeng, Gregory Howe, Xiangrui Zeng, Jing Zhang, Yi-Wei Chang, Min Xu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations


Cellular cryo-Electron Tomography (cryo-ET) provides three-dimensional views of the structural and spatial information of various macromolecules in cells in a near-native state. Subtomogram classification is a key step for recognizing and differentiating these macromolecular structures. In recent years, deep learning methods have been developed for high-throughput subtomogram classification; however, conventional supervised deep learning methods cannot recognize macromolecular structural classes that are absent from the training data. This is a major weakness, since most native macromolecular structures in cells are unknown and consequently cannot be included in the training data. Therefore, open set learning, which can recognize unknown macromolecular structures, is necessary for boosting the power of automatic subtomogram classification. In this paper, we propose a method called Margin-based Loss for Unsupervised Domain Alignment (MLUDA) for open set recognition problems in which only a few categories of interest are shared between cross-domain data. Through extensive experiments, we demonstrate that MLUDA performs well at cross-domain open-set classification on both public datasets and medical imaging datasets, demonstrating the practical value of our method.
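The core idea of open set recognition described above, accepting a sample into a known class only when the classifier is sufficiently confident and otherwise labeling it unknown, can be illustrated with a minimal sketch. This is not the authors' MLUDA implementation; the softmax-confidence rejection rule, the `margin` threshold value, and all names here are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def open_set_predict(logits, margin=0.5, unknown_label=-1):
    """Assign each sample to its most probable known class, or to
    `unknown_label` when the top class probability falls below `margin`.
    The margin-based rejection rule is a simplified stand-in for the
    method's actual loss-driven decision boundary."""
    probs = softmax(logits)
    top = probs.max(axis=-1)
    preds = probs.argmax(axis=-1)
    return np.where(top >= margin, preds, unknown_label)

# One confident sample (assigned to class 1) and one ambiguous
# sample (rejected as unknown, label -1).
logits = np.array([[0.1, 4.0, 0.2],
                   [1.0, 1.1, 0.9]])
print(open_set_predict(logits, margin=0.5))  # prints [ 1 -1]
```

In practice such a threshold would be chosen on validation data; the paper's contribution is learning a margin jointly with unsupervised alignment of the source and target domains, rather than fixing it by hand.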
Original language: English (US)
Title of host publication: 2021 IEEE International Conference on Image Processing (ICIP)
ISBN (Print): 978-1-6654-3102-6
State: Published - 2021
Externally published: Yes

Bibliographical note

KAUST Repository Item: Exported on 2022-03-01


