Abstract
Over the last decade, convolutional neural networks (CNNs) have emerged as the leading algorithms in image classification and segmentation. The recent publication of large medical imaging databases has accelerated their use in the biomedical arena. While training data for photograph classification benefits from aggressive geometric augmentation, medical diagnosis, especially in chest radiographs, depends more strongly on feature location. Diagnostic classification results may also be artificially enhanced by reliance on radiographic annotations. This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms. A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn the geometric orientation (similarity transform parameters) of the chest and the segmentation of radiographic annotations. Chest x-rays were obtained from published databases. The algorithm was trained on 1000 manually labeled images with augmentation. Results were evaluated by expert clinicians, who judged the geometry acceptable in 95.8% of images and the annotation mask acceptable in 96.2% (n = 500), compared to 27.0% and 34.9%, respectively, in control images (n = 241). We hypothesize that this pre-processing step will improve robustness in future diagnostic algorithms.

Clinical relevance - This work demonstrates a universal pre-processing step for chest radiographs, both normalizing geometry and masking radiographic annotations, for use prior to further analysis.
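The abstract describes a shared-encoder, two-headed network: a VGG11 backbone whose features feed both a regression head for similarity-transform parameters and a decoder for an annotation mask. The sketch below is a minimal illustration of that Y-Net-style layout, not the authors' implementation; the number of transform parameters (assumed here to be rotation, scale, and x/y translation), the decoder depth, and all layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg11


class YNetChestXray(nn.Module):
    """Illustrative Y-Net: shared VGG11 encoder with two task-specific heads."""

    def __init__(self, n_transform_params: int = 4):
        super().__init__()
        # Shared VGG11 convolutional encoder (randomly initialized here).
        self.encoder = vgg11(weights=None).features

        # Head 1: regress similarity-transform parameters from pooled features.
        self.transform_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, n_transform_params),
        )

        # Head 2: lightweight decoder that upsamples encoder features back to
        # the input resolution and predicts a one-channel annotation mask.
        self.mask_head = nn.Sequential(
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 1),  # mask logits
        )

    def forward(self, x):
        feats = self.encoder(x)              # (B, 512, H/32, W/32)
        params = self.transform_head(feats)  # (B, n_transform_params)
        mask_logits = self.mask_head(feats)  # (B, 1, H, W)
        return params, mask_logits


if __name__ == "__main__":
    model = YNetChestXray()
    # Grayscale radiographs would be replicated to 3 channels for the VGG encoder.
    x = torch.randn(1, 3, 256, 256)
    params, mask = model(x)
    print(params.shape, mask.shape)  # torch.Size([1, 4]) torch.Size([1, 1, 256, 256])
```

In a setup like this, the two heads would typically be trained jointly, e.g. with an L2 loss on the transform parameters and a pixel-wise cross-entropy or Dice loss on the annotation mask; the specific losses are not stated in the abstract.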
| Original language | English (US) |
| --- | --- |
| Title of host publication | Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1266-1269 |
| Number of pages | 4 |
| ISBN (Print) | 9781728119908 |
| DOIs | |
| State | Published - Jul 1 2020 |
| Externally published | Yes |