dc.description.abstract |
The objective of this thesis is to develop a novel deep-learning method that separates organ-specific tissue images from projection radiographs, facilitating improved disease diagnosis by providing focused diagnostic information alongside conventional radiographs. This study proposes OrGAN, a generative adversarial network (GAN)-based model that translates chest X-rays into lung tissue images. It consists of a U-Net generator, modified with a domain classifier and a gradient reversal layer for domain adaptation, and a CNN discriminator. OrGAN was trained on 779 paired synthetic X-ray/lung-tissue images generated from CT data, alongside 15,000 unpaired real X-ray images from the VinDr-CXR dataset. Qualitative evaluation involved radiologist assessment of visual quality, anatomical accuracy, and diagnostic utility on 10 test cases. Quantitative assessment compared lung disease classification performance of deep learning models trained on generated lung tissues, segmented lungs, and original X-rays across two independent datasets: VinDr-CXR (6 labels) and COVIDx-CXR-4 (COVID vs. normal). OrGAN achieved high performance in generating lung tissue images from synthetic X-ray data (PSNR 28.8 dB, SSIM 0.944 on the test set). Qualitative evaluation by 5 radiologists indicated that the generated images enhanced the visibility of lung features, preserved diagnostic information, and complemented X-rays for improved diagnosis. In quantitative tests, the DenseNet model trained on lung tissues outperformed those trained on segmented lungs and original X-rays, showing statistically significant (p < 0.05) improvements in overall F1-score and sensitivity for the multilabel (6-label) disease classification task on the VinDr-CXR dataset. On COVIDx-CXR-4, both classifier models (DenseNet and ResNet) achieved higher overall F1-scores for the binary (COVID vs. normal) classification task than models trained on original X-rays or segmented lungs. 
The proposed OrGAN model effectively separates interpretable lung tissue images from X-ray projections. The generated images retain diagnostic fidelity, offer additional diagnostic insights that complement conventional X-rays, and can improve the performance of deep learning models for precise disease diagnosis. The study thus opens a potential pathway for further exploration of organ-specific tissue image separation from projection radiographs. |
en_US |