DSpace Repository

Towards organ separation from projection radiographs using a semi-supervised deep learning-based technique


dc.contributor.advisor Banna, Dr. Taufiq Hasan Al
dc.contributor.author Kawsar Ahmed, Md.
dc.date.accessioned 2025-02-18T08:51:58Z
dc.date.available 2025-02-18T08:51:58Z
dc.date.issued 2024-03-25
dc.identifier.uri http://lib.buet.ac.bd:8080/xmlui/handle/123456789/6965
dc.description.abstract The objective of this thesis is to develop a novel deep-learning method to separate organ-specific tissue images from projection radiographs, facilitating improved disease diagnosis by providing focused diagnostic information alongside conventional radiographs. This study proposes OrGAN, a generative adversarial network (GAN)-based model that translates chest X-rays into lung tissue images. It consists of a U-Net generator, modified with a domain classifier and a gradient reversal layer for domain adaptation, and a CNN discriminator. OrGAN was trained on 779 paired synthetic X-ray/lung-tissue images generated from CT data, alongside 15,000 unpaired real X-ray images from the VinDr-CXR dataset. Qualitative evaluation involved radiologist assessment of visual quality, anatomical accuracy, and diagnostic utility on 10 test cases. Quantitative assessment compared lung disease classification performance using deep learning models trained on generated lung tissues, segmented lungs, and original X-rays across two independent datasets: VinDr-CXR (6 labels) and COVIDx-CXR-4 (COVID vs. normal). OrGAN achieved high performance in generating lung tissue images from synthetic X-ray data (PSNR 28.8 dB, SSIM 0.944 on the test set). Qualitative evaluation by 5 radiologists indicated that the generated images enhanced the visibility of lung features, preserved diagnostic information, and complemented X-rays for improved diagnosis. In quantitative tests, the DenseNet model trained on lung tissues outperformed those trained on segmented lungs and on original X-rays, showing statistically significant (p < 0.05) improvements in overall F1-score and sensitivity for a multilabel (6-label) disease classification task on the VinDr-CXR dataset. On COVIDx-CXR-4, both classifier models (DenseNet and ResNet) achieved higher overall F1-scores for the binary (COVID vs. normal) classification task than models trained on original X-rays or segmented lungs.
The proposed OrGAN model effectively separates interpretable lung tissue images from X-ray projections. The generated images retain diagnostic fidelity, offer additional diagnostic insights that complement conventional X-rays, and can improve the performance of deep learning models for precise disease diagnosis. The study thus proposes a potential pathway for further exploration of organ-specific tissue image separation from projection radiographs. en_US
dc.language.iso en en_US
dc.publisher Department of Biomedical Engineering (BME), BUET en_US
dc.subject Diagnostic imaging-Digital techniques en_US
dc.title Towards organ separation from projection radiographs using a semi-supervised deep learning-based technique en_US
dc.type Thesis-MSc en_US
dc.contributor.id 0421182001 en_US
dc.identifier.accessionNumber 119747
dc.contributor.callno 616.0754/KAW/2024 en_US
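The abstract describes the generator's domain adaptation as a domain classifier attached behind a gradient reversal layer (GRL). The core GRL idea can be illustrated with a minimal, framework-free sketch; the class name, the lambda value, and the scalar example below are illustrative assumptions, not the thesis implementation.

```python
class GradientReversal:
    """Minimal sketch of a gradient reversal layer (illustrative only).

    Forward pass: identity -- generator features flow through unchanged
    to the domain classifier.
    Backward pass: the gradient coming back from the domain classifier is
    negated and scaled by lambda, so the feature extractor is trained to
    *confuse* the domain classifier, encouraging domain-invariant features
    (here, aligning synthetic and real chest X-ray domains).
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # reversal strength; assumed hyperparameter

    def forward(self, features):
        # Identity in the forward direction.
        return features

    def backward(self, grad_from_domain_classifier):
        # Sign-flipped, scaled gradient in the backward direction.
        return -self.lam * grad_from_domain_classifier


grl = GradientReversal(lam=0.5)
print(grl.forward(3.0))   # identity: 3.0
print(grl.backward(2.0))  # reversed, scaled gradient: -1.0
```

In an autograd framework this is typically written as a custom function with an identity forward and a negated backward; the plain class above only demonstrates the sign-flip mechanics on scalars.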


Files in this item

There are no files associated with this item.

