DSpace Repository

Development of a Domain-Agnostic Content-Aware Style-Invariant Framework for Disease Detection from Chest X-rays

Show simple item record

dc.contributor.advisor Haque, Dr. Md. Aynal
dc.contributor.author Zunaed, Mohammad
dc.date.accessioned 2024-06-26T04:18:12Z
dc.date.available 2024-06-26T04:18:12Z
dc.date.issued 2023-08-20
dc.identifier.uri http://lib.buet.ac.bd:8080/xmlui/handle/123456789/6756
dc.description.abstract Domain shift is a significant challenge for deep learning-based medical image analysis, particularly for thoracic disease classification from chest X-rays (CXRs). Over the last decade, many domain adaptation (DA) and domain generalization (DG) approaches for thoracic disease classification have been proposed. However, these methods do not explicitly regularize the content and style characteristics of the extracted domain-invariant features. Recent findings have shown that deep learning models display a strong bias toward styles (i.e., uninformative textures) rather than content, in stark contrast to the human vision system. Therefore, domain-agnostic models for pathology diagnosis from CXR images should extract domain-invariant features that are style-invariant and content-biased. In this thesis, we propose a style randomization module (SRM) at the image level that randomly samples style statistics parameters from a set constructed from the possible value range of a CXR image, creating a more diversified augmented dataset than previous methods, which sample style parameters from the available training data. We also employ an SRM at the feature level that uses learnable parameters as style embeddings to manipulate style while keeping the content intact. The two SRMs work together hierarchically to create rich, diversified, style-perturbed features on the fly during training. In addition, we utilize a Frobenius norm loss between the global semantic features and a Kullback-Leibler divergence loss between the predictive distributions of two versions of the same CXR image (the original and its style-perturbed counterpart) to tune the framework's sensitivity toward content markers for accurate predictions. Extensive experiments with large-scale thoracic disease datasets demonstrate that our proposed pipeline is more robust in the presence of domain shift and achieves state-of-the-art performance.
Our proposed method, trained on the CheXpert and MIMIC-CXR datasets, achieves mean percentage AUC scores of 77.21±0.30, 87.74±0.48, and 82.40±0.13 on the unseen-domain test datasets BRAX, VinDr-CXR, and NIH Chest X-ray14, respectively, compared to 75.78±0.45, 87.24±0.57, and 82.16±0.21 from the previous best models, under five-fold cross-validation with statistically significant improvements for the thoracic disease classification task. en_US
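The abstract describes two key ingredients: an image-level style randomization that samples target style statistics from the full valid pixel range of a CXR image, and two content-consistency losses (a Frobenius norm loss on global features and a KL divergence loss on predictive distributions). A minimal NumPy sketch of these ideas follows; all function names, sampling ranges, and the affine mean/std style transfer are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_level_style_randomization(x, pixel_min=0.0, pixel_max=1.0):
    """Hypothetical image-level SRM: replace an image's mean/std with
    style statistics sampled uniformly from the full valid pixel range,
    rather than from statistics observed in the training data."""
    mu, sigma = x.mean(), x.std() + 1e-6
    mu_new = rng.uniform(pixel_min, pixel_max)          # assumed sampling scheme
    sigma_new = rng.uniform(0.05, (pixel_max - pixel_min) / 2)
    x_styled = (x - mu) / sigma * sigma_new + mu_new    # normalize, then re-style
    return np.clip(x_styled, pixel_min, pixel_max)      # keep pixels in valid range

def frobenius_consistency(f_orig, f_styled):
    """Frobenius-norm distance between the global semantic features of the
    original and style-perturbed versions of the same image."""
    return float(np.linalg.norm(f_orig - f_styled))

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two predictive distributions, clipped for stability."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))
```

Minimizing both consistency terms pushes the network toward identical features and predictions for the original and style-perturbed inputs, i.e., toward content-biased, style-invariant representations:

```python
x = rng.uniform(0.0, 1.0, (64, 64))       # toy stand-in for a CXR image
x_aug = image_level_style_randomization(x)
# In training, f_orig/f_styled and p/q would come from the network;
# here we only illustrate the loss computations.
```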
dc.language.iso en en_US
dc.publisher Department of Electrical and Electronic Engineering (EEE), BUET. en_US
dc.subject Image processing - Digital electronics en_US
dc.title Development of a Domain-Agnostic Content-Aware Style-Invariant Framework for Disease Detection from Chest X-rays en_US
dc.type Thesis-MSc en_US
dc.contributor.id 0419062239 en_US
dc.identifier.accessionNumber 119585
dc.contributor.callno 623.81542/ZUN/2023 en_US


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)