Multimodal feature fusion-based thoracic disease classification framework combining medical data and chest X-ray images

dc.contributor.advisor Al Banna, Dr. Taufiq Hasan
dc.contributor.author Nusrat Binta Nizam
dc.date.accessioned 2024-04-22T06:01:42Z
dc.date.available 2024-04-22T06:01:42Z
dc.date.issued 2023-06-19
dc.identifier.uri http://lib.buet.ac.bd:8080/xmlui/handle/123456789/6733
dc.description.abstract Chest X-rays are commonly used in clinical settings to diagnose thoracic diseases, especially in low-resource settings. However, interpreting these images can be challenging, particularly in resource-constrained environments. Current AI-based methods focus solely on the X-ray images without considering relevant clinical information. To assist effectively with limited resources, a computerized system must generate decisions consistent with those of radiologists. This requires incorporating pertinent clinical details, such as medical history, symptoms, and demographic information, into image-based computerized systems to enhance their performance. The development of AI-based systems faces two main challenges: the limited availability of comprehensive medical image datasets suitable for machine learning, and the difficulty of reproducing the advanced reasoning abilities of experienced radiologists, who have undergone extensive training and accumulated expertise. In this work, a unimodal anatomy-aware network is first proposed, which yields about an 11% relative improvement in mean squared error (MSE) over existing methods when evaluated on a dataset for predicting the severity of COVID-19 pneumonia. This model also exhibits promising results on an unseen clinical evaluation dataset, providing evidence of the efficacy of the anatomy-aware architecture for predicting COVID-19 severity. Additionally, this thesis proposes a multimodal feature fusion framework that improves disease classification by combining medical data with image information. Existing approaches rely on textual information and lack anatomical details, so an advanced multimodal feature fusion-based approach is needed to enhance disease classification accuracy. In this study, a comparison of ways of incorporating clinical information demonstrates the substantial value of patient indication data (i.e., medical history, demographics, and symptoms) in disease classification. Incorporating such information enables computer-aided systems to function more like radiologists. The proposed feature fusion-based frameworks, ResVCBERT and DenseVCBERT, exhibit a significant improvement in accuracy compared to baseline architectures, even when there are errors in the textual information. The proposed DenseVCBERT achieves an accuracy of about 88.44% on the OpenI dataset of radiological reports and chest X-rays. Including anatomical information in deep learning models through feature fusion enhances the accuracy of AI-based frameworks, as demonstrated in the analysis of COVID-19 pneumonia severity prediction. This approach aids disease diagnosis and severity prediction, benefiting radiologists in developed and underdeveloped nations. en_US
dc.language.iso en en_US
dc.publisher Department of Biomedical Engineering (BME), BUET en_US
dc.subject Medical informatics en_US
dc.title Multimodal feature fusion-based thoracic disease classification framework combining medical data and chest X-ray images en_US
dc.type Thesis-MSc en_US
dc.contributor.id 0421182003 en_US
dc.identifier.accessionNumber 119473
dc.contributor.callno 610.28/NUS/2023 en_US
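A minimal PyTorch sketch of the multimodal feature fusion idea described in the abstract above: convolutional image features from a DenseNet-121 backbone are concatenated with BERT features extracted from the patient indication text, and the joint vector feeds a linear classifier. This is an illustration under stated assumptions, not the thesis's DenseVCBERT implementation; the backbone choice, pooling, feature dimensions, and the 14-class output are all hypothetical here.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import densenet121
from transformers import BertModel, BertTokenizer

class FusionClassifier(nn.Module):
    """Concatenates CNN image features with BERT text features for classification."""

    def __init__(self, num_classes: int = 14):  # 14 thoracic labels: an assumption
        super().__init__()
        # Image branch: DenseNet-121 convolutional features (1024 channels).
        self.cnn = densenet121(weights=None).features
        # Text branch: BERT encoder for the patient indication (768-dim pooled output).
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Fusion head: concatenated image + text features -> disease logits.
        self.classifier = nn.Linear(1024 + 768, num_classes)

    def forward(self, image, input_ids, attention_mask):
        x = F.relu(self.cnn(image))                 # (B, 1024, H', W')
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)  # (B, 1024)
        t = self.bert(input_ids=input_ids,
                      attention_mask=attention_mask).pooler_output  # (B, 768)
        return self.classifier(torch.cat([x, t], dim=1))  # (B, num_classes)

# Usage: classify one chest X-ray tensor together with its indication text.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer("fever and cough for three days",
                   return_tensors="pt", truncation=True)
model = FusionClassifier()
logits = model(torch.randn(1, 3, 224, 224),
               tokens["input_ids"], tokens["attention_mask"])

Concatenation is the simplest fusion strategy; it keeps the two branches independent until the final layer, which makes the contribution of the text features easy to ablate against an image-only baseline.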

