dc.description.abstract |
Ultrasound imaging provides a convenient and easily accessible means of breast cancer detection. Quasi-static elastography is a useful imaging modality that can be combined with conventional B-mode imaging to implement a non-invasive lesion classification system. Computer-Aided Diagnosis (CADx) can provide an objective opinion alongside the radiologist's diagnosis, increasing the reliability of such a system. Traditionally, CADx systems have relied on statistical features derived from the morphology and/or texture of the lesions, which are fitted to a machine learning model to classify the lesions as either malignant or benign. The performance of this approach depends heavily on the selection of an appropriate set of features, which is a difficult task. Moreover, the segmentation step required for feature extraction is time-consuming and introduces subjectivity into the classification process.
Although a Computer Aided Diagnosis system based on object recognition with deep Convolutional Neural Networks (CNNs) holds the possibility of real-time lesion classification directly from images, this approach faces the difficulty of gathering enough data to train such a network from scratch. In this work, we investigate the use of transfer learning to alleviate this difficulty. We show that a CNN trained on ImageNet can serve as the starting point for a deep CNN that can be trained easily on a small dataset of lesions. We also integrate ultrasound B-mode and elastography images into a single unified network for lesion classification that can be trained end-to-end. On a dataset of 217 clinically proven cases, our approach achieves >91% accuracy, >88% sensitivity and >92% specificity.
Beyond achieving satisfactory classification performance on our dataset, the proposed method shows indications of further improvement as the dataset grows. Because it is based on transfer learning, the approach is applicable to a dataset of any reasonable size and retains the scalability and flexibility of deep learning. Furthermore, the method is completely objective, requires no lesion segmentation or ROI selection, and is suitable for a real-time classification system. Additionally, we show that classification results can be further improved by multi-task learning of relevant tasks or by including additional qualitative features of the lesions. |
en_US |