Open access peer-reviewed chapter

Deep Learning in Medical Imaging

Written By

Narjes Benameur and Ramzi Mahmoudi

Submitted: 05 February 2023 Reviewed: 26 April 2023 Published: 30 May 2023

DOI: 10.5772/intechopen.111686

From the Edited Volume

Deep Learning and Reinforcement Learning

Edited by Jucheng Yang, Yarui Chen, Tingting Zhao, Yuan Wang and Xuran Pan

Abstract

Medical image processing tools play an important role in clinical routine, helping doctors to establish whether or not a patient has a certain disease. To validate the diagnosis results, various clinical parameters must be defined. In this context, several algorithms and mathematical tools have been developed over the last two decades to extract accurate information from medical images and signals. Traditionally, the extraction of features from medical data using image processing is time-consuming and requires human interaction and expert validation. This chapter covers the segmentation of medical images, the classification of medical images, and the role of deep learning-based algorithms in disease detection.

Keywords

  • deep learning
  • medical imaging
  • segmentation
  • classification
  • diagnosis

1. Introduction

Artificial intelligence (AI) is now widely regarded as a revolution in the medical field, and one of the main drivers of this revolution is deep learning (DL). The origins of DL and neural networks date back to the 1950s. Yet, with the introduction of the large annotated medical datasets needed for training and the availability of high-performance computing, recent years seem to mark a turning point for DL in medical imaging.

Accordingly, this branch of AI has recently been applied to several healthcare problems, such as computer-aided diagnosis, disease identification, and image segmentation and classification. Unlike classical tools, the power of DL derives from its ability to automatically learn complex features without the need for human interaction. Nevertheless, many challenges remain in medical healthcare, including privacy and the heterogeneity of datasets. In this chapter, we survey the application of DL in clinical imaging and highlight the main challenges and future directions of this tool.

2. Deep learning-based segmentation in medical imaging

Deep learning algorithms have been used in many medical applications to solve segmentation, image classification, and pathology diagnosis problems. Manual segmentation is time-consuming for radiologists because it is typically performed slice by slice. Furthermore, segmentation results are susceptible to intra- and interobserver variability. To address these limitations, several approaches based on active contours, level sets, and statistical shape modeling [1, 2, 3] have been proposed to segment the extent of various pathologies or anatomical structures. All of the methods mentioned above, however, are still semi-automated and require human interaction [4].

With the advent of DL, fully automated segmentation of serial medical images has become possible within a few seconds. Several studies in the literature report that AI-based segmentation algorithms outperform classical models [5, 6]. Convolutional neural networks (CNNs) are the most widely used architecture for segmenting medical images. A CNN reduces the spatial dimensionality of the original image data through a series of network layers that perform convolution and pooling operations. Other DL architectures have also been proposed for this task, such as the deep neural network (DNN), artificial neural network (ANN), fully convolutional network (FCN), ResNet-50, and VGGNet-16 [7, 8, 9, 10]. Figure 1 describes the tasks involved in segmenting cardiac images for various imaging modalities.

Figure 1.

Overview of cardiac image segmentation tasks for different imaging modalities [11].
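As a concrete illustration of the convolution-and-pooling pipeline described above, the following PyTorch sketch defines a minimal encoder-decoder segmentation network. The channel counts, depth, and input size are illustrative assumptions and do not reproduce any of the architectures cited in this chapter.

```python
# Minimal encoder-decoder segmentation network (illustrative sketch only).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        # Encoder: convolution + pooling reduces spatial dimensionality.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Decoder: transposed convolutions restore the original resolution
        # and output one score map per class (e.g., background vs. myocardium).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, num_classes, kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: a batch of four 128x128 single-channel slices.
model = TinySegNet()
masks = model(torch.randn(4, 1, 128, 128))  # output shape: (4, 2, 128, 128)
```

In practice, skip connections between encoder and decoder (as in U-Net) are usually added to recover fine anatomical boundaries.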

The success of DL-based medical image segmentation has inspired other studies to reevaluate traditional segmentation approaches and incorporate DL models into their work. Many factors have facilitated the increased use of DL, among them the availability of medical data and improvements in graphics processor performance.

Each year, large annotated datasets are published online. These data are often collected during challenges such as the Medical Segmentation Decathlon and the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference. Table 1 summarizes the largest medical image datasets available online.

Dataset | Target | Modality | Source | Link
Kaggle | Various diseases | X-rays, MRI, and CT | Google LLC | https://www.kaggle.com/
NIH Image Gallery | Various diseases | X-rays, MRI, CT, PET | National Institutes of Health (NIH) | https://www.flickr.com/photos/nihgov/
ImageNet | Cancer, diabetes, and Alzheimer’s disease | Genomic data | AI researchers | https://www.image-net.org/
Google Dataset Search | Various diseases | X-rays, MRI, CT, PET, echography | Google | https://datasetsearch.research.google.com/
UCI Machine Learning Repository | Various diseases | X-rays, MRI, CT, PET, echography | The National Science Foundation | https://archive.ics.uci.edu/ml/index.php
Stanford Medical ImageNet | Various diseases | X-rays, MRI, and CT scans | Stanford University | https://aimi.stanford.edu/medical-imagenet
Open Images Dataset | Various diseases | All medical imaging techniques | Google in collaboration with CMU and Cornell Universities | https://storage.googleapis.com/openimages/web/index.html
Cancer Imaging Archive | Cancer | Images from a variety of cancer types | National Cancer Institute (NCI) | https://www.cancerimagingarchive.net/access-data/
Alzheimer’s Disease Neuroimaging Initiative | Alzheimer’s disease | Brain scans and related data from MRI | Foundation for the National Institutes of Health | https://adni.loni.usc.edu/
The Microsoft COCO dataset | Various diseases | All medical imaging techniques | Microsoft | https://cocodataset.org/#home
MIAS dataset | Various diseases | Mammographic images | Organization of UK research groups | https://www.kaggle.com/datasets/kmader/mias-mammography

Table 1.

Medical image datasets available online.

Segmentation based on DL has been applied in different fields of medical imaging [12, 13, 14]. In cardiac MRI, several DL models have been used to delineate the contours of the myocardium, which represents a crucial step in computing clinical parameters for the evaluation of cardiac function [15]. DL has also been applied to the segmentation of different types and stages of cancer. For breast cancer, the data include mammography, ultrasound, and MRI images [16, 17, 18]. Other DL architectures have been proposed in the literature to segment cervical cancer based on magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET) scan data [19]. Zhao et al. [20] proposed a progressive growing of U-net+ (PGU-net+) model for the automated segmentation of cervical nuclei, reporting a segmentation accuracy of 92.5%. Similarly, Liu et al. [21] applied a modified U-net model to CT images for clinical target volume delineation in cervical cancer. In their proposed architecture, the encoder and decoder components were replaced with dual path network (DPN) components. The mean dice similarity coefficient (DSC) and Hausdorff distance (HD) values of the model were 0.88 and 3.46 mm, respectively.
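The DSC and HD quoted above are standard overlap and boundary metrics. A minimal sketch of how they can be computed with NumPy and SciPy is shown below, assuming binary masks of identical shape; scaling the pixel coordinates by the acquisition spacing would be needed to report HD in millimetres.

```python
# Illustrative computation of the segmentation metrics quoted above,
# assuming binary masks (1 = structure, 0 = background) of identical shape.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, target):
    """DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

def hausdorff_distance(pred, target):
    """Symmetric Hausdorff distance between the two foreground pixel sets."""
    p, t = np.argwhere(pred), np.argwhere(target)  # foreground coordinates
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

# Toy example with two overlapping square masks.
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[22:42, 22:42] = 1
print(dice_coefficient(pred, truth), hausdorff_distance(pred, truth))
```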

Although image segmentation based on DL facilitates the detection, characterization, and analysis of different lesions in medical images, it still suffers from several limitations. First, the problem of missing border regions in medical images should be considered [22]. Furthermore, the imbalanced data available online can significantly affect segmentation performance. In medical imaging, the collection of balanced data is challenging, since images of controls are widely available compared with those associated with different pathologies. As a result, some models have been proposed to mitigate this problem, including convolutional autoencoders [23] and generative adversarial networks (GANs) [24]. The concept is to extract information from the original images and generate similar synthetic images, for example through geometric transformations such as reflection, rotation, and translation, as sketched below.
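As a simple illustration of the geometric augmentations mentioned above, the following sketch generates reflected, rotated, and translated variants of a single slice with torchvision (a recent version is assumed, and the parameters are arbitrary); GAN- or autoencoder-based synthesis would replace this step in the cited approaches.

```python
# Simple geometric augmentation of an under-represented class
# (illustrative parameters; not taken from the cited studies).
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                       # reflection
    transforms.RandomRotation(degrees=10),                        # small rotation
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),   # translation
])

image = torch.rand(1, 128, 128)                  # one single-channel slice in [0, 1]
synthetic = [augment(image) for _ in range(10)]  # ten augmented variants
```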

3. Deep learning-based classification in medical imaging

DL has also demonstrated its superiority in the classification of medical images, most notably in distinguishing between various disorders. The classification process relies on the extraction of key features to produce a model that can assign an image to one of several classes. Several classical classifiers based on color or texture features have been described in the literature [25, 26, 27], among them support vector machines (SVMs), logistic regression, and nearest neighbors. These systems must, however, cope with other challenging problems related to medical imaging. First, the presence of artifacts in medical images may make them more difficult to categorize; because of this, pre-processing is crucial to improving image quality. The second problem is the complexity of medical content captured by the many modalities: each modality has distinct characteristics, which makes the classification of medical images extremely difficult.
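For concreteness, a classical baseline of the kind described above might look like the following sketch: an SVM trained on precomputed feature vectors. The feature matrix and labels here are random placeholders standing in for hand-crafted colour or texture descriptors.

```python
# Classical baseline: SVM on hand-crafted features (placeholder data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(200, 32)        # placeholder: 200 images, 32 texture features each
y = np.random.randint(0, 2, 200)   # placeholder labels (e.g., normal vs. lesion)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```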

Recently, several researchers have used DL for medical image classification, and their results demonstrate the accuracy of these models compared with traditional machine learning approaches [28]. Deep learning’s key benefit is its ability to distinguish between various structures in images without the need for manual feature extraction. Recent DL architectures can also incorporate a variety of features gathered from many modalities to produce an effective classifier.

Yadav and Jadhav [29] used a DL algorithm based on transfer learning of VGG16 to classify pneumonia from chest X-ray images. In their study, they showed that VGG16 outperformed a classical SVM-based method, with an accuracy of 0.923 for VGG16 vs. 0.776 for SVM. Similarly, Xu et al. [30] tested a deep CNN on histopathology images to extract new features for the classification of colon cancer. Lai and Deng [31] proposed a new architecture that combines a coding network with multilayer perceptron (CNMP) with features extracted from a deep CNN, reporting an accuracy of 90.2%.

Although DL has achieved high performance in the classification of medical images, it still suffers from numerous limitations. The major challenge is the limited amount of annotated data needed for the classification of medical images, since labeling data requires the intervention of experienced radiologists. A few solutions have been proposed to resolve this issue. Pujitha and Sivaswamy [32] proposed crowd-sourcing and synthetic image generation for training deep neural network-based lesion detection. In their study, they used color fundus retinal images and showed that crowd-sourcing improves the area under the curve (AUC) by 25%. Generative adversarial networks (GANs) are another means of generating synthetic images with annotations. Aljohani and Alharbe [33] proposed a new GAN to generate synthetic medical images with the corresponding annotations from different medical modalities. The classification of medical images based on DL has shown good results. However, several issues in medical image processing still need to be addressed with the different DL architectures.
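The transfer-learning strategy used in several of these studies, reusing a network pretrained on natural images and replacing its final layer, can be sketched as follows with torchvision. The frozen backbone and two-class head are illustrative assumptions, not a reproduction of the cited implementations, and a recent torchvision version is assumed for the weights API.

```python
# Transfer-learning sketch in the spirit of the VGG16-based classifiers
# discussed above (illustrative; not the cited authors' implementation).
import torch.nn as nn
from torchvision import models

# Load VGG16 pretrained on ImageNet and freeze its convolutional backbone.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a two-class head
# (e.g., normal vs. pneumonia chest X-ray); only this head is trained.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)
```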

4. Disease diagnosis based on deep learning

Early and precise diagnosis is crucial for the treatment of different diseases and for the estimation of a severity grade. The use of DL for the diagnosis of diseases is a dynamic research area that attracts many researchers worldwide. In fact, DL architectures have been applied to specific pathologies such as cancer, heart disease, diabetes, and Alzheimer’s disease [34, 35]. The increasing number of medical imaging datasets has led researchers to use DL models for the diagnosis of various diseases.

DL algorithms have proven their performance in the prediction and diagnosis of cancer. The availability of images derived from MRI, CT, mammography, and biopsy has helped several researchers to use these data for early cancer detection. The analysis of cancer images includes the detection of the tumor area, the classification of different cancer stages, and the extraction of tumor characteristics [36].

Recently, Shen et al. [37] used a modified version of CNNs for the screening of breast cancer using mammography data. Their study showed an AUC of 0.95 and a specificity of 96.1%. CNNs have also been applied to the classification of different kinds of cancer and the detection of carcinoma. Figure 2 depicts the entire image classification process for breast cancer screening using a DL architecture.

Figure 2.

Deep learning for the screening of breast cancer [37].

Alanazi et al. [38] applied a transfer DL model to detect brain tumors at an early stage using various types of tumor data. Furthermore, another study used a 3D deep CNN to assess the glioma grade (low- or high-grade glioma), reporting an accuracy of 96.49% [39]. Compared with classical algorithms, these studies have demonstrated the efficiency of DL in the prediction and analysis of cancer. However, larger medical datasets available online are needed for more adequate validation.
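The performance figures quoted in this section (accuracy, AUC, specificity) can be computed from a model’s outputs as in the following sketch. The labels and predicted probabilities below are placeholders, and scikit-learn is assumed to be available.

```python
# Sketch of the evaluation metrics quoted in this section (placeholder data).
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # 1 = malignant, 0 = benign
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.6, 0.2, 0.7, 0.3])   # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)                          # hard decisions at 0.5

auc = roc_auc_score(y_true, y_score)
accuracy = accuracy_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)                                   # true-negative rate
print(f"AUC={auc:.2f}  accuracy={accuracy:.2f}  specificity={specificity:.2f}")
```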

5. Conclusion

As has been shown, the use of medical image processing techniques in clinical practice is crucial for determining whether or not a patient has a particular disease. The field of medical imaging has been transformed by AI and DL, which enable more precise and automatic feature extraction from medical data. DL has been used to address a variety of healthcare issues, including image segmentation and classification, disease detection, computer-aided diagnosis, and the learning of complex features without human interaction. Despite the advances made, many challenges remain in medical healthcare, including privacy and the heterogeneity of datasets.

Conflict of interest

The authors declare no conflict of interest.

References

  1. Pohl KM, Fisher J, Kikinis R, Grimson WEL, Wells WM. Shape based segmentation of anatomical structures in magnetic resonance images. Computer Visual Biomedical Image Application. 2005;3765:489-498. DOI: 10.1007/11569541_49
  2. Chen X, Williams BM, Vallabhaneni SR, Czanner G, Williams R, Zheng Y. Learning active contour models for medical image segmentation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA; 2019. pp. 11624-11632. DOI: 10.1109/CVPR.2019.01190
  3. Swierczynski P, Papież BW, Schnabel JA, Macdonald C. A level-set approach to joint image segmentation and registration with application to CT lung imaging. Computerized Medical Imaging and Graphics. 2018;65:58-68
  4. Gao Y, Tannenbaum A. Combining atlas and active contour for automatic 3d medical image segmentation. Proceedings of the IEEE International Symposium Biomedical Imaging. 2011;2011:1401-1404
  5. Kim M, Yun J, Cho Y, Shin K, Jang R, Bae HJ, et al. Deep learning in medical imaging. Neurospine. 2019;16(4):657-668
  6. Vaidyanathan A, van der Lubbe MFJA, Leijenaar RTH, van Hoof M, Zerka F, Miraglio B, et al. Deep learning for the fully automated segmentation of the inner ear on MRI. Scientific Reports. 2021;11(1):2885
  7. Zadeh Shirazi A, McDonnell MD, Fornaciari E, Bagherian NS, Scheer KG, Samuel MS, Yaghoobi M, Ormsby RJ, Poonnoose S, Tumes DJ, Gomez GA. A deep convolutional neural network for segmentation of whole-slide pathology images identifies novel tumour cell-perivascular niche interactions that are associated with poor survival in glioblastoma.
  8. Cai L, Gao J, Zhao D. A review of the application of deep learning in medical image classification and segmentation. Annals of Translational Medicine. 2020;8(11):713. DOI: 10.21037/atm.2020.02.44
  9. Malhotra P, Gupta S, Koundal D, Zaguia A, Enbeyle W. Deep neural networks for medical image segmentation. Journal of Healthcare Engineering. 2022;200:9580991. DOI: 10.1155/2022/9580991
  10. Alsubai S, Khan HU, Alqahtani A, Sha M, Abbas S, Mohammad UG. Ensemble deep learning for brain tumor detection. Frontiers in Computer Neuroscience. 2022;16:1005617. DOI: 10.3389/fncom.2022.1005617
  11. Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, et al. Deep learning for cardiac image segmentation: A review. Frontiers in Cardiovascular Medicine. 2020;7:25. DOI: 10.3389/fcvm.2020.00025
  12. Hesamian MH, Jia W, He X, Kennedy P. Deep learning techniques for medical image segmentation: Achievements and challenges. Journal of Digital Imaging. 2019;32(4):582-596. DOI: 10.1007/s10278-019-00227-x
  13. Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Physica Medica. 2021;85:107-122
  14. Bangalore Yogananda CG, Shah BR, Vejdani-Jahromi M, Nalawade SS, Murugesan GK, Yu FF, et al. A fully automated deep learning network for brain tumor segmentation. Tomography. 2020;6(2):186-193
  15. Wang Y, Zhang Y, Wen Z, Tian B, Kao E, Liu X, et al. Deep learning based fully automatic segmentation of the left ventricular endocardium and epicardium from cardiac cine MRI. Quantitative Imaging in Medicine and Surgery. 2021;11(4):1600-1612
  16. Abdelrahman A, Viriri S. Kidney tumor semantic segmentation using deep learning: A survey of state-of-the-art. Journal of Imaging. 2022;8(3):55
  17. Yue W, Zhang H, Zhou J, Li G, Tang Z, Sun Z, et al. Deep learning-based automatic segmentation for size and volumetric measurement of breast cancer on magnetic resonance imaging. Frontiers in Oncology. 2022;12:984626
  18. Caballo M, Pangallo DR, Mann RM, Sechopoulos I. Deep learning-based segmentation of breast masses in dedicated breast CT imaging: Radiomic feature stability between radiologists and artificial intelligence. Computers in Biology and Medicine. 2020;118:103629
  19. Yang C, Qin LH, Xie YE, Liao JY. Deep learning in CT image segmentation of cervical cancer: A systematic review and meta-analysis. Radiation Oncology. 2022;17(1):175
  20. Zhao Y, Rhee DJ, Cardenas C, Court LE, Yang J. Training deep-learning segmentation models from severely limited data. Medical Physics. 2021;48(4):1697-1706
  21. Liu Z, Liu X, Guan H, Zhen H, Sun Y, Chen Q, et al. Development and validation of a deep learning algorithm for auto-delineation of clinical target volume and organs at risk in cervical cancer radiotherapy. Radiotherapy and Oncology. 2020;153:172-179
  22. Zambrano-Vizuete M, Botto-Tobar M, Huerta-Suárez C, Paredes-Parada W, Patiño Pérez D, Ahanger TA, et al. Segmentation of medical image using novel dilated ghost deep learning model. Computational Intelligence and Neuroscience. 2022;2022:6872045
  23. Gondara L. Medical image denoising using convolutional denoising autoencoders. In: 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). Barcelona, Spain; 2016. pp. 241-246. DOI: 10.1109/ICDMW.2016.0041
  24. Gulakala R, Markert B, Stoffel M. Generative adversarial network based data augmentation for CNN based detection of Covid-19. Scientific Reports. 2022;12:19186
  25. Shukla P, Verma A, Verma S, Kumar M. Interpreting SVM for medical images using Quadtree. Multimedia Tools and Applications. 2020;79(39-40):29353-29373
  26. Tchito Tchapga C, Mih TA, Tchagna Kouanou A, Fozin Fonzin T, Kuetche Fogang P, Mezatio BA, et al. Biomedical image classification in a big data architecture using machine learning algorithms. Journal of Healthcare Engineering. 2021;2021:9998819
  27. Rashed BM, Popescu N. Critical analysis of the current medical image-based processing techniques for automatic disease evaluation: Systematic literature review. Sensors (Basel). 2022;22(18):7065
  28. Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. Multimedia Tools and Applications. 2021;80(16):24365-24398
  29. Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. Journal of Big Data. 2019;6:113
  30. Xu Y, Jia Z, Wang LB, Ai Y, Zhang F, Lai M, et al. Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features. BMC Bioinformatics. 2011;18(1):281
  31. Lai Z, Deng H. Medical image classification based on deep features extracted by deep model and statistic feature fusion with multilayer perceptron. Computational Intelligence and Neuroscience. 2018;2018:2061516
  32. Pujitha AK, Sivaswamy J. Solution to overcome the sparsity issue of annotated data in medical domain. CAAI Transactions on Intellectual Technology. 2018;3:153-160
  33. Aljohani A, Alharbe N. Generating synthetic images for healthcare with novel deep Pix2Pix GAN. Electronics. 2022;11(21):3470. DOI: 10.3390/electronics11213470
  34. Kumar Y, Koul A, Singla R, Ijaz MF. Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework and future research agenda. Journal of Ambient Intelligence and Humanized Computing. 2022;2022:1-28
  35. Ibrahim A, Mohamed HK, Maher A, Zhang B. A survey on human cancer categorization based on deep learning. Frontiers in Artificial Intelligence. 2022;5:884749. DOI: 10.3389/frai.2022.884749
  36. Tran KA, Kondrashova O, Bradley A, et al. Deep learning in cancer diagnosis, prognosis and treatment selection. Genome Medicine. 2021;13:152. DOI: 10.1186/s13073-021-00968-x
  37. Shen L, Margolies LR, Rothstein JH, et al. Deep learning to improve breast cancer detection on screening mammography. Scientific Reports. 2019;9:12495. DOI: 10.1038/s41598-019-48995-4
  38. Alanazi MF, Ali MU, Hussain SJ, Zafar A, Mohatram M, Irfan M, et al. Brain tumor/mass classification framework using magnetic-resonance-imaging-based isolated and developed transfer deep-learning model. Sensors (Basel). 2022;22(1):372. DOI: 10.3390/s22010372
  39. Mzoughi H, Njeh I, Wali A, Slima MB, BenHamida A, Mhiri C, et al. Deep multi-scale 3D Convolutional Neural Network (CNN) for MRI gliomas brain tumor classification. Digital Imaging. 2020;33(4):903-915. DOI: 10.1007/s10278-020-00347-9
