Open access peer-reviewed chapter

The New Landscape of Diagnostic Imaging with the Incorporation of Computer Vision

Written By

Manuel Cossio

Reviewed: 23 January 2023 Published: 15 February 2023

DOI: 10.5772/intechopen.110133


Abstract

Diagnostic medical imaging is a key tool in medical care. In recent years, thanks to advances in computer vision research, a subfield of artificial intelligence, it has become possible to use medical images to train and test machine learning models. Among the algorithms investigated, there has been a boom in the use of neural networks, since they allow a higher level of automation in the learning process. The areas of medical imaging that have developed the most applications are X-ray, computed tomography, positron emission tomography, magnetic resonance imaging, ultrasonography, and pathology. The COVID-19 pandemic, in particular, has reshaped the research landscape, especially for radiological and magnetic resonance imaging. Notwithstanding the great progress observed in the field, obstacles have also arisen that must be overcome for applications to keep improving. These obstacles include data protection and the expansion of available datasets, which requires a large investment of resources, time, and academically trained manpower.

Keywords

  • artificial intelligence
  • computer vision
  • healthcare
  • deep learning
  • diagnostic imaging

1. Introduction

A large part of diagnosis in many specialties of medical care relies heavily on image analysis. Depending on the technique used, more or less detail of the structures of interest can be obtained. The technique also determines whether the output is a two-dimensional image or a series of slices that can be combined into a three-dimensional reconstruction [1]. Some specific techniques can also produce video output [2]. All of these formats can be adapted for use as training and testing material for computer vision models.

Computer vision, a subfield of artificial intelligence (AI), comprises the techniques that allow a computer system to understand an image or a set of images and produce a numerical or symbolic output. This output can be used to make a decision about the image [3]; when these models are applied to healthcare images, it can be used to make a clinical decision [4]. Among computer vision algorithms, some are handcrafted: a person analyzes the set of images to be classified and chooses the features to be extracted from them. For example, to classify cardboard boxes in a scene, one would probably choose to detect the edges of the box and the texture of the cardboard as a first step [3].

Thanks to advances in research in this area, neural networks can now also be used. A neural network is a computational algorithm composed of a set of interconnected nodes called artificial neurons, which are similar in function to neurons in the human nervous system. A neuron in this network receives information from the preceding neurons, processes it, and transmits it to other neurons. Networks can be simple, with very few layers of neurons, or complex, with many layers and many interconnections [5]. These models have the advantage that feature determination is automatic and does not need to be handcrafted. However, neural networks require a large amount of data to perform accurate feature extraction with minimal error. In addition, because the number of connections between neurons is very high, it becomes difficult to elucidate which features were selected to produce an output from a given input image [3, 5, 6].

In the following chapter, we discuss the most common applications of neural networks as computer vision models in the clinical medical field. We also analyze the obstacles that the field of AI has encountered in its development, along with the advances that these vision applications have brought to medicine.
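To make the mechanics concrete, the following minimal sketch (Python with NumPy; the layer sizes, random weights, and input are illustrative stand-ins, not taken from any model discussed in this chapter) shows how a small two-layer network propagates an input: each neuron computes a weighted sum of what it receives, applies an activation, and transmits the result onward.

    import numpy as np

    def layer_forward(x, weights, bias):
        # One fully connected layer: each neuron takes a weighted sum of its
        # inputs plus a bias, then applies a ReLU activation ("processing").
        return np.maximum(0.0, weights @ x + bias)

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                         # input features
    w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer of 8 neurons
    w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)  # output layer of 2 neurons

    hidden = layer_forward(x, w1, b1)  # neurons receive, process, transmit
    logits = w2 @ hidden + b2          # raw scores for, e.g., two classes
    print(logits)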


2. Methods

A targeted review of the literature was carried out using the search terms "AI," "Computer Vision," and "Medical Imaging." The databases consulted were PubMed and Google Scholar, covering publications up to January 2022 and selecting only articles in English. The initial search returned 860 articles, from which a subgroup of 130 was selected. The inclusion criteria focused on the quality of the research, the robustness of the models, the transfer to the clinical setting, and the optimization of parameters for the rational use of resources.


3. Computer vision with neural networks

Computer vision (CV) and AI research have seen several decades of steady progress. Specifically, the branch of the discipline that uses convolutional neural networks (CNNs) for image processing had its first boom with handwritten digit identification in 1989, an application developed by Yann LeCun building on insights previously proposed by Kunihiko Fukushima [5]. Since computational capabilities at the time were scarce, there was little research in this area between 1990 and 2000. Thanks to the progressive increase in processing and storage capacity, the AlexNet model competed with great success in the 2012 ImageNet challenge, and from then on the field of computer vision began to be populated with numerous applications [7]. The applications varied according to the type of task required and the type of dataset used. Around this time, several authors also began to investigate the structure of the different published models and to work on their taxonomy. Thus, some articles examined the components of various CNNs and their interconnections [8], while others analyzed the different architectures, their engineering challenges, and their possible future applications [4, 9].


4. Healthcare computer vision

The advancement of computer vision in the field of medical imaging took off in the late 2000s [4]. As partially mentioned earlier, this was made possible by advances in deep learning (DL) research, increased local processing capabilities with graphics processing units (GPUs), and the creation of medical image datasets [10]. The creation of larger and more complete datasets was driven mainly by the increasing digitization of medical records in several countries. These electronic health records (EHRs) can store, in addition to the images that constitute the raw material, the labels that guide the training of the models [11]. EHRs started out as a tool to generate billing codes for different medical practices and later became digital support for clinical practice [12]. This change allowed their adoption not only in institutions or networks of institutions but also in entire regions and countries [12, 13]. The extension of coverage expanded image datasets even further and captured more patient variability, which is key to obtaining models with broad generalization power.


5. Operation of computer vision algorithms

When applying AI models, specifically computer vision models, to different types of medical images, we can perform different tasks. According to Huo et al., these tasks can be classified into four categories [14]. The first is classification, in which the input is an image and the output is a label. The label can be numerical (e.g., 1, 2) or text (e.g., cancerous, noncancerous) [14, 15]. The second is detection, which consists of identifying an object in the image by means of a bounding box. This task offers an extra degree of information since, in addition to recognizing the object, it reports its position through coordinates in the input image [14, 16, 17]. The third is segmentation, which provides the highest degree of information about an image. In this task, each pixel receives a label, and the final result is a mask that groups several pixels. This enables the segmentation of precise structures within medical images, such as glomeruli or metastatic zones in pathology slides, or entire organs, such as the bladder in CT images [14, 18, 19, 20]. The last task is synthesis, which consists of generating images from noise or from other images. For this, two models work antagonistically: one generates images de novo from available data, and the other tries to discriminate these artificially generated images from real ones. With each iteration of the process, both the generator and the discriminator become more efficient, which produces images with high similarity to the real ones [14, 21]. This task makes it possible, for example, to generate more training samples to populate datasets and thus achieve models with more generalization power [22, 23].
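To illustrate the antagonistic scheme behind the synthesis task, here is a minimal, hypothetical PyTorch sketch of one training iteration of a generator-discriminator pair; the tiny fully connected architectures, the patch size, and the random stand-in batch are assumptions made for brevity, not an implementation from the cited works.

    import torch
    import torch.nn as nn

    # Minimal generator and discriminator for small grayscale patches.
    G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                      nn.Linear(256, 28 * 28), nn.Tanh())
    D = nn.Sequential(nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    real = torch.rand(32, 28 * 28)  # stand-in for a batch of real patches
    noise = torch.randn(32, 64)

    # Discriminator step: learn to separate real patches from generated ones.
    fake = G(noise).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: adjust G so that D labels its output as real.
    loss_g = bce(D(G(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Repeating these two steps is exactly the back-and-forth dynamic described above: each side improves against the other until the generated images resemble the real ones.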


6. Transfer learning and data augmentation

As neural networks increase their number of layers and the connections between them, their complexity increases. Neural networks with many layers have demonstrated more than satisfactory performance in several tasks, in many of them exceeding human performance [24]. However, when working with these complex networks, a large amount of training data is needed to avoid overfitting and to expand the power of generalization. The use of networks with few layers trained on small datasets has also been researched, showing a tendency to overfit or underfit [25]. In the medical AI field, it is very difficult to assemble very large datasets, as this demands a lot of specialized manpower: doctors, biologists, geneticists, and other specialists are the ones who analyze the data and add the labels used to train the algorithms [25, 26]. Therefore, two solutions have been found to deal with the problem of small datasets. The first is data augmentation, a group of techniques that creates virtual images from the original images of the dataset. For example, images can be shifted, rotated on their axis, or altered in contrast and brightness, to mention just a few operations [25, 27]. The second solution is transfer learning. This technique consists of training a complex network on a massive dataset of common images (typically ImageNet, with dogs, cats, etc.) and then performing finetuning, that is, training on the specific medical dataset in a way that only alters the weights of the last layers of the network. This helps to obtain better results than training the network from scratch on the specific dataset [25, 28].
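The hedged sketch below illustrates both solutions with the torchvision library; the particular transforms, the ResNet-18 backbone, the pretrained-weights enum (available in recent torchvision releases), and the two-class head are illustrative assumptions rather than the setup of any cited study.

    import torch.nn as nn
    from torchvision import models, transforms

    # Data augmentation: create virtual variants of each training image.
    augment = transforms.Compose([
        transforms.RandomRotation(15),             # rotate on the image axis
        transforms.RandomHorizontalFlip(),         # alter position/orientation
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])

    # Transfer learning: start from ImageNet weights, finetune only the head.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False                # freeze the generic layers
    model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable 2-class head

Freezing everything except the final layer mirrors the idea described above: the early layers already extract generic features, while the replaced head learns the specific medical task.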


7. Model performance evaluation

Being able to measure the performance of our models is crucial to evaluate their suitability for different tasks. Also, when performing finetuning, it is important to have performance measures to know which parameters promote the best results. First of all, every time we test a model, we have part of a dataset (in this particular case, images) that already has labels assigned. The assignment of labels is done by medical professionals specialized in the pathology under study. When the model processes the samples and predicts new labels, these are compared with the original ones (called the ground truth). From the result of this comparison, a structure called a confusion matrix is constructed [29, 30]. This matrix contains the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). A TP or TN is established when the prediction and the ground truth are the same for a given sample (e.g., a sample is a TN when the model predicted negative and the image was negative). Conversely, an FP or FN is established when there is no coincidence between the model and the ground truth (e.g., the model predicted negative and the ground truth indicated a positive sample; therefore, the sample is an FN) [29]. Almost all the other global metrics usually reported in publications are derived from these four counts. For example, the accuracy of a model corresponds to the number of samples correctly predicted over the total number of samples: the correctly predicted samples are the sum of the TP and TN, and the total number of samples is nothing more than the sum of the TP, TN, FP, and FN [29].
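Since these global metrics all derive from the four confusion-matrix counts, they are simple to compute; the plain-Python sketch below uses made-up counts and adds sensitivity, specificity, and precision, which are among the derived metrics commonly reported.

    def metrics(tp, tn, fp, fn):
        # Global metrics derived from the four confusion-matrix counts.
        total = tp + tn + fp + fn
        return {
            "accuracy": (tp + tn) / total,
            "sensitivity": tp / (tp + fn),  # recall on the positive class
            "specificity": tn / (tn + fp),
            "precision": tp / (tp + fp),
        }

    # Illustrative counts: accuracy = (80 + 90) / 200 = 0.85.
    print(metrics(tp=80, tn=90, fp=10, fn=20))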


8. Healthcare applications of AI and computer vision

8.1 X-ray imaging

Medical X-ray imaging consists of the emission of X-rays by a source; the rays pass through the region of the patient under study. According to radiographic density (which depends on the density of the tissue and the atomic number of its components), the structures in the area absorb the rays differentially, which results in lights and shadows [31]. In computer vision applied to X-ray imaging, work on the thoracic cavity predominates [32]. Thus, we find work focused on the detection of pulmonary nodules, with models trained on images from one pool of patients and tested on a different pool, as well as longitudinal work, where the model was trained and tested on images from the same patients separated by a time window [32, 33, 34]. Another large body of work focused on the detection of pneumonia. Several models were trained on datasets from different hospitals, which showed variations in various image features between hospitals; as expected, the models showed better metrics when trained and tested on data from the same hospital [32, 35, 36, 37]. With the advent of COVID-19, there was an explosion of research on the detection of this pathology in X-ray images. Numerous models were created that attempted to distinguish COVID-19 pneumonia from other viral or bacterial pneumonia. These developments were key since they allowed patients to be screened and managed automatically, helping to avoid the spread of contagion from COVID-19 patients [32, 38, 39, 40, 41]. Work was also carried out on the detection of tuberculosis in chest images. These models demonstrated satisfactory performance in screening tuberculosis images against normal lungs or other pulmonary pathologies; however, they did not show the ability to distinguish between active and quiescent disease [32, 42, 43]. Additionally, part of the research was directed at the detection of pneumothorax. This line of development was of particular value in patient triaging, especially in determining the size and position of the pneumothorax and its changes over time in the same patient; several of these models have already received FDA clearance as assistive devices in the emergency unit [32, 44, 45, 46]. Finally, to a lesser extent than the above, models were also built for the detection of other types of pulmonary involvement, such as consolidation, edema, emphysema, fibrosis, and pleural effusion [32, 47].

8.2 Computed tomography

Computed tomography (CT) integrates many X-ray images taken from different angles, thanks to a high-speed gantry that rotates around the axis on which the patient lies. The images it produces are cross-sectional [1]. Using computer vision techniques, it is possible to operate directly on a fixed plane (one section) or to use complete volumes (several consecutive sections). Most of the research in this area is classification (about 36%), followed by segmentation (27%), detection (22%), and others (15%) [30]. Broadly speaking, we can group the works in this area into the identification of organs (kidney [48], liver [49, 50], lungs [51, 52], and heart [53, 54]) and the identification of substructures or lesions (artery calcification [55], nodules [56], polyps [57, 58], and lymph nodes [59, 60, 61]). Among the most commonly used measures to report the performance of the different models are accuracy, sensitivity, specificity, AUC-ROC, and F1 score [1]. The processing of the images as input is also diverse. It is possible to use 3-dimensional inputs, that is, several consecutive slices that form a volume. Projection methods, such as maximum intensity projection, can also be used to transform a 3-dimensional input into a 2-dimensional one [1, 62].
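As a minimal illustration of the projection approach, the NumPy sketch below collapses a stand-in CT volume into a single 2-D image by maximum intensity projection; the volume and its dimensions are made-up placeholders.

    import numpy as np

    # Stand-in CT volume: 64 consecutive 512 x 512 slices (depth, height, width).
    volume = np.random.rand(64, 512, 512)

    # Maximum intensity projection: for each (row, column) position, keep the
    # brightest voxel along the slice direction, collapsing 3-D to 2-D.
    mip = volume.max(axis=0)
    print(mip.shape)  # (512, 512): now usable as input to a 2-D model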

8.3 Positron emission tomography

Positron emission tomography (PET) is a technique that allows the observation of metabolic processes in different tissues of the patient’s body. Radiolabeled compounds that follow a specific metabolic pathway are injected; the emitted radiation is detected by sensors, and the complete image is then reconstructed with the areas of highest activity [1]. 18F-fluorodeoxyglucose (FDG) is one of the most widely used radioactive substrates as a marker in PET [1, 63]. Among the applications of computer vision to this imaging modality, we have the segmentation of tumor areas in the brain [64], heart [65], head and neck [66], and nasopharynx [67] to adjust the dose and position of radiotherapy interventions. With respect to classification tasks, work has been published on esophageal cancer [68], Alzheimer’s disease typing [69], and Hodgkin’s lymphoma [1, 70].

8.4 Magnetic resonance imaging

Magnetic resonance imaging (MRI) is a technique that uses high-intensity electromagnetic fields and radiofrequency waves to detect changes in the rotational axis of protons, mostly in water molecules. Water is present in almost all tissues of the body, and differences in water percentage influence the detected axis changes. Deep learning applications in the field of MRI can be grouped into two broad categories. The first relates to the physical aspects and the generation of images on the device; this category includes works on image restoration, image reconstruction, and multimodal image registration [71]. The second category emphasizes applications for medical purposes, in which the determination of pathology or its progress is the main goal [71, 72, 73, 74]. Focusing on the second category, we find works on brain aging [75], brain vascular lesions [76], Alzheimer’s disease [77], multiple sclerosis [78], glioma [79], and meningioma [80]. In the abdominal cavity, we find works on identification and segmentation of organs [81], polycystic kidneys [82], and renal transplantation [83]. Finally, with the spine as the focus of study, there are works on labeling and separation of vertebrae [84], spinal stenosis grading [85], and identification and segmentation of spinal metastasis [86]. It is important to mention that organ segmentation is a very important focus of deep learning applications for MRI images. With the definition of organ contours in each plane (slice), the determination of the organ coordinates, and the addition of consecutive areas, volumes can be calculated. The calculation of volumes is of crucial importance since it can be used to determine the dilation of organs (e.g., splenomegaly). The measurement of dilation is not only an important initial assessment: thanks to the volumetric determination, patients can be followed up to observe the efficacy of treatments [81].
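The volume computation itself is straightforward once a segmentation mask exists: count the voxels labeled as organ and multiply by the physical volume of one voxel. The sketch below assumes an illustrative binary mask and made-up voxel spacing.

    import numpy as np

    # Stand-in segmentation output: 40 slices of 256 x 256 pixels, where 1
    # marks voxels the model assigned to the organ (e.g., the spleen).
    mask = np.zeros((40, 256, 256), dtype=np.uint8)
    mask[10:30, 80:180, 90:170] = 1

    # Assumed acquisition geometry: 1 x 1 mm in-plane, 3 mm slice thickness.
    voxel_mm3 = 1.0 * 1.0 * 3.0

    volume_ml = mask.sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3
    print(f"Estimated organ volume: {volume_ml:.1f} mL")

Repeating this measurement across visits is what enables the treatment follow-up described above.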

8.5 Ultrasonography

Ultrasonography (US) consists of the use of ultrasound (usually at frequencies greater than 20,000 Hz) to form images of the inner regions of the organism. To do this, a probe emits waves, which bounce back at different speeds according to the type of tissue [87]. This technique can produce two different outputs. One is an image (frame) in which the structure of medical interest is located; the other is a complete video in which we can visualize, for example, blood flow or muscle contraction. Within the research on computer vision applied to US, most works analyze individual frames. Frames can be produced directly by the device or extracted from ultrasound videos; when extracting frames from videos, the frames containing regions of interest for the specific task are usually located in time and the rest are discarded [88]. Other less common and more integrative methodologies can use videos directly as input. They divide the video into frames, use a model (CNN) to extract features from each frame, and then integrate all the extracted features with a recurrent model (e.g., a long short-term memory network) following a timeline [2]. Focusing on applications, among works that performed classification we find the study of breast lesions [88, 89, 90], thyroid nodules [88, 91], liver fibrosis [88, 92], and focal liver disease [88, 93]. Regarding the detection of lesions, some works focused on papillary thyroid carcinoma [94] and breast cancer [95]. Continuing in the detection task, but moving from lesions to the detection of the fetal standard plane, several papers proposed different methodologies [88, 96, 97]. These works constituted important pillars for the improvement of automatic guidance tools in fetal US that could be embedded in image production software. Finally, in the segmentation task, several works addressed areas similar to those mentioned above, such as breast lesions [88, 98] and lymph node contouring [88, 99, 100]. However, there is also an application here with several works and important diagnostic value in the clinical setting: the detection of atheroma plaques in the carotid artery, whose automation would allow screening and prevention in a faster and more cost-effective way [101, 102]. In fact, a multicenter clinical study has already been published to evaluate the feasibility of the technique [102].
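A hedged PyTorch sketch of the frame-feature-plus-recurrence design described above follows; the ResNet-18 feature extractor, the LSTM width, and the clip shape are assumptions chosen for brevity, not the architecture of the cited works.

    import torch
    import torch.nn as nn
    from torchvision import models

    class VideoClassifier(nn.Module):
        """Per-frame CNN features integrated over time by an LSTM."""
        def __init__(self, num_classes=2):
            super().__init__()
            cnn = models.resnet18(weights=None)
            cnn.fc = nn.Identity()             # keep 512-d frame embeddings
            self.cnn = cnn
            self.lstm = nn.LSTM(512, 128, batch_first=True)
            self.head = nn.Linear(128, num_classes)

        def forward(self, clip):               # clip: (batch, frames, 3, H, W)
            b, t = clip.shape[:2]
            feats = self.cnn(clip.flatten(0, 1))      # one embedding per frame
            out, _ = self.lstm(feats.view(b, t, -1))  # follow the timeline
            return self.head(out[:, -1])       # decision from the last step

    clip = torch.rand(2, 16, 3, 224, 224)      # 2 stand-in clips of 16 frames
    print(VideoClassifier()(clip).shape)       # torch.Size([2, 2])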

8.6 Computational pathology

Classical pathology consists, very briefly, of the preservation, treatment, and staining of very small portions of tissue on slides. Stains can be standard ones, which highlight general structures such as nuclei or cytoplasm, or immunohistochemical stains, in which specific cellular markers are targeted [103]. Thanks to advances in storage capabilities and the availability of cloud computing, the last few years have seen a migration from direct microscopic observation of stained tissues to the digitization of slides. Digital slides are stored in a specific file type called a whole slide image (WSI), where the different magnification planes can be stored with very high compression. The scanning of slides and the production of WSIs for different uses, such as telepathology, constitute a branch of pathology called digital pathology [30, 104]. In addition, the increasing production and cataloging of WSIs for the diagnosis of different diseases made it possible to use them as training and testing material for computer vision algorithms. This application of algorithms to WSIs has been called computational pathology, and most published works use deep learning as the basis for different tasks. In a very general manner, one can describe the process of creating a computational pathology pipeline for any disease as follows. Once the WSIs of the pathology to be studied are available, the final magnification to work with must be selected (20×, 40×), and consecutive patches of the different zones (diseased and healthy tissue) must be generated [30]. The patches are generated because of the large size of the WSIs (the highest magnification can exceed 3 × 10^10 pixels). The patches are then used as input to the model, which learns, according to the task, to identify tumor and non-tumor zones [30]. On test WSIs, the same technique can be used to generate patches, process them with the model, and then reconstruct the final image as a heat map that marks the regions with the highest probability of belonging to a class (healthy or tumor). Jiang et al. categorize the implementation of computational pathology in oncology into five purposes: tumor diagnosis, subtyping, grading, staging, and prognosis [30]. Thus, we can find applications of these five purposes for breast cancer [30, 105, 106, 107, 108], lung cancer [30, 109, 110, 111], colorectal cancer [30, 112, 113, 114, 115], gastric cancer [30, 116, 117], prostate cancer [30, 118, 119], and thyroid cancer [30, 120, 121]. Another set of applications of computational pathology lies in automatic analysis for the identification of rejection in organ transplantation; several papers have been published for kidney [122, 123] and heart [124] transplantation.
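A minimal patching sketch is shown below, assuming the openslide-python bindings and a hypothetical slide file; the patch size, background threshold, and file name are illustrative, and in practice patches are usually written to disk rather than held in memory.

    import numpy as np
    import openslide

    slide = openslide.OpenSlide("case_001.svs")  # hypothetical WSI file
    level = 0                                    # highest magnification plane
    patch = 256

    width, height = slide.level_dimensions[level]
    patches = []
    for y in range(0, height - patch, patch):
        for x in range(0, width - patch, patch):
            region = slide.read_region((x, y), level, (patch, patch))
            pixels = np.asarray(region.convert("RGB"))
            if pixels.mean() < 220:              # skip mostly white background
                patches.append(((x, y), pixels))

Keeping the (x, y) coordinates of each patch is what later allows the per-patch predictions to be reassembled into the heat map described above.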


9. COVID-19 research landscape remodeling

The COVID-19 pandemic created a compelling need for innovation in testing, in order to generate solutions that were cheap, easy to use, fast, and ubiquitous. Since lung imaging is a useful diagnostic tool, during the pandemic many research groups began to look for solutions using AI and computer vision [125]. As lung imaging is an important resource in emergency medicine for optimal triage of patients with suspected COVID-19 infection, computer vision solutions aimed to provide a rapid analysis element that could speed up patient management times. From 2019 to 2020, a nearly two-fold increase in the number of publications on artificial intelligence applied to medical imaging was observed. Moreover, starting from zero publications in 2019, by 2020 about 15% of all deep learning research associated with medical imaging concerned COVID-19. With respect to the type of medical imaging, of all the proposed computer vision solutions, almost half (49.7%) focused only on X-rays; the remaining modalities were CT (38.7%), multimodality (10.2%), and ultrasonography (1.5%) [125]. As research progressed, the usefulness of ultrasound as a tool for the diagnosis and management of COVID-19 also became apparent. The ease of maintaining sterility, the possibility of performing bedside operations, the reduced time to obtain the image, and the possibility of using only one operator for the procedure made this imaging modality highly suitable during the pandemic. The group of Born et al. opened the door to the use of deep learning with ultrasound for COVID-19 screening [126, 127]. Several groups followed with different proposals, and today the field has grown considerably, extending applications to other pathologies [128, 129].


10. Challenges for the field

As briefly mentioned in a previous section, one of the biggest challenges facing the field of AI and computer vision applied to medicine is the availability of datasets. Generating general-purpose datasets, although time-consuming, can be done in a relatively labor-saving way, since classifying common images does not require a high degree of training. In fact, some search engines ask their users, when they access specific content, to first select from a group of images those that contain a traffic light; this generates labels, and in this way very large datasets are built. As also mentioned before, generating medical image datasets requires trained doctors to perform the same activity, which makes the process complex, time-consuming, and expensive [25, 26]. Another problem facing the field is the variability between samples from different hospital centers. As explained earlier, the greater the amount of data the algorithm trains on, the higher its generalization power. However, when the data come from different hospitals, even within the same city, samples of the same medical condition may vary in color, brightness, contrast, and position, to mention just a few factors. These variations arise from the different equipment used by hospitals and from the different sample-preparation techniques used by laboratories. This variability reaches its maximum expression in computational pathology; accordingly, the most recent works usually include studies with different scanners and from different hospitals to analyze the robustness of the model [124]. Another challenge that specifically affects computational pathology is the file size of each sample. As mentioned before, the WSIs of pathology samples contain a considerable number of pixels, especially at their highest magnification level. This makes it challenging to share the images and to store complete datasets. It is worth mentioning that operating digitally on these images also raises the hardware requirements considerably; parallelization or batch processing of images can therefore become complex, which increases processing times [130]. Finally, a crucial aspect must be addressed: operating with medical images requires a high degree of data protection and the use of anonymization techniques. In order to use hospital data, an ethics committee must first review the scope of the project. The ethics committee determines the degree of consent that patients must provide in order for their data to be used. In many retrospective studies, depending on the amount of private data being used, committees may approve the waiving of informed consent (IC). For example, if patients have already consented to the original study and no further identifying data will be added to the project, this may be a favorable setting for not requiring additional IC. However, that decision rests solely with the committee, and this entity will decide the constraints of the project. Ethics committees may be slow to grant approval, especially if the scope of the project is extensive, and should new ICs be required, this can also add cost and time to the project [131].

11. Innovating through challenges

The challenges that have crossed the field of AI and computer vision in healthcare have also promoted the search for solutions. This search has sparked ideas and produced some interesting proposals that are slowly being incorporated into daily practice. To begin with, the problem of generating labels for WSIs gave rise to a technique called multiple instance learning (MIL), which uses as labels only the diagnosis of the patient (usually available in EHRs). Thanks to this approach, one group managed to analyze 44,732 WSIs without any kind of manual data curation, dramatically speeding up project times [132]; a minimal sketch of the MIL idea appears at the end of this section. As also mentioned, the variability between samples from different hospitals is a problem that threatens the creation of large datasets. One of the solutions to this problem was stain normalization, a method that, in one of its variants, uses autoencoders and allows the color distribution of images to be standardized using another image as a template [133]. Thanks to this method, it is possible to obtain more homogeneous images, even if they come from different laboratories. Regarding the file size of WSIs, generally only a small part of the image is used by deep learning models for the task they perform. For example, as the image passes through the successive layers of a CNN, the information is reduced; in the last layers, only the essential information remains to complete the task with the least possible error. Using this principle, one group created the concept of neural compression: abstract representations of the WSI are built by passing it through successive steps of a convolutional network, removing noise at each step until only a small, compressed representation remains [134]. This concept could help store WSIs more efficiently, with only the information needed for the task. Finally, to provide the greatest privacy protection to patients and also speed up data exchange processes, blockchain networks and the InterPlanetary File System (IPFS) can be used. In this way, the information is decentralized, which reduces the risk of data leakage. In addition, the different hospitals participating in a study can provide the files, which can be fragmented and hashed according to IPFS. The entire process would be governed by one or several smart contracts, which would ensure that only authorized nodes contribute or extract data. Smart contracts may also contain portions of sensitive information, which would eliminate the need for human interaction and the possible breach of confidentiality [135, 136, 137].
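As an illustration of the MIL idea referenced above, the sketch below implements one simple variant (max-pooling over patch scores) in PyTorch; the feature dimension and bag size are illustrative, and the cited work [132] uses a more elaborate top-instance selection pipeline rather than this exact head.

    import torch
    import torch.nn as nn

    class MaxPoolMIL(nn.Module):
        """Minimal MIL head: the slide-level label supervises a whole bag of
        patch embeddings, so no patch-level annotations are needed."""
        def __init__(self, feat_dim=512, num_classes=2):
            super().__init__()
            self.scorer = nn.Linear(feat_dim, num_classes)

        def forward(self, bag):                 # bag: (num_patches, feat_dim)
            patch_logits = self.scorer(bag)     # score every patch
            return patch_logits.max(dim=0).values  # slide = strongest patch

    bag = torch.rand(1000, 512)  # stand-in embeddings of 1000 patches of one WSI
    print(MaxPoolMIL()(bag))     # one logit vector for the whole slide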

12. Conclusions

The use of AI and computer vision algorithms, especially neural networks, has advanced greatly in recent years. The various applications with different types of medical images have made numerous diagnostic and prognostic tools available to the medical field. The field of oncology has seen the greatest number of developments. In particular, computational pathology applied to oncology has developed a high degree of diversification in vision tasks, achieving models that can perform diagnosis, subtyping, grading, staging, and prognosis. However, just as innovative applications have emerged, the field has also had to overcome obstacles, some of which remain complex today. The difficulty of constructing medical datasets, the variability of samples between institutions, and mandatory data protection are some of them. These obstacles have nevertheless promoted the creation of ideas to overcome them, which is how we came to have neural compression and stain normalization, tools that can be great allies in expanding datasets. Finally, the COVID-19 pandemic was a major trigger for research in AI and computer vision applied to medical imaging, specifically lung imaging. Clinical feasibility proved to be a strong shaper of the research landscape: the ease of use, the short operating time, and the possibility of maintaining sterility were among the factors that promoted the use of ultrasonography, expanding deep learning research in this imaging modality. Despite these great advances, more studies must be done to further refine computer vision models and ensure that patients receive the best quality of medical care.

References

  1. Domingues I, Pereira G, Martins P, Duarte H, Santos J, Abreu PH. Using deep learning techniques in medical imaging: A systematic review of applications on ct and pet. Artificial Intelligence Review. 2020;53(6):4093-4160
  2. Barros B, Lacerda P, Albuquerque C, Conci A. Pulmonary covid-19: Learning spatiotemporal features combining cnn and lstm networks for lung ultrasound video classification. Sensors. 2021;21(16):5486
  3. Szeliski R. Computer Vision: Algorithms and Applications. Switzerland AG: Springer Nature; 2022
  4. Bhatt D et al. Cnn variants for computer vision: History, architecture, application, challenges and future scope. Electronics. 2021;10(20):2470
  5. LeCun Y et al. Handwritten digit recognition with a back-propagation network. Advances in Neural Information Processing Systems. 1989;2
  6. Lauzon FQ. An introduction to deep learning. In: 2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA). Institute of Electrical and Electronics Engineers; 2012. pp. 1438-1439
  7. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Communications of the ACM. 2017;60(6):84-90
  8. Khan A, Sohail A, Zahoora U, Qureshi AS. A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review. 2020;53(8):5455-5516
  9. Alzubaidi L et al. Review of deep learning: Concepts, cnn architectures, challenges, applications, future directions. Journal of Big Data. 2021;8(1):1-74
  10. Esteva A et al. Deep learning-enabled medical computer vision. NPJ Digital Medicine. 2021;4(1):1-9
  11. Datta S, Bernstam EV, Roberts K. A frame semantic overview of nlp-based information extraction for cancer-related ehr notes. Journal of Biomedical Informatics. 2019;100:103301
  12. Evans RS. Electronic health records: Then, now, and in the future. Yearbook of Medical Informatics. 2016;25(S 01):S48-S61
  13. Lehmann CU, Altuwaijri M, Li Y, Ball M, Haux R. Translational research in medical informatics or from theory to practice. Methods of Information in Medicine. 2008;47(01):1-3
  14. Huo Y, Deng R, Liu Q, Fogo AB, Yang H. Ai applications in renal pathology. Kidney International. 2021;99(6):1309-1320
  15. Goodfellow I, Bengio Y, Courville A. Deep Learning. Massachusetts Institute of Technology Press; 2016
  16. Temerinac-Ott M et al. Detection of glomeruli in renal pathology by mutual comparison of multiple staining modalities. In: Proceedings of the 10th International Symposium on Image and Signal Processing and Analysis. Institute of Electrical and Electronics Engineers; 2017. pp. 19-24
  17. George J, Skaria S, Varun V, et al. Using yolo based deep learning network for real time detection and localization of lung nodules from low dose ct scans. In: Medical Imaging 2018: Computer-Aided Diagnosis. Vol. 10575. SPIE; 2018. pp. 347-355
  18. Ginley B et al. Computational segmentation and classification of diabetic glomerulosclerosis. Journal of the American Society of Nephrology. 2019;30(10):1953-1967
  19. Gibson E et al. Automatic multi-organ segmentation on abdominal ct with dense v-networks. IEEE Transactions on Medical Imaging. 2018;37(8):1822-1834
  20. Cha KH, Hadjiiski L, Samala RK, Chan H-P, Caoili EM, Cohan RH. Urinary bladder segmentation in ct urography using deep-learning convolutional neural network and level sets. Medical Physics. 2016;43(4):1882-1896
  21. Suganthi K et al. Review of medical image synthesis using gan techniques. In: ITM Web of Conferences. Vol. 37. Édition Diffusion Presse Sciences; 2021. p. 01005
  22. Brock A, Donahue J, Simonyan K. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096. 2018
  23. Cossio M. Computational pathology in renal disease: A comprehensive perspective. arXiv preprint arXiv:2210.10162. 2022
  24. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the Institute of Electrical and Electronics Engineers Conference on Computer Vision and Pattern Recognition. 2016. pp. 770-778
  25. Alzubaidi L et al. Towards a better understanding of transfer learning for medical imaging: A case study. Applied Sciences. 2020;10(13):4523
  26. Tan C, Sun F, Kong T, Zhang W, Yang C, Liu C. A survey on deep transfer learning. In: International Conference on Artificial Neural Networks. Springer; 2018. pp. 270-279
  27. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. Journal of Big Data. 2019;6(1):1-48
  28. Alzubaidi L, Al-Shamma O, Fadhel MA, Farhan L, Zhang J, Duan Y. Optimizing the performance of breast cancer classification by employing the same domain transfer learning from hybrid deep convolutional neural network model. Electronics. 2020;9(3):445
  29. Samsir S, Sitorus JHP, Ritonga Z, Nasution FA, Watrianthos R, et al. Comparison of machine learning algorithms for chest x-ray image covid-19 classification. In: Journal of Physics: Conference Series. Vol. 1933. Institute of Physics Publishing; 2021. p. 012040
  30. Jiang Y, Yang M, Wang S, Li X, Sun Y. Emerging role of deep learning-based artificial intelligence in tumor pathology. Cancer Communications. 2020;40(4):154-166
  31. Corne J, Au-Yong I. Chest X-Ray Made Easy E-Book. 5th ed. Elsevier Health Sciences; 2022. p. 202. ISBN: 9780702082344, ISBN: 9780702082368
  32. Moses DA. Deep learning applied to automatic disease detection using chest x-rays. Journal of Medical Imaging and Radiation Oncology. 2021;65(5):498-517
  33. Pesce E, Withey SJ, Ypsilantis P-P, Bakewell R, Goh V, Montana G. Learning to detect chest radiographs containing pulmonary lesions using visual attention networks. Medical Image Analysis. 2019;53:26-38
  34. Kim Y-G et al. Short-term reproducibility of pulmonary nodule and mass detection in chest radiographs: Comparison among radiologists and four different computer-aided detections with convolutional neural net. Scientific Reports. 2019;9(1):1-9
  35. Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. PLoS Medicine. 2018;15(11):e1002683
  36. Stephen O, Sain M, Maduh UJ, Jeong D-U. An efficient deep learning approach to pneumonia classification in healthcare. Journal of Healthcare Engineering. 2019;2019:2040-2295
  37. Ravi V, Narasimhan H, Pham TD. A cost-sensitive deep learning-based meta-classifier for pediatric pneumonia classification using chest x-rays. Expert Systems. 2022;39(1):e12966
  38. Khan AI, Shah JL, Bhat MM. Coronet: A deep neural network for detection and diagnosis of covid-19 from chest x-ray images. Computer Methods and Programs in Biomedicine. 2020;196:105581
  39. Apostolopoulos ID, Mpesiana TA. Covid-19: Automatic detection from x-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine. 2020;43(2):635-640
  40. Vogado L, Araújo F, Neto PS, Almeida J, Tavares JMR, Veras R. An ensemble methodology for automatic classification of chest x-rays using deep learning. Computers in Biology and Medicine. 2022;145:105442
  41. Asif S, Zhao M, Tang F, Zhu Y. A deep learning-based framework for detecting covid-19 patients using chest x-rays. Multimedia Systems. 2022;28(13):1-19
  42. Lopes U, Valiati JF. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection. Computers in Biology and Medicine. 2017;89:135-143
  43. Lakhani P, Sundaram B. Deep learning at chest radiography: Automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology. 2017;284(2):574-582
  44. Taylor AG, Mielke C, Mongan J. Automated detection of moderate and large pneumothorax on frontal chest x-rays using deep convolutional neural networks: A retrospective study. PLoS Medicine. 2018;15(11):e1002697
  45. Jun TJ, Kim D, Kim D. Automated diagnosis of pneumothorax using an ensemble of convolutional neural networks with multi-sized chest radiography images. arXiv preprint arXiv:1804.06821. 2018
  46. Feng S, Liu Q, Patel A, Bazai SU, Jin CK, Kim JS, et al. Automated pneumothorax triaging in chest X-rays in the New Zealand population using deep-learning algorithms. Journal of Medical Imaging and Radiation Oncology. 2022;66(8):1035-1043. DOI: 10.1111/1754-9485.13393
  47. Wang X, Peng Y, Lu L, Lu Z, Bagheri M, Summers RM. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: Proceedings of the Institute of Electrical and Electronics Engineers Conference on Computer Vision and Pattern Recognition. 2017. pp. 2097-2106
  48. Liu J, Wang S, Linguraru MG, Yao J, Summers RM. Computer-aided detection of exophytic renal lesions on non-contrast ct images. Medical Image Analysis. 2015;19(1):15-29
  49. Vivanti R, Szeskin A, Lev-Cohain N, Sosna J, Joskowicz L. Automatic detection of new tumors and tumor burden evaluation in longitudinal liver ct scan studies. International Journal of Computer Assisted Radiology and Surgery. 2017;12(11):1945-1957
  50. Perez AA et al. Deep learning ct-based quantitative visualization tool for liver volume estimation: Defining normal and hepatomegaly. Radiology. 2022;302(2):336-342
  51. Gerard SE, Patton TJ, Christensen GE, Bayouth JE, Reinhardt JM. Fissurenet: A deep learning approach for pulmonary fissure detection in ct images. IEEE Transactions on Medical Imaging. 2018;38(1):156-166
  52. Choe J et al. Content-based image retrieval by using deep learning for interstitial lung disease diagnosis with chest ct. Radiology. 2022;302(1):187-197
  53. Dormer JD, Halicek M, Ma L, Reilly CM, Schreibmann E, Fei B. Convolutional neural networks for the detection of diseased hearts using ct images and left atrium patches. In: Medical Imaging 2018: Computer-Aided Diagnosis. Vol. 10575. Society for Optics and Photonics; 2018. pp. 671-677
  54. Hoori A, Hu T, Lee J, Al-Kindi S, Rajagopalan S, Wilson DL. Deep learning segmentation and quantification method for assessing epicardial adipose tissue in ct calcium score scans. Scientific Reports. 2022;12(1):1-10
  55. Liu J, Lu L, Yao J, Bagheri M, Summers RM. Pelvic artery calcification detection on ct scans using convolutional neural networks. In: Medical Imaging 2017: Computer-Aided Diagnosis. Vol. 10134. Society for Optics and Photonics; 2017. pp. 319-325
  56. Lyu J, Ling SH. Using multi-level convolutional neural network for classification of lung nodules on ct images. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Institute of Electrical and Electronics Engineers; 2018. pp. 686-689
  57. Näppi JJ, Pickhardt P, Kim DH, Hironaka T, Yoshida H. Deep learning of contrast-coated serrated polyps for computer-aided detection in ct colonography. In: Medical Imaging 2017: Computer-Aided Diagnosis. Vol. 10134. Society for Optics and Photonics; 2017. pp. 114-120
  58. Wesp P et al. Deep learning in ct colonography: Differentiating premalignant from benign colorectal polyps. European Radiology. 2022;32(7):1-11
  59. Oda H et al. Dense volumetric detection and segmentation of mediastinal lymph nodes in chest ct images. In: Medical Imaging 2018: Computer-Aided Diagnosis. Vol. 10575. Society for Optics and Photonics; 2018. p. 1057502
  60. Murugesan M, Kaliannan K, Balraj S, Singaram K, Kaliannan T, Albert JR. A hybrid deep learning model for effective segmentation and classification of lung nodules from ct images. Journal of Intelligent & Fuzzy Systems. 2022;Preprint:1-13
  61. Sourlos N, Wang J, Nagaraj Y, van Ooijen P, Vliegenthart R. Possible bias in supervised deep learning algorithms for ct lung nodule detection and classification. Cancers. 2022;14(16):3867
  62. Belharbi S et al. Spotting l3 slice in ct scans using deep convolutional network and transfer learning. Computers in Biology and Medicine. 2017;87:95-103
  63. Vaquero JJ, Kinahan P. Positron emission tomography: Current challenges and opportunities for technological advances in clinical and preclinical imaging systems. Annual Review of Biomedical Engineering. 2015;17:385-414
  64. Blanc-Durand P, Van Der Gucht A, Schaefer N, Itti E, Prior JO. Automatic lesion detection and segmentation of 18f-fet pet in gliomas: A full 3d u-net convolutional neural network study. PLoS One. 2018;13(4):e0195798
  65. Wang X et al. Heart and bladder detection and segmentation on fdg pet/ct by deep learning. BMC Medical Imaging. 2022;22(1):1-13
  66. Huang B, Chen Z, Wu PM, Ye Y, Feng ST, Wong CO, et al. Fully automated delineation of gross tumor volume for head and neck cancer on pet-ct using deep learning: A dual-center study. Contrast Media & Molecular Imaging. 2018:8923028. DOI: 10.1155/2018/8923028
  67. Zhao L, Lu Z, Jiang J, Zhou Y, Wu Y, Feng Q. Automatic nasopharyngeal carcinoma segmentation using fully convolutional networks with auxiliary paths on dual-modality pet-ct images. Journal of Digital Imaging. 2019;32(3):462-470
  68. Ypsilantis P-P et al. Predicting response to neoadjuvant chemotherapy with pet imaging using convolutional neural networks. PLoS One. 2015;10(9):e0137036
  69. Cheng D, Liu M. Combining convolutional and recurrent neural networks for alzheimer's disease diagnosis using pet images. In: 2017 IEEE International Conference on Imaging Systems and Techniques (IST). Institute of Electrical and Electronics Engineers; 2017. pp. 1-5
  70. Pereira G. Deep learning techniques for the evaluation of response to treatment in hodgkin lymphoma [Ph.D. dissertation]. University of Coimbra; 2018
  71. Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on mri. Zeitschrift für Medizinische Physik. 2019;29(2):102-127
  72. Fritz B, Fritz J. Artificial intelligence for mri diagnosis of joints: A scoping review of the current state-of-the-art of deep learning-based approaches. Skeletal Radiology. 2022;51(2):315-329
  73. Turkbey B, Haider MA. Deep learning-based artificial intelligence applications in prostate mri: Brief summary. The British Journal of Radiology. 2022;95(1131):20210563
  74. Verburg E et al. Deep learning for automated triaging of 4581 breast mri examinations from the dense trial. Radiology. 2022;302(1):29-36
  75. Cole JH et al. Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. NeuroImage. 2017;163:115-124
  76. Moeskops P et al. Evaluation of a deep learning approach for the segmentation of brain tissues and white matter hyperintensities of presumed vascular origin in mri. NeuroImage: Clinical. 2018;17:251-262
  77. Liu M, Zhang J, Adeli E, Shen D. Landmark-based deep multi-instance learning for brain disease diagnosis. Medical Image Analysis. 2018;43:157-168
  78. Yoo Y et al. Deep learning of joint myelin and t1w mri features in normal-appearing brain tissue to distinguish between multiple sclerosis patients and healthy controls. NeuroImage: Clinical. 2018;17:169-178
  79. Perkuhn M et al. Clinical evaluation of a multiparametric deep learning model for glioblastoma segmentation using heterogeneous magnetic resonance imaging data from clinical routine. Investigative Radiology. 2018;53(11):647
  80. Laukamp KR et al. Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric mri. European Radiology. 2019;29(1):124-132
  81. Bobo MF et al. Fully convolutional neural networks improve abdominal organ segmentation. In: Medical Imaging 2018: Image Processing. Vol. 10574. Society for Optics and Photonics; 2018. pp. 750-757
  82. Kline TL et al. Performance of an artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys. Journal of Digital Imaging. 2017;30(4):442-448
  83. Shehata M et al. Computer-aided diagnostic system for early detection of acute renal transplant rejection using diffusion-weighted mri. Institute of Electrical and Electronics Engineers Transactions on Biomedical Engineering. 2018;66(2):539-552
  84. Forsberg D, Sjöblom E, Sunshine JL. Detection and labeling of vertebrae in mr images using deep learning with clinical annotations as training data. Journal of Digital Imaging. 2017;30(4):406-412
  85. Lu J-T et al. Deep spine: Automated lumbar vertebral segmentation, disc-level designation, and spinal stenosis grading using deep learning. In: Machine Learning for Healthcare Conference, PMLR. 2018. pp. 403-419
  86. Wang X et al. Searching for prostate cancer by fully automated magnetic resonance imaging classification: Deep learning versus non-deep learning. Scientific Reports. 2017;7(1):1-8
  87. Callen PW. Ultrasonography in Obstetrics and Gynecology E-Book. Elsevier Health Sciences; 2011
  88. Wang Y, Ge X, Ma H, Qi S, Zhang G, Yao Y. Deep learning in medical ultrasound image analysis: A review. Institute of Electrical and Electronics Engineers Access. 2021;9:54310-54324
  89. Zhu Y-C et al. A generic deep learning framework to classify thyroid and breast lesions in ultrasound images. Ultrasonics. 2021;110:106300
  90. Jabeen K et al. Breast cancer classification from ultrasound images using probability-based optimal deep learning feature fusion. Sensors. 2022;22(3):807
  91. Pavithra S, Vanithamani R, Justin J. Classification of stages of thyroid nodules in ultrasound images using transfer learning methods. In: International Conference on Image Processing and Capsule Networks. Springer; 2021. pp. 241-253
  92. Nonsakhoo W, Saiyod S, Sirisawat P, Suwanwerakamtorn R, Chamadol N, Khuntikeo N. Liver ultrasound image classification of periductal fibrosis based on transfer learning and fcnet for liver ultrasound images analysis system. In: 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). Institute of Electrical and Electronics Engineers; 2021. pp. 569-575
  93. Căleanu CD, Sîrbu CL, Simion G. Deep neural architectures for contrast enhanced ultrasound (ceus) focal liver lesions automated diagnosis. Sensors. 2021;21(12):4126
  94. Zhou H, Liu B, Liu Y, Huang Q, Yan W. Ultrasonic intelligent diagnosis of papillary thyroid carcinoma based on machine learning. Journal of Healthcare Engineering. 2022:6428796. DOI: 10.1155/2022/6428796
  95. Yap MH et al. Breast ultrasound region of interest detection and lesion localisation. Artificial Intelligence in Medicine. 2020;107:101880
  96. Ni D et al. Selective search and sequential detection for standard plane localization in ultrasound. In: Yoshida H, Warfield S, Vannier MW, editors. Abdominal Imaging. Computation and Clinical Applications. ABD-MICCAI 2013. Lecture Notes in Computer Science. Vol. 8198. Berlin, Heidelberg: Springer; 2013. DOI: 10.1007/978-3-642-41083-3_23
  97. Wu L, Cheng J-Z, Li S, Lei B, Wang T, Ni D. Fuiqa: Fetal ultrasound image quality assessment with deep convolutional networks. Institute of Electrical and Electronics Engineers Transactions on Cybernetics. 2017;47(5):1336-1349
  98. Kumar V et al. Automated and real-time segmentation of suspicious breast masses using convolutional neural network. PLoS One. 2018;13(5):e0195816
  99. Zhang Y, Ying MT, Yang L, Ahuja AT, Chen DZ. Coarse-to-fine stacked fully convolutional nets for lymph node segmentation in ultrasound images. In: 2016 Institute of Electrical and Electronics Engineers International Conference on Bioinformatics and Biomedicine (BIBM). IEEE; 2016. pp. 443-448
  100. Sun S et al. Deep learning prediction of axillary lymph node status using ultrasound images. Computers in Biology and Medicine. 2022;143:105250
  101. Jain PK, Sharma N, Giannopoulos AA, Saba L, Nicolaides A, Suri JS. Hybrid deep learning segmentation models for atherosclerotic plaque in internal carotid artery b-mode ultrasound. Computers in Biology and Medicine. 2021;136:104721
  102. Jain PK et al. Unseen artificial intelligence—Deep learning paradigm for segmentation of low atherosclerotic plaque in carotid ultrasound: A multicenter cardiovascular study. Diagnostics. 2021;11(12):2257
  103. Feldman AT, Wolfe D. Tissue processing and hematoxylin and eosin staining. Methods in Molecular Biology. 2014;1180:31-43. DOI: 10.1007/978-1-4939-1050-2_3
  104. Jain RK et al. Atypical ductal hyperplasia: Interobserver and intraobserver variability. Modern Pathology. 2011;24(7):917-923
  105. Araújo T et al. Classification of breast cancer histology images using convolutional neural networks. PLoS One. 2017;12(6):e0177544
  106. Jiang Y, Chen L, Zhang H, Xiao X. Breast cancer histopathological image classification using convolutional neural networks with small se-resnet module. PLoS One. 2019;14(3):e0214587
  107. Ragab M, Albukhari A, Alyami J, Mansour RF. Ensemble deep-learning-enabled clinical decision support system for breast cancer diagnosis and classification on ultrasound images. Biology. 2022;11(3):439
  108. Balaha HM, Saif M, Tamer A, Abdelhay EH. Hybrid deep learning and genetic algorithms approach (hmb-dlgaha) for the early ultrasound diagnoses of breast cancer. Neural Computing and Applications. 2022;34(11):8671-8695
  109. Teramoto A, Tsukamoto T, Kiriyama Y, Fujita H. Automated classification of lung cancer types from cytological images using deep convolutional neural networks. BioMed Research International. 2017:4067832. DOI: 10.1155/2017/4067832
  110. Wang S et al. Comprehensive analysis of lung cancer pathology images to discover tumor shape and boundary features that predict survival outcome. Scientific Reports. 2018;8(1):1-9
  111. Chen Y et al. A whole-slide image (wsi)-based immunohistochemical feature prediction system improves the subtyping of lung cancer. Lung Cancer. 2022;165:18-27
  112. Korbar B et al. Deep learning for classification of colorectal polyps on whole-slide images. Journal of Pathology Informatics. 2017;8(1):30
  113. Kather JN et al. Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer. Nature Medicine. 2019;25(7):1054-1056
  114. Song JH, Hong Y, Kim ER, Kim S-H, Sohn I. Utility of artificial intelligence with deep learning of hematoxylin and eosin-stained whole slide images to predict lymph node metastasis in t1 colorectal cancer using endoscopically resected specimens; prediction of lymph node metastasis in t1 colorectal cancer. Journal of Gastroenterology. 2022;57(9):654-666
  115. Soldatov SA, Pashkov DM, Guda SA, Karnaukhov NS, Guda AA, Soldatov AV. Deep learning classification of colorectal lesions based on whole slide images. Algorithms. 2022;15(11):398
  116. Wang S et al. Rmdl: Recalibrated multi-instance deep learning for whole slide gastric image classification. Medical Image Analysis. 2019;58:101549
  117. Sharma H, Zerbe N, Klempert I, Hellwich O, Hufnagl P. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Computerized Medical Imaging and Graphics. 2017;61:2-13
  118. Schaumberg AJ, Rubin MA, Fuchs TJ. H&e-stained whole slide image deep learning predicts spop mutation state in prostate cancer. BioRxiv. 2018:064279
  119. Singhal N et al. A deep learning system for prostate cancer diagnosis and grading in whole slide images of core needle biopsies. Scientific Reports. 2022;12(1):1-11
  120. Guan Q et al. Deep convolutional neural network vgg-16 model for differential diagnosing of papillary thyroid carcinomas in cytological images: A pilot study. Journal of Cancer. 2019;10(20):4876
  121. Wang Y et al. Using deep convolutional neural networks for multi-classification of thyroid tumor by histopathology: A large-scale pilot study. Annals of Translational Medicine. 2019;7(18):468
  122. Yi Z et al. Deep learning identified pathological abnormalities predictive of graft loss in kidney transplant biopsies. Kidney International. 2022;101(2):288-298
  123. Hermsen M et al. Convolutional neural networks for the evaluation of chronic and inflammatory lesions in kidney transplant biopsies. The American Journal of Pathology. 2022;192(10):1418-1432
  124. Lipkova J et al. Deep learning-enabled assessment of cardiac allograft rejection from endomyocardial biopsies. Nature Medicine. 2022;28(3):575-582
  125. Born J et al. On the role of artificial intelligence in medical imaging of covid-19. Patterns. 2021;2(6):100269
  126. Born J et al. Pocovid-net: Automatic detection of covid-19 from a new lung ultrasound imaging dataset (pocus). arXiv preprint arXiv:2004.12084. 2020
  127. Born J et al. Accelerating detection of lung pathologies with explainable ultrasound image analysis. Applied Sciences. 2021;11(2):672
  128. Wang J et al. Review of machine learning in lung ultrasound in covid-19 pandemic. Journal of Imaging. 2022;8(3):65
  129. Zhao L, Lediju Bell MA. A review of deep learning applications in lung ultrasound imaging of covid-19 patients. BME Frontiers. 2022;2022
  130. Hemati S. Learning Compact Representations for Efficient Whole Slide Image Search in Computational Pathology. UWSpace. 2022. Available from: http://hdl.handle.net/10012/18637
  131. Astromskė K, Peičius E, Astromskis P. Ethical and legal challenges of informed consent applying artificial intelligence in medical diagnostic consultations. AI & Society. 2021;36(2):509-520
  132. Campanella G et al. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nature Medicine. 2019;25(8):1301-1309
  133. Janowczyk A, Basavanhally A, Madabhushi A. Stain normalization using sparse autoencoders (stanosa): Application to digital pathology. Computerized Medical Imaging and Graphics. 2017;57:50-61
  134. Tellez D, Litjens G, van der Laak J, Ciompi F. Neural image compression for gigapixel histopathology image analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2019;43(2):567-578
  135. Sun J, Yao X, Wang S, Wu Y. Blockchain-based secure storage and access scheme for electronic medical records in ipfs. IEEE Access. 2020;8:59389-59401
  136. Kumar S, Bharti AK, Amin R. Decentralized secure storage of medical records using blockchain and ipfs: A comparative analysis with future directions. Security and Privacy. 2021;4(5):e162
  137. Cossio M. Ethereum, ipfs and neural compression to decentralize and protect patient data in computational pathology. Cambridge Open Engage. 2022. Preprint
