Open access peer-reviewed chapter

Computer Simulation and the Practice of Oral Medicine and Radiology

By Saman Ishrat, Akhilanand Chaurasia and Mohammad Husain Khan

Submitted: August 9th 2019 | Reviewed: October 10th 2019 | Published: May 6th 2020

DOI: 10.5772/intechopen.90082

Abstract

The practice of Oral Medicine and Radiology has long been considered an art form. Collecting and collating the enormous amount of information each patient brings has always tested the best of our abilities as diagnosticians. However, as the tide of smartphones, cheaper data access, and automation rises, it threatens to wash away all that we have held sacrosanct about conventional clinical practice. In this tussle between what is traditional and what is tantalizing, it is time for us, as diagnosticians, to question how much we can accede to the invasion of algorithms. How does computer simulation affect the practice of diagnosis in the field of Oral Medicine and Radiology?

Keywords

  • computer simulation
  • artificial intelligence
  • diagnosis

1. Introduction

We have come a long way since our ancestors rolled off the first round piece of wood and invented the wheel. Over the years, that tiny stroke of luck has rolled on to become a vastly sophisticated industry that demands state-of-the-art technology, and even political and social change, to keep humanity on the move. Today, developments in one field set up ripples that affect the development of others; like seismic waves, the repercussions of change are felt all over. So, in a time defined largely by software, it is no surprise that technology is about to disrupt the practice of our profession. As with any change, it remains to be seen whether the growing dependence on computers will be for better or for worse.

2. Defining the diagnosis

According to Miller [1], diagnosis is far more than connecting the name of a disease or syndrome with a patient's findings. It is a recurring process in which the details of a patient count: history, symptoms, signs, how the disease process has unfolded over time, and eventually how that process affects the patient's life [1]. Diagnosing an individual draws on a body of information that includes history, symptoms, physical examination, laboratory tests and clinical image interpretation, which together point toward the etiology of the patient's illness. For some diseases, diagnosis may involve an individual's response to a "therapeutic intervention", which is studied for the characteristics of a particular disease [2]. For example, a dentist might prescribe an antibiotic for 1 week to a patient with a decayed tooth and tooth pain; if the antibiotic resolves the infection and pain, this shows that a secondary infection was present in the carious tooth. The ultimate step in the diagnosis of disease is to see whether the planned therapy is working, whether the disease is getting better or worse over time, and whether there have been considerable side effects to the therapy [2]. Evaluation of diagnoses is not limited to living individuals: post-mortem and autopsy examinations have proved to be a great tool in designing diagnostic parameters. Artificial intelligence and machine learning have lately capitalized on this complex series of diagnostic steps to assist humans in building modules for the diagnostic process. In the current scenario, the possibility that artificial intelligence (abbreviated henceforth as AI) and computer-aided systems may supersede humans in the diagnostic process is an impending reality [1].

3. Factors influencing the diagnosis

The diagnostic process is a complex transition that begins with the illness of the patient and ends in a result that serves as reference data which can be categorized. Diagnosis starts with the doctor asking the patient about the signs and symptoms of a disease or illness. A doctor's treatment specific to a patient's symptoms, and its outcome, is important for both doctor and patient in judging the efficacy of the treatment provided.

Diagnosis is an amalgamation of various processes that broadly depends on three main factors:

3.1 Doctor’s knowledge

“The eye sees all, but the mind shows us what we want to see.”

-Shakespeare.

The doctor's knowledge serves as the data set from which the brain retrieves likenesses and similarities to previously acquired cases. This helps in differentiating one disease process from another and in forming an opinion regarding a process, which is critical for diagnosis.

For example, while diagnosing a central giant cell granuloma of the jaw, the doctor must also be aware of the differential diagnoses and their appearances in order to rule them out.

3.2 Doctor’s experience

The experience of a doctor plays an important role in the diagnosis of a disease. Experience enriches the quality of the data, refining the data set and creating subsets within it. It is here that heuristics step into the picture: these subsets simplify the data and make it comprehensible. Experience is also governed by the variety of cases a doctor sees, which in turn is greatly influenced by the doctor's geographic location.

For example, for diseases that are predominant in certain areas of the world, such as Lyme disease in the north-eastern United States, physicians practicing there will have more experience with those cases and will form a more accurate diagnosis than physicians in other parts of the world. The age-old dictum at work here: if you hear hooves, think horses, not zebras.

3.3 Influencing factors

The human brain is a complex structure that plays a pivotal role in making a diagnosis. As Charles Sherrington observed (c. 1920), some of the deepest mysteries facing science concern the higher functions of the central nervous system: perception, memory, attention, learning, language, emotion, personality, social interaction, decision-making, motor control, and consciousness. Nearly all psychiatric and many neurological disorders are characterized by dysfunction in the neural systems that mediate these processes. In fact, all aspects of human behavior, and hence human society, are controlled by the human brain: economics and decision making, moral reasoning and law, arts and esthetics, social and global conflict, politics and political decision making, marketing and preference, and so on. These functions are greatly altered by a person's stress level and mood. Thus, a diagnosis is also greatly influenced by the doctor's state of mind and stress level (Figure 1).

Figure 1.

Diagnostic paradigm.

4. Computer simulation: understanding AI (artificial intelligence) in computer-aided diagnosis (CAD)

AI, and how to use it in CAD, has become one of the hottest research topics in medical radiology, in both imaging and diagnostics. Although research in CAD is well established and growing, most radiologists do not yet use CAD in their daily routine. Measured against requirements such as performance, regulatory compliance, reading-time reduction and cost efficiency, the AI underlying CAD for detection and quantification is even today not as sophisticated or dependable as the human mind. Overall, the performance of CAD systems is still a major bottleneck to adoption. However, the usual machine learning and AI strategies can be used to improve CAD, using past and public databases for training and validation. This will create cognitive AI that helps tackle corner cases in CAD and eventually yields superior algorithms [3].

Yet, all said and done, there is a global consensus that a crisis is in the making for radiology, one that the advent of computer simulation may help address. Not only has the number of imaging studies gone up, but the number of images per study has also drastically increased [4]. Radiology is becoming a victim of its own success: the gap between the overall workload and the number of radiologists has widened dramatically, resulting in increased costs. New solutions are therefore needed to handle the spurt of data and workload. Computer simulation may be the answer to this evident problem; the key is to use AI and CAD to quicken the diagnostic process and minimize diagnostic errors.

Although CAD is far more than just a detection tool, it is now widely used as a general term for detection, including aided extraction of quantitative data from radiology images. An interesting fact about algorithm development is that detection and quantification both rest on the same underlying principle of algorithm creation.

The overall growth in computer simulation and CAD is driven by Moore's law, i.e., computational power doubles every 2 years [5]. This has held for the last five decades and should continue for at least another decade. Futurists like Kurzweil [6] talk about the singularity of AI: within a decade, a $1000 computer will have the computational strength of a human brain, and eventually the power of hundreds of human brains by 2040. The availability of cheaper, faster hardware has allowed quicker computations and bigger, cleaner databases for algorithm training, all of which has led to quicker and better CAD performance and results.

5. History of computer simulation and artificial intelligence (AI)

In the 1980s, the Kurt Rossmann Laboratories for Radiologic Image Research in the Department of Radiology at the University of Chicago first started systematic research into developing and designing CAD systems for the diagnosis of disease (see "Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current Status and Future Potential"). Before this, a significant amount of study and research was under way on the picture archiving and communication system (PACS) [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25].

As a matter of fact, PACS was useful for storing images and reducing hospitals' storage costs, but at the time it was not anticipated that the stored images, nowadays referred to as data, might be of any clinical significance to doctors and clinicians. Storage was one of the fringe benefits of PACS; the major value addition was the formation of a sample set, or data. Researchers started thinking about how this data could help doctors in the diagnostic process, which led to the theory of computer-aided diagnosis (CAD) and AI-led diagnosis [26].

6. Rise of artificial intelligence (AI) in healthcare: transformative future

The sophistication of artificial intelligence (AI) in doing what humans do has increased by leaps and bounds in the last decade. By 2019, AI had become a fact, and we have seen a shift in the conversation. We are no longer answering the question: what is AI? Today, the primary concern is answering the question: how can we utilize this plethora of information to replicate human actions more efficiently and faster? No other sector is answering this question better than healthcare. Artificial intelligence and computer simulations are no longer a novelty and, if things progress the way they are, may soon be the norm.

The use cases for AI in healthcare are vast and ever evolving. Just as AI has become a seminal part of our daily lives, it is also transforming our healthcare ecosystem. Applied strategically to this ecosystem, AI has the ability to deeply impact not only the way healthcare is delivered but also its overall cost structure.

7. Innovation curve

Today, AI has pushed innovation in healthcare to the next level by combining training data sets with cognitive computing to draw new insights and correlations. This has been possible because of predictive capabilities, complex algorithms and analytics that deliver clinically relevant real-time data, transforming healthcare in new ways.

First and foremost is helping people stay healthy, eventually reducing the frequency of patient-doctor interaction. New health apps encourage people to live a healthy lifestyle, and AI equips healthcare professionals to better understand everyday health patterns and the needs of their patients. The increased use of consumer hardware such as the Apple Watch and other medical devices, combined with AI, is being applied in pilot projects such as detecting early-stage heart disease, helping healthcare professionals to detect and monitor underlying life-threatening events at earlier, more treatable stages. With more and more money invested in projects like Apple Health and the CommonHealth platform on Android, the upheavals in how we practice as diagnosticians are going to be tectonic.

Recently, life-threatening diseases such as cancer have been detected more accurately by AI in their early stages. According to a study by the American Cancer Society, a large proportion of mammograms eventually result in false positives, with as many as 1 in 2 healthy women told at some point that they may have cancer when they do not. Using AI in the review and interpretation of mammograms may help avoid unnecessary biopsies.

8. Simulation and requirements for artificial intelligence (AI)-aided systems

AI has to meet several demands to be used widely in clinical practice. The four requirements that we think are of paramount importance for AI-guided computer simulations to be helpful in the field of Oral Radiology are:

  1. AI should improve radiologists’ performance—the efficacy and accuracy with which aberrancies in a scan are detected should improve when the AI system is used [3].

  2. AI should save time—it should save the radiologist time in detecting and diagnosing a disease. An important factor in the efficacy of any machine is that it reduces time; if an AI system does not shorten the diagnostic process, it is not helping the radiologist to its full potential [3].

  3. AI must be seamlessly integrated into the workflow—the AI system should be part of the diagnostic process without being a process in itself. It should make the diagnostic process easy and viable for doctors [3].

  4. AI should not impose liability concerns—the AI system used should be HIPAA compliant, with a foolproof closed system to prevent any data breach [3].

Most AI systems today, and therefore the diagnostic computer simulations based on them, do not meet all of these requirements, which is why most applications described in the rapidly growing body of scientific literature on CAD are not widely used in clinical practice.

9. Artificial intelligence (AI) for lesion detection

At present, existing CAD systems are positioned as complementary tools that point radiologists to certain images needing further evaluation. CAD has a limitation, though: it does not detect all potential lesions, and relying on it alone would limit the radiologist to only the areas the CAD system has identified. It is therefore imperative that the radiologist evaluate the complete image; the CAD system can then help detect lesions the radiologist might have missed [27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41].

10. The heart of computer simulation: working of artificial intelligence (AI) system

It is very complicated and difficult for computers to decipher radiologic images. To understand an image, the CAD system breaks the problem into multiple parts and follows a step-by-step process to conclude whether a specific area on a radiologic image looks suspicious. It is therefore important that radiologists have a basic understanding of this process, so they can comprehend why CAD output deviates from the expected, even when the cause is human error.

10.1 Preprocessing

Most AI systems take an input or baseline dataset and preprocess the data before it undergoes further changes in the scanning software. A series of calibrations refines the data, including resampling and removing noise from the image. The basic reason for this step is to ensure that the existing dataset evolves. Since the AI system works on knowledge from the previous data set, which is ultimately binary (0s and 1s), basic aberrancies from the data set can easily be pointed out for the doctor or radiologist's attention and then studied (Figure 2) [3].

Figure 2.

(a) The crude CBCT reconstruction image as captured from the patient and stored in the system. (b) Enhanced image after preprocessing, with color defects fixed. (c) Enhanced image after preprocessing, with noise defects fixed.
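The preprocessing described above can be caricatured in a few lines. This is a hypothetical toy, not code from any real CAD product: it resamples a tiny grayscale "image" (a list of pixel rows) and suppresses an impulse-noise spike with a 3x3 median filter; the pixel values and helper names are invented for illustration.

```python
from statistics import median

def downsample_2x(image):
    """Resample by keeping every second pixel in each direction."""
    return [row[::2] for row in image[::2]]

def median_filter_3x3(image):
    """Replace each interior pixel with the median of its 3x3 neighbourhood."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

# A 4x4 patch with one bright noise spike in the middle.
patch = [
    [10, 10, 10, 10],
    [10, 255, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
cleaned = median_filter_3x3(patch)
print(cleaned[1][1])         # the spike is replaced by the neighbourhood median: 10
print(downsample_2x(patch))  # the 2x2 resampled patch
```

Real systems do far more (calibration, histogram normalization, artifact correction), but the shape of the step is the same: transform the raw capture into a cleaner dataset before detection begins.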

10.2 Demarcation

The second step is segmentation, or demarcation of the normal structures in the data set. This step includes the demarcation and consequent categorization of anatomic regions. It is the toughest step in the process, the most studied area, and the one that decides the accuracy of the AI system. Whereas human knowledge relies greatly on differentiating structures and artifacts from prior knowledge, the AI system depends greatly on its data sets: the larger the data set, the more refined the algorithms are going to be (Figure 3) [28].

Figure 3.

CBCT panoramic reconstruction with demarcation of the anatomic structure showing the lower border of the mandible.
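As a toy illustration of the demarcation step, a minimal sketch might label each pixel of a small intensity map as hard structure or background using a fixed threshold. Real segmentation of anatomic regions is far richer than this; the threshold value and the intensity array below are invented assumptions.

```python
def segment(image, threshold=100):
    """Return a binary mask: 1 where intensity suggests a hard structure
    (e.g. bone on a CBCT slice), 0 for background/soft tissue."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

# Invented intensity values: left half soft tissue, right half bone-like.
scan = [
    [20, 30, 180, 200],
    [25, 40, 190, 210],
]
mask = segment(scan)
print(mask)  # [[0, 0, 1, 1], [0, 0, 1, 1]]
```

In practice the "threshold" is replaced by learned models trained on large labelled data sets, which is why, as noted above, the size and quality of the data set governs the accuracy of this step.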

10.3 Detection of the aberrancy

The next big step is detecting the aberrancy: the system identifies locations that do not match the subsets of normal. These locations are called candidates, and may be polyps, tumors, calcifications or areas of dysplasia [3]. This step is driven by sensitivity: the AI system has to make sure that sensitivity crosses a certain threshold, whereas specificity can be low. It is important for us to understand that these aberrancies are not necessarily anomalies; it remains for the doctor's acumen to decipher and diagnose them (Figure 4).

Figure 4.

CBCT panoramic reconstruction of a case of cherubism, showing aberrancy from the normal data set.
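A minimal, hypothetical candidate detector makes the sensitivity-over-specificity trade-off above concrete: any pixel deviating from a "normal" baseline by more than a deliberately low threshold is flagged as a candidate. The low threshold favours sensitivity (few missed lesions) at the cost of specificity (more false candidates); the baseline and threshold values are invented.

```python
def detect_candidates(image, baseline=50, threshold=30):
    """Return (row, col) locations whose deviation from the baseline
    exceeds the (deliberately low) threshold."""
    return [(y, x)
            for y, row in enumerate(image)
            for x, px in enumerate(row)
            if abs(px - baseline) > threshold]

# Invented intensities: one strong deviation and one weak one.
scan = [
    [48, 52, 51],
    [50, 120, 49],
    [47, 85, 53],
]
candidates = detect_candidates(scan)
print(candidates)  # [(1, 1), (2, 1)] - both deviations flagged
```

Both locations become candidates even though only one may turn out to be a lesion; it is the later scrutiny and classification stages, and ultimately the doctor, that sort true findings from false ones.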

10.4 Scrutinizing the aberrancy

As mentioned previously, the extracted data has to be very sensitive for the next stage, which is scrutinizing the aberrancy. In this step, each area is closely analyzed to rule out normal variation. This is done using the vector-space paradigm: each aberrancy from normal is assigned features that can be computed. Each candidate has features with their own mean values and standard deviations, and border gradients are described accordingly. This is paramount because each aberrancy flagged in the last step is now represented by a vector. These vectors can be represented mathematically in a space, which is essential for gauging anything in the machine's learning pattern, the basis of AI systems. Such pattern analysis and machine learning can be used in any system to quantify the data and convert it into the computer's language [29, 30, 31, 32].
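The vector-space idea above can be sketched as follows: each candidate region is summarized as a feature vector, here just mean intensity, standard deviation, and a crude border gradient. The choice of features and all the pixel values are illustrative assumptions, not the features of any real system.

```python
from statistics import mean, stdev

def feature_vector(region_pixels, border_pixels):
    """Map a candidate region to a point in feature space:
    (mean intensity, intensity spread, border gradient range)."""
    return (mean(region_pixels),
            stdev(region_pixels),
            max(border_pixels) - min(border_pixels))

candidate_pixels = [120, 130, 125, 135]   # invented region intensities
border_pixels = [60, 110, 70, 125]        # invented border intensities
v = feature_vector(candidate_pixels, border_pixels)
print(v)  # (127.5, 6.4549..., 65)
```

Once every candidate is a vector like `v`, distances and decision boundaries in this space become computable, which is exactly what the classification stage that follows exploits.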

10.5 Stratification

The learned patterns and recognitions are classified and stratified in the space where the normal and abnormal candidates exist. The next step is training on the classified data set. Consistency is a big deciding factor here: normal candidates are classified consistently into one subset, while abnormal candidates are classified consistently into the other. This trains the AI system to form an opinion regarding the classified, stratified data. The training is done with the help of a person with prior knowledge, who can feed in data to provide a reference standard with correct information. For example, the doctor can point out the location of cherubism on the CBCT panoramic reconstruction view from prior knowledge of its location. Data such as age can also be a deciding factor fed into the system by the doctor, which too aids the diagnostic process [29, 30, 31, 32].

This is a very complicated step, because some basic aberrancies that are not disease may occur in the same locations, and some diseases may not appear in their classical locations. Therefore, data enrichment and stratification should be done on a regular basis.

The AI system, as discussed before, learns and adapts from the quality of its data set, so more exposure to data is needed to refine the results. AI researchers are in an ongoing process of learning and experimenting with classification schemes.
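The stratification and training described above can be caricatured as a nearest-centroid classifier: labelled feature vectors ("normal" vs. "abnormal", the reference standard supplied by a knowledgeable person) are reduced to class centroids, and new candidates are assigned to the nearest one. Real CAD systems use far more capable classifiers; the feature values below are invented.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return tuple(sum(component) / len(vectors) for component in zip(*vectors))

def classify(v, centroids):
    """Return the label of the nearest class centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(v, centroids[label]))

# Invented training data: (mean intensity, border gradient) per candidate,
# labelled by an expert as the reference standard.
training = {
    "normal":   [(50.0, 2.0), (55.0, 3.0), (52.0, 2.5)],
    "abnormal": [(120.0, 15.0), (130.0, 18.0)],
}
centroids = {label: centroid(vs) for label, vs in training.items()}
print(classify((58.0, 4.0), centroids))    # normal
print(classify((115.0, 12.0), centroids))  # abnormal
```

The "consistency" the text stresses corresponds here to the two classes forming well-separated clusters; when normal variants and diseases overlap in feature space, as the paragraph above warns, no centroid separates them cleanly, which is why regular data enrichment matters.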

10.6 Output of AI

The process comes to completion when a degree of suspicion is assigned to each candidate in the stratum or group. The threshold decided by the doctor is the parameter against which the degree of suspicion is tested. Candidates whose degree of suspicion crosses the threshold are demarcated with identifiers such as circles or arrows. The AI system learns from the threshold and the final outcomes, enriching its data and data subsets; results from previous learning feed the computer's learning curve. For example, in the CBCT shown in Figure 4, if features of cherubism are detected and the doctor diagnoses the case as cherubism, the result is saved in the system for future reference in other cases. If the CBCT of another case with a swollen angle of the jaw presents similar data, the computer uses its previous knowledge to judge whether the indicated region is a true or false positive [3].
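The output step above reduces to a simple filter: each candidate carries a degree of suspicion, and those crossing the clinician-chosen threshold are returned as marked findings (the circles or arrows on the image). All scores and locations below are illustrative values.

```python
def mark_findings(scored_candidates, threshold=0.7):
    """Keep only candidates whose degree of suspicion crosses the
    doctor-chosen threshold; these get visual markers on the image."""
    return [(loc, score) for loc, score in scored_candidates
            if score >= threshold]

# Invented (location, suspicion) pairs from the earlier pipeline stages.
scored = [((12, 40), 0.91), ((33, 18), 0.42), ((25, 25), 0.75)]
print(mark_findings(scored))  # [((12, 40), 0.91), ((25, 25), 0.75)]
```

Lowering the threshold surfaces more candidates (higher sensitivity, more false positives); raising it does the opposite, which is why the text leaves the choice of threshold with the doctor rather than the system.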

11. Clinical applications

There is a wide range of applications of AI and CAD across the various fields of medicine. These applications have received premarket approval (PMA), the pathway covering devices shown to pose serious levels of risk to users; in such cases, FDA guidelines anticipate that newer devices could be safer and more effective in the near future.

In the last decade there has been a considerable increase in their accuracy as diagnostic-aid systems [25, 26]. AI systems in mammography have been successful in distinguishing mass lesions from micro-calcifications [42]. For micro-calcifications, AI systems show high performance and sensitivity, which doctors make great use of when forming an educated decision during diagnosis. Data from multiple views, and the combination of different modalities such as ultrasonography (US), magnetic resonance (MR) imaging and digital breast tomosynthesis, have shown real promise in enhancing diagnosis and enriching the data for future diagnoses [34].

12. Computer simulation and the future of diagnostic expertise

In 1996, the American Journal of Roentgenology published a report of three cases of diagnostic errors in radiology. After assessing the clinicians' defense of their decisions, the author concluded that the radiologists missed the diagnosis because they did not think of the lesion, rather than because they did not know of it. This is popularly known as the Aunt Minnie effect: if a woman in a picture looks like Aunt Minnie, she must be Aunt Minnie [43]. Improving patient care means looking for ways to minimize diagnostic errors, an umbrella term that covers factors as varied as personal or social bias, heuristics and even failure of perception [44]. Debiasing programs work, but may take a lifetime to refine, even for doctors willing to accept error on their part [44]. In the present world, however, where software changes by the day, we do not have that luxury of time.

Populations have increased, but investments in health and education have not kept pace. Add to that an increasingly unstable world, both politically and environmentally. All these changes mean shifting populations and overburdened hospital-based care providers. With rising longevity and polypharmacy, one diagnostician may not be, and indeed cannot be, held accountable for missing a few pertinent points here and there while assessing a case. In today's outpatient practice, especially that of Oral Medicine and Radiology, there is hardly any time to think. And, much to our chagrin, the reduced pay and increased working hours of hospital-based doctors only mean what the New England Journal of Medicine recently referred to as a subsistence-level intellectual mode [45]. In such a scenario, the probability of diagnostic errors grows by leaps and bounds.

Where computer simulation steps in is this very ripe environment of piling information and very few humans qualified to process it. As of now, there are gaping holes in the way AI processes the information it collects, and the human mind is, as yet, ahead of it. AI is heavily reliant on the data sets its algorithms work on, and the stark problem with these data sets is that they are not representative of the wide gamut of humanity that needs treatment. For AI to pick up a patient on its scanner, the patient must first have access to a device connecting them to its virtual world; disadvantaged or displaced populations may not have that. This inherent bias leaves the system's accuracy, and therefore its dependability, firmly in the field of speculation.

Algorithms, however, tend to improve themselves over time, deleting redundancies and communicating across other platforms as they pick up more and more data, as discussed earlier in the chapter. With the advent of quantum computing, in what The Economist has called the field's Sputnik moment, Google has recently demonstrated the ability to perform in little over 3 minutes a task that might take the most powerful classical computers about 10,000 years to complete [46]. In a world where CommonHealth on Android and Apple Health on iOS will increasingly guide the logistics of collecting and collating data, medical or otherwise, from electronic health records, medical devices, software, apps and smartphones, the art of diagnosis will witness seismic changes. The quality of the information used to create data sets is one of the many hurdles.

Yet, as the horse carriage eventually gave way to the sophisticated car, we will have to yield part of the field to computer simulation. But as we do, we must remember that the algorithm does not, as yet, hold totalitarian power over the mind. At the end of the day, the most important part of patient care is, after all, care; and a responsive, warm doctor in the field of Oral Medicine and Radiology is still very much preferred over subservience to any cold computing device. Computer simulations, then, are just another set of tools in our armamentarium, and we need to research how to use them for the benefit and better experience of everyone involved.

13. Conclusion

In a world increasingly driven by ever-evolving computation, diagnostic medicine has to adapt to change. Harnessing the power of AI to prevent logjams in channeling the information collected into a cohesive whole will benefit both the doctor providing care and the patient receiving it. Computer simulation is a vital tool; how, and how much, it will change the topography of diagnostic medicine remains to be seen.

How to cite and reference

Saman Ishrat, Akhilanand Chaurasia and Mohammad Husain Khan (May 6th 2020). Computer Simulation and the Practice of Oral Medicine and Radiology. In: Dragan M. Cvetković and Gunvant A. Birajdar (eds), Numerical Modeling and Computer Simulation. IntechOpen. DOI: 10.5772/intechopen.90082.
