Open access peer-reviewed chapter

AI in Healthcare: Implications for Family Medicine and Primary Care

Written By

Thomas Wojda, Carlie Hoffman, Jeffrey Jackson, Tracey Conti and John Maier

Submitted: 27 February 2023 Reviewed: 01 April 2023 Published: 29 April 2023

DOI: 10.5772/intechopen.111498


Abstract

Artificial Intelligence (AI) has begun to transform industries including healthcare. Unfortunately, Primary Care and the discipline of Family Medicine have tended to lag behind in the implementation of this novel technology. Although the relationship between Family Medicine and AI is in its infancy, greater engagement from Primary Care Physicians (PCPs) is a must given the increasing shortage of practitioners. AI has the potential to alleviate this problem as well as accelerate the field’s development. Considering that the vast majority of PCPs utilize Electronic Medical Records (EMRs), the field is ripe for innovation; regrettably, much of the information available remains unused for practice disruption. Primary Care offers a large data platform that can be leveraged with technology to deliver ground-breaking paths forward and provide better comprehensive care for a wide variety of patients from various backgrounds. The purpose of this chapter is to provide context to AI implementation as it relates to Primary Care and the practice of Family Medicine.

Keywords

  • artificial intelligence
  • machine learning
  • technology
  • primary care
  • family medicine
  • screening
  • management
  • treatment

1. Introduction

Although Artificial Intelligence (AI) in healthcare has recently become trendy, the concept is not new. Alan Turing developed the concept of machines that could think in the 1950s [1]. Soon thereafter, John McCarthy proposed the term “Artificial Intelligence” to describe computers that could perform the cognitive functions of humans. Since these early propositions, healthcare has seen a monumental increase in data available for interpretation. Consequently, the power and usefulness of computers in data analysis has become paramount to the success of a healthcare organization, as it is unrealistic for individuals and even highly organized teams to extract the important information unaided. Subsequently, various medical societies and disciplines have invested heavily in AI to meet the growing demands of modern medicine. Alarmingly, Family Medicine appears to lag behind other specialties in advancing its footprint in the AI healthcare space. Specifically, the American Board of Family Medicine performed an extensive literature review in 2020 and found no publications for this specialty during that time, despite knowledge that Family Medicine scholars were actively pursuing research related to Primary Care and AI [2].

The importance of the discipline of Family Medicine being actively involved in AI research cannot be overstated, as historically this profession has lagged behind in the adoption of new technology. Subsequently, the discipline and, more importantly, the patients have needlessly suffered for it. For example, when the Health Information Technology for Economic and Clinical Health (HITECH) Act was passed, it was widely assumed that the introduction of electronic medical records (EMRs) would enhance the patient, physician, and organizational experience through the optimization of efficient, equitable, and effective healthcare delivery [3]. Certainly, EMRs have had numerous positive impacts from both individual patient and systems-based perspectives [4]. No one would rightly argue for a return to paper charts and hand-written notes. Nevertheless, we cannot ignore the role the implementation of EMRs has had in increased physician burnout and decreased face-to-face time with patients. Moreover, because of the lack of Family Medicine involvement in the rollout of EMRs, many in our field strongly feel that their usability, interoperability, and applicability have fallen short of the initially intended goals. This is likely due to a lack of engagement from family physicians in the design, advocacy, and implementation of EMRs. Accordingly, there is a rising concern that healthcare technology has grown to suit hospital administrators more than patients and physicians [4].

With the advancement of AI, the specialty of Family Medicine must be an active participant in order to influence this transformation. The relationship-oriented nature of Family Medicine will allow technology to focus on providing value to patients and communities as opposed to administrators and technology companies. Healthcare costs continue to escalate, and without Family Medicine providers who are focused on providing value, AI will likely only exacerbate the sentiment that only those who can afford such advances in healthcare will benefit from them. The ethos of Family Medicine is that the development of the therapeutic relationship optimizes treatment outcomes and positively affects health on a population level. Without this belief, AI will only further reduce the patient-physician interaction through increased screen time. Family Medicine practitioners pride themselves on seeing a diverse patient panel. Consequently, if Family Physicians’ voices are not heard, AI may amplify existing biases. Specifically, algorithms used in recognition programs have demonstrated challenges in recognizing persons of color, secondary to limited representation in the underlying data [5].

Computers process information faster, more efficiently, and more systematically than humans. They make judgments more consistently and respond to variations faster. Currently, computers perform automated repetitive tasks once assigned to humans. Clinical decision support systems alert providers when immunizations are due, automatic Atherosclerotic Cardiovascular Disease (ASCVD) risk calculators provide myocardial infarction risk assessment, and alerts for potential drug–drug interactions or allergic reactions appear before a medication is administered or prescribed. Moreover, AI can already complete challenging multifactorial tasks to create an accurate differential diagnosis and an evidence-based assessment and plan. Worryingly, machines autonomously managing patients may give administrators pause as to the value of human physicians.

Family Practitioner engagement in AI is a must. Primary care offers the largest healthcare delivery platform and provides an influential stage for data use [6]. Family Practitioners are operations specialists who can pilot approaches for the adoption of scientifically validated AI tools. Family Medicine practitioners focus on patient-oriented outcomes and will publish results that matter to patients [7]. Family Medicine practitioners also operate across extensive delivery systems such as mental health, home health, and public health, and this familiarity with various healthcare stakeholders will enhance the performance of AI tools.

Because the amount of data we will manage will only increase, there needs to be a strategy both for the here-and-now and beyond. Without one, Primary Care risks becoming incapacitated, subjugated to metrics, and more prone to burnout. Ultimately, AI should be used to enhance the time we spend with patients and complement the Family Medicine experience. Natural language processing (NLP) enables computers to comprehend, interpret, and use an individual’s language. Moreover, AI may unearth material from prior visits, images, labs, and health data and compile it into the right documents so providers can focus on the human connection [8]. AI chatbots can replicate human dialog and assist individuals in receiving optimal care through patient surveillance between visits and provider consultations [3]. Patients suffering from congestive heart failure (CHF) may broadcast their weight through internet-enabled scales, have their diuretic doses titrated, or ensure that their worsening symptoms are examined by their PCP. Patients may be reminded of health maintenance services such as breast and colon cancer screening, provided education for shared decision making, and helped to create referrals, book appointments, and organize tests that need to be performed [9]. AI may mine environmental, EMR, claims, and pharmaceutical data and integrate them to identify and treat high-risk patients afflicted with asthma, myocardial infarction, and opioid overdose. AI may explore massive amounts of data, convey measures, close care gaps, and most importantly allow providers to spend more time with patients.
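To make the remote-monitoring example concrete, here is a minimal sketch of the kind of rule that could sit behind a CHF weight-surveillance chatbot: it watches daily readings from an internet-enabled scale and routes a rapid weight gain to the PCP for diuretic review. The thresholds, function name, and message wording are illustrative assumptions rather than a clinical protocol, and any titration decision remains with the care team.

```python
from datetime import date

# Illustrative thresholds only; real CHF monitoring protocols are set by the care team.
GAIN_1DAY_KG = 1.0   # roughly 2 lb since the previous reading
GAIN_3DAY_KG = 2.0   # roughly 5 lb over three days

def flag_weight_trend(daily_weights):
    """daily_weights: list of (date, kg) tuples sorted by date.
    Returns a message for the care team if the trend suggests fluid retention."""
    if len(daily_weights) < 2:
        return None
    latest_day, latest_kg = daily_weights[-1]
    one_day_gain = latest_kg - daily_weights[-2][1]
    three_day_gain = 0.0
    for day, kg in reversed(daily_weights[:-1]):
        if (latest_day - day).days >= 3:
            three_day_gain = latest_kg - kg
            break
    if one_day_gain >= GAIN_1DAY_KG or three_day_gain >= GAIN_3DAY_KG:
        return f"Weight up {one_day_gain:.1f} kg since last reading; route to PCP for diuretic review."
    return None

print(flag_weight_trend([(date(2023, 3, 1), 81.0), (date(2023, 3, 2), 81.4), (date(2023, 3, 4), 83.2)]))
```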

While these tools are fascinating, they are not yet ready to be put into practice. They necessitate improvement, investigation, and validation. Privacy, malpractice, and overtreatment must all be carefully weighed and addressed. Without consideration of appropriate payment models, AI will be poorly positioned to influence healthcare delivery. Figure 1 provides a guide for Family Medicine providers on how to become more involved in leveraging this technology.

Figure 1.

Steps for family physicians to get involved in artificial intelligence research.

Family physicians pride themselves on the personal relationships they form with their patients. Computers will outdo physicians in the performance of certain complicated tasks. Nevertheless, creating and upholding strong relationships, recognizing and handling their intricacies, and eliciting and integrating preferences into medical decisions are difficult for technology to replicate. Humans and computers must complement each other to enable physicians to spend more time with their patients.


2. Methodology

Internet searches using Google Scholar and PubMed were performed with the key words “Artificial Intelligence, Machine Learning, Technology, Primary Care, Family Medicine, screening, management, and treatment.” Additional citations were acquired by cross-referencing the main studies. Following the literature review, all relevant contributors to the manuscript created an outline that identified the historical context of AI technology in relation to the discipline of Primary Care and Family Medicine, the clinical implications of AI technology for Primary Care, and the role of AI technology in Graduate Medical Education as the key pieces to include in this chapter. Studies not pertaining to the aforementioned themes were excluded. The manuscript portrays various applications of Artificial Intelligence instruments presently in operation or in development. Article titles and abstracts were screened by one assessor (T.W.). Full manuscripts were reviewed for inclusion by two authors.


3. AI and clinical applications for family physicians

The applicability of AI for Family Physicians is vast. A scoping review condensed the targeted health conditions that could be aided through diagnostic or treatment decision support into the following groups: cardiovascular; psychiatric, neurologic, and cognitive; diabetic, metabolic, and chronic; skin; musculoskeletal; cancer; pulmonary; gastrointestinal; general; and other conditions [10]. A comprehensive breakdown of the use of AI for specific diseases in Family Medicine is beyond the scope of this chapter. Below, the authors provide a systems-based outline of various AI-related diagnostic and therapeutic modalities for family practitioners to be aware of, along with more focused sub-sections on diabetes screening, management, and treatment and on breast cancer screening, given their relevance to PCPs in the outpatient setting and to give the reader a better understanding of the depth of AI research in helping clinicians optimize patient outcomes.

3.1 Neurology

Alzheimer’s disease (AD) may account for up to 80% of dementia cases and imposes a huge cost on society both economically and socially [11]. Although much advancement has been made in understanding the underlying pathophysiological mechanisms of this disease and in targeted approaches to therapy, a significant barrier to any breakthrough is the identification of patients who will develop AD so that they can enroll in clinical trials at the appropriate time to examine the effectiveness of potential disease-modifying treatment modalities. To address this, a study used machine learning (ML) to predict progression to dementia within two years using data from amyloid positron emission tomography scans [12]. The high accuracy that ML demonstrated relative to standard algorithms holds promise for better selection of individuals for AD clinical trials, with the hope that this will optimize trial design and lead to the advancement of targeted disease therapies.

3.2 Head, eyes, ears, nose, throat (HEENT)

Glaucoma, in which increased intraocular pressure results in optic nerve damage and ultimately blindness, can be diagnosed with the aid of AI. Using a neural network, retinal images have been mined to aid in the diagnosis of glaucoma with up to 96% accuracy [13].

Diabetic retinopathy (DR), a common microvascular complication of diabetes, is also a significant source of irreparable loss of sight [14]. This disease and the subsequent loss of vision can be averted, and assorted therapeutic options are available. Despite calls for routine screening for DR, comprehensive strategies face difficulty with implementation [15]. Implementation issues include inadequate numbers of trained personnel, lack of resources, and inability to cope with an increased disease burden. To address this concern, a deep learning-based algorithm was created and validated for the detection of DR [16]. Its readings of retinal images were compared with those of trained ophthalmologists. Results showed high accuracy relative to current standards of care, which may lead to more efficient and accessible screening for DR.
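As a hedged illustration of what such a system looks like in code, the sketch below defines a tiny binary image classifier in PyTorch of the general kind used for referable diabetic retinopathy screening. The architecture, image size, and random inputs are placeholders; the published algorithms are far deeper networks trained on large, expert-labeled fundus photograph datasets.

```python
import torch
import torch.nn as nn

# Toy convolutional classifier: fundus photograph in, probability of referable DR out.
class RetinaScreenSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # single logit: "referable DR"

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = RetinaScreenSketch()
fundus_batch = torch.randn(4, 3, 128, 128)   # stand-in for four preprocessed fundus photos
probs = torch.sigmoid(model(fundus_batch))   # per-image probability of referable DR
print(probs.squeeze(1))
```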

3.3 Cardiovascular

Cardiovascular disease (CVD), the foremost cause of illness and death globally, demands extensive preventative measures to curtail risk factors for disease development, centered on controlling hypertension, lowering cholesterol, smoking cessation, and optimizing diabetes management. Risk for the development of CVD is mainly predicted using validated instruments that incorporate age and other risk factors [17, 18, 19, 20]. Nevertheless, many people at risk for the development of CVD cannot be identified with these tools. What is more, approximately 50% of myocardial infarctions and strokes occur in people who do not meet screening criteria and are thus considered low risk [21]. Fortunately, machine learning provides a chance to improve precision by taking advantage of multifaceted connections among risk factors. For example, in a prospective cohort study, machine learning correctly identified additional individuals who went on to develop CVD compared with a standard set of rules [22]. These results show that ML may identify more individuals who might benefit from preventive therapy and help others avoid unnecessary treatment.
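To show what "taking advantage of multifaceted connections among risk factors" can mean in practice, the sketch below fits a flexible model (gradient boosting) and a plain logistic baseline on synthetic data whose outcome depends on an interaction between age and diabetes. Every variable, coefficient, and data point is invented for illustration; nothing here reproduces the cited study or any validated risk equation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Synthetic stand-ins for routine clinical data: age, systolic BP, cholesterol, smoker, diabetes, BMI.
X = np.column_stack([
    rng.normal(55, 10, n), rng.normal(130, 18, n), rng.normal(5.2, 1.0, n),
    rng.integers(0, 2, n), rng.integers(0, 2, n), rng.normal(27, 4, n),
])
# Outcome depends on an age-by-diabetes interaction, the kind of structure ML can exploit.
risk = 0.04 * (X[:, 0] - 55) + 0.02 * (X[:, 1] - 130) + 0.5 * X[:, 3] + 0.7 * X[:, 4] \
       + 0.05 * (X[:, 0] - 55) * X[:, 4]
y = (rng.random(n) < 1 / (1 + np.exp(-(risk - 1.5)))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # stands in for a fixed risk equation
ml_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("baseline AUC:", round(roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1]), 3))
print("ML AUC:", round(roc_auc_score(y_te, ml_model.predict_proba(X_te)[:, 1]), 3))
```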

3.4 Gastrointestinal (GI)

Gastro-esophageal reflux disease (GERD) is the presence of esophageal mucosal interruptions or of reflux-induced symptoms that significantly impair quality of life [23]. Symptom evaluation and assessment is vital for disease management. Sadly, symptom evaluation and the effects of reflux currently correlate poorly with disease severity. Furthermore, given the ambiguity of these relationships, no single diagnostic tool is fully reliable. A retrospective study of 150 patients compared AI, in the form of an artificial neural network (ANN) built on 45 clinical variables, against the current standards of esophagoscopy and pH-metry. The use of the ANN to make a diagnosis of GERD demonstrated superior accuracy [24]. Although this work is still in the preliminary stages, it shows promise in delivering a non-invasive approach to the diagnosis of GERD.
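The sketch below mirrors the shape of that approach: a small multilayer perceptron trained on a table of 45 coded clinical variables per patient with a binary GERD label. The data are random stand-ins and the network is deliberately tiny; the point is only the workflow of clinical variables in, diagnostic probability out.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Hypothetical stand-in: 150 patients x 45 clinical variables (symptoms, demographics, lifestyle).
X = rng.normal(size=(150, 45))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=150) > 0).astype(int)  # synthetic GERD label

ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1)
scores = cross_val_score(ann, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", round(scores.mean(), 2))
```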

3.5 Endocrine

Diabetes affects millions of people around the world and accounts for approximately 12% of global health expenditures, yet one in two persons remains unaware they have the disease and is sub-optimally treated [25]. Early intensive intervention may prevent onset and decelerate the development of retinopathy, nephropathy, neuropathy, and other complications associated with diabetes [26]. Timely, crucial health data are vital for the patient and provider to make well-educated decisions regarding diabetes care, and AI may provide timely information concerning a diabetic patient’s health. A review of the literature shows that the relationship between AI and diabetes management can be grouped into four categories: automated retinal screening (discussed above), clinical decision support, predictive population risk stratification, and patient self-management support tools [27].

AI-driven predictive modeling proactively recognizes diabetics at the greatest risk for preventable complications that lead to avoidable emergency department visits, hospital stays, and readmissions [28]. AI can mine varied patient information to classify and describe diabetes populations [29]. In addition, patients with risk factors for diabetic comorbid conditions may be discovered [30, 31, 32]. AI may pinpoint individuals who may benefit from specific diabetes disease management programs [32]. On a molecular level, it may aid in the discovery of proteins and genes linked with diabetes [33, 34].

AI can power practice decision-support instruments that help healthcare professionals tailor diabetes treatments, boosting compliance and maximizing outcomes on a population level [35]. AI-powered devices may even diagnose diabetes noninvasively [36]. Furthermore, diabetic neuropathy and diabetic wounds may be more accurately measured and treated [37, 38].

There is ongoing research on the Closed Loop System, an artificial pancreas that combines continuous glucose measurement with an algorithm-driven insulin pump to enhance diabetes self-management and lower hypoglycemic episodes [39]. A meta-review of 12 trials compared patient acceptance of Artificial Pancreas Devices (APDs) versus standard of care. Based on the results, the authors concluded that the latest APDs were safe and demonstrated high patient satisfaction [40].

Further investigations are underway into the potential of diabetes apps to support people in easily tracking and reviewing their data and to convey tailored, evidence-based insights that diabetic patients may apply every day. For example, comprehensive dietary databases can describe nutritional content once a barcode is scanned on a smart device, explore restaurant options and common food items, or distinguish foodstuffs [41]. Machine learning and activity recognition can detect and quantify complex events in the daily lives of diabetic patients and provide assistance so they are better informed about the decisions they make [42]. An AI-based smartphone camera may speed wound recovery, avoid unnecessary travel, and lower medical expenditures [43]. Pregnant women with gestational diabetes have expressed approval of AI-supplemented telemedicine appointments that expedite clinical care by combining AI-interpreted evidence-based protocols, information obtained from EMRs, blood sugars, blood pressure readings, and movement sensors [44].

In 2018 Medtronic’s Guardian Connect, the first AI-powered continuous glucose monitoring (CGM) system, was approved by the FDA for diabetic patients between the ages of 14 and 75 years. Its predictive system alerts patients to substantial swings in glycemia up to an hour before the critical event happens. The system has demonstrated accuracy and has been shown to flag around 98.5% of hypoglycemic events; consequently, patients can potentially act in time to stabilize their blood sugar [27]. The records can be shared with and supervised by all relevant stakeholders involved in the patient’s care.
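The sketch below conveys the flavor of predictive alerting with the simplest possible model: a linear trend fitted to recent readings and extrapolated one hour ahead. The hypoglycemia threshold, horizon, and sample readings are illustrative assumptions; commercial CGM systems use far more sophisticated, proprietary prediction models.

```python
import numpy as np

HYPO_THRESHOLD_MGDL = 70   # illustrative alert threshold
HORIZON_MIN = 60           # predict one hour ahead, as described above

def predict_glucose(times_min, glucose_mgdl, horizon=HORIZON_MIN):
    """Fit a straight line to recent CGM readings and extrapolate `horizon` minutes ahead."""
    slope, intercept = np.polyfit(times_min, glucose_mgdl, deg=1)
    return slope * (times_min[-1] + horizon) + intercept

# Example: readings every 5 minutes over the last half hour, trending downward.
times = np.arange(0, 35, 5)
readings = np.array([125, 121, 118, 114, 111, 107, 104])
forecast = predict_glucose(times, readings)
if forecast < HYPO_THRESHOLD_MGDL:
    print(f"Predicted glucose {forecast:.0f} mg/dL in {HORIZON_MIN} min: warn the patient now.")
```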

Several questions persist before technological advancements in diabetes care permeate the health care sector. Practical interoperability, the capacity of two or more systems to exchange and use data, remains an obstacle [45]. Cost, overhead, continued expenditures, buy-in from healthcare providers and relevant participants, and the various definitions and complexity surrounding the term Meaningful Use are all additional barriers to implementation [46]. The ability to replicate outcomes from previous studies also remains unclear. For various reasons, proprietary data such as source code may be difficult to share. For example, a survey of approximately 400 algorithms presented in papers at an artificial intelligence conference revealed that around 6% of presenters shared their organization’s code, a third distributed the data used to test their algorithms, and half provided an abridged version of the source code (pseudo-code) [27]. Even when such data can be obtained, it remains to be seen whether the results will be the same. What is more, machine learning, which learns from previous examples, may be influenced by the type of data, such as speech patterns, on which it was trained.

Nevertheless, diabetes remains an attractive target for AI research that applies industrial methods to the various complexities surrounding this disease. Many technological products have obtained FDA approval, are on the market, and have shown promising results. More innovative approaches are being created to challenge the status quo of current diabetic care by enhancing the reliability, effectiveness, operability, and simplicity of these products and the satisfaction of patients, families, and providers in applying them to diabetes management. Ideally, the right mix of monitoring and appropriate feedback will help identify telling patterns and lead to customized insights that boost patient and provider commitment, confidence, and success in optimizing blood sugar control.

3.6 Hematology/oncology

The utilization of artificial intelligence (AI) in cancer screening is becoming increasingly evident in recent studies across multiple types of cancer, including lung, breast, colorectal, and cervical [47, 48, 49, 50]. Given the breadth of research across multiple disciplines, the focus of this review is on the evidence-based application of AI in breast cancer screening. This research can be categorized into two applications: risk assessment and image analysis.

Primary screening for breast cancer with conventional mammography, as recommended by United States Preventive Services Task Force (USPSTF) guidelines, has resulted in a reduction of breast cancer mortality across both randomized trials and screening cohort studies [51]. The USPSTF recommendations outline screening every 2 years for women aged 50–74 years, as opposed to an individualized decision to start screening between the ages of 40 and 49 years [51]. In the latter age group, high-risk individuals who would benefit from starting screening at an earlier age include those with a known underlying genetic mutation (such as a BRCA1 or BRCA2 mutation) or a history of chest radiation at an early age [51]. There are several risk prediction models for breast cancer. One example is the Breast Cancer Risk Assessment Tool (BCRAT), which can be used to estimate a patient’s 5-year and lifetime risk of developing invasive breast cancer. It considers a patient’s age, age at menarche, age at first childbirth, number of first-degree relatives with breast cancer, number of previous biopsies, and presence of atypical hyperplasia on a biopsy. Of note, this tool may not be appropriate for assessing risk in patients with certain medical histories, such as a personal history of certain breast cancer types [52]. Considering multiple qualitative and quantitative risk factors can better stratify risk-based screening and maximize the benefit while minimizing the harms of screening [53]. But how can AI advance current tools of risk assessment?

Breast density has been shown to be an independent risk factor for the development of breast cancer [54]. As a result, prediction models have been updated to include this quantitative risk factor, including the Tyrer–Cuzick model, the Breast Cancer Surveillance Consortium Model (BCSCM), and the Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) [52]. In a recent study, the authors created three models to estimate five-year breast cancer risk. One model considered only risk factors. The second model applied deep/machine learning to mammographic images. The third model was a hybrid of the two. These models were then compared to the Tyrer–Cuzick model, a well-known clinical standard that recently incorporated mammographic breast density into its calculation. The hybrid model had the highest accuracy, followed by the deep/machine learning model, while the Tyrer–Cuzick model had the lowest. These results indicate that a model that considers both traditional risk factors and mammographic data can improve current practices of assessing risk. Future research can aim to identify the imaging features and patterns that are most useful for stratifying risk [52].
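The sketch below illustrates one simple way such a hybrid can be assembled, late fusion: traditional risk factors and image-derived features (random stand-ins here for a CNN embedding of the mammogram) are concatenated and fed to a single classifier, whose discrimination is compared with a risk-factors-only model. The features, labels, and effect sizes are all synthetic; this is not the cited model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
clinical = rng.normal(size=(n, 6))       # stand-ins for age, age at menarche, family history, etc.
image_embed = rng.normal(size=(n, 32))   # stand-in for a pooled CNN embedding of the mammogram
signal = clinical[:, 0] + 0.8 * image_embed[:, 0] + 0.5 * image_embed[:, 1]
y = (signal + rng.normal(size=n) > 1.0).astype(int)   # synthetic five-year cancer label

hybrid_X = np.hstack([clinical, image_embed])          # late fusion: concatenate both feature sets
c_tr, c_te, h_tr, h_te, y_tr, y_te = train_test_split(clinical, hybrid_X, y, test_size=0.3, random_state=2)

clinical_only = LogisticRegression(max_iter=1000).fit(c_tr, y_tr)
hybrid = LogisticRegression(max_iter=1000).fit(h_tr, y_tr)
print("risk factors only AUC:", round(roc_auc_score(y_te, clinical_only.predict_proba(c_te)[:, 1]), 3))
print("hybrid model AUC:", round(roc_auc_score(y_te, hybrid.predict_proba(h_te)[:, 1]), 3))
```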

Breast density is typically assessed through interpretation of the standard two-view mammogram by a radiologist. A visual estimation of the proportion of glandular and fatty tissue within the breasts is scored and applied to a scale such as the Breast Imaging Reporting and Data System (BI-RADS). The four BI-RADS categories of breast composition according to breast density are: type 1, fatty; type 2, scattered fibroglandular; type 3, heterogeneously dense; and type 4, extremely dense. This subjective quantification of breast density requires training and experience to allow for accurate and reproducible scoring. Even so, there is a degree of inter-reader variation among radiologists that contributes to error [55].

There are three potential approaches to applying AI to mammogram image analysis: as a standalone system, for triage, and as a reader aid [56]. In a simulation performed by McKinney et al., an AI system was able to outperform a group of radiologists in accurately interpreting mammograms [57]. Using deep learning-based AI, Balta et al. found that the breast cancer screening workflow, which typically requires double reading, could be replaced by a single reading. This was achieved by AI-driven identification of normal-appearing screening mammograms, which were then verified by a single human reader [56]. Similarly, in a retrospective study by Dembrower et al., AI was used to triage mammograms into those requiring no further radiologist assessment and those requiring further assessment. The system demonstrated potential for detecting a significant number of cases in which breast cancer was not identified by human readers but was diagnosed later [58]. Rodriguez-Ruiz et al. showed that radiologists interpreting mammograms with the support of an AI computer system performed better at diagnosing breast cancer than without it [59].
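As a toy illustration of the triage idea, the snippet below routes studies by an AI malignancy score using two thresholds. The threshold values and routing labels are arbitrary placeholders; in deployment they would be calibrated against radiologist performance, local prevalence, and an acceptable miss rate.

```python
# Hypothetical cut-points on a 0-1 AI malignancy score; real systems calibrate these carefully.
LOW_THRESHOLD = 0.02    # below this: likely normal, route to a single human reader
HIGH_THRESHOLD = 0.80   # above this: flag for prioritized radiologist review

def triage(ai_score: float) -> str:
    if ai_score < LOW_THRESHOLD:
        return "single human read (likely normal)"
    if ai_score > HIGH_THRESHOLD:
        return "prioritized radiologist review"
    return "standard double reading"

for score in (0.01, 0.35, 0.92):
    print(f"score {score:.2f} -> {triage(score)}")
```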

The application of AI in mammogram-based breast cancer screening is by no means limited to the approaches previously discussed, which include more precise risk stratification, increased accuracy in detecting breast cancer during image analysis, and a potential reduction in the workload burden on breast imaging radiologists. Given the evidence from retrospective studies and simulations, AI has the potential to improve current breast cancer screening practices. As studies continue to explore its application in the various aspects of cancer screening, it is likely that AI will become a more prevalent tool in medicine and, hopefully, lead to better patient outcomes.


4. AI and administrative capabilities

The ambulatory clinic is an indispensable feature of patient-centered medical care. Today, many different stakeholders are involved in ensuring the patient experience is enhanced and clinical outcomes are optimized. Consequently, ensuring a clinic runs smoothly has proven to be labor intensive. Numerous obstacles to realizing a well-organized workflow for pre-visit planning (PVP) exist, including workforce shortages and limitations on time. The vast majority of time spent administering care is sandwiched between appointments. PVP improves the likelihood that an appointment will flow smoothly and require less time, and it fosters a more sophisticated and fulfilling patient-provider experience. AI tools may enhance PVP [60, 61]. PVP relies on discrete information built on predictable timetables and patient-provider messages, which suits modern EMRs and AI well. Criticisms of AI implementation include absence of needs assessments, minimal real-world applicability, and ignored complexity of healthcare with subsequent misallocation of investments [10, 62].

Clinicians are interested in automated PVP if it affords them more time with patients and saves them time on administrative duties. Technology already supports clinician work through advanced solutions such as chat bots that monitor signs and symptoms, rudimentary functions like electronic sticky notes in the EHR, and updated best-practice reminders for outstanding or upcoming health maintenance. Current technological advancements include algorithms that pool healthcare data to produce a summary of care gaps [63, 64, 65], automated patient questionnaires sent through a secure electronic portal [66, 67, 68, 69, 70], and programmed schemes that inform providers of requisite activities [65, 71]. The rise of value-based care, along with telemedicine driven by the recent Covid-19 pandemic, has moved treatment of patients into the virtual space. This means that attention will need to be further allocated to inter-visit activities [72].
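A hedged sketch of the care-gap idea is shown below: a handful of rules applied to structured EHR-style fields to produce a pre-visit checklist. The field names, age ranges, and screening intervals are illustrative placeholders rather than clinical guidance; a production system would draw on richer data and validated guideline logic.

```python
from datetime import date

def years_since(d, today):
    """Years since a date; treat a missing date as long overdue."""
    return 99 if d is None else (today - d).days / 365.25

def previsit_care_gaps(patient, today=None):
    """Return a simple pre-visit checklist from structured EHR-style fields."""
    today = today or date.today()
    gaps = []
    age = patient["age"]
    if 50 <= age <= 74 and years_since(patient.get("last_colon_screen"), today) >= 10:
        gaps.append("Colorectal cancer screening due")
    if patient["sex"] == "F" and 50 <= age <= 74 and years_since(patient.get("last_mammogram"), today) >= 2:
        gaps.append("Screening mammogram due")
    if patient.get("diabetes") and years_since(patient.get("last_hba1c"), today) * 12 >= 6:
        gaps.append("HbA1c overdue")
    return gaps

patient = {"age": 62, "sex": "F", "diabetes": True,
           "last_mammogram": date(2020, 5, 1), "last_hba1c": date(2022, 1, 10)}
print(previsit_care_gaps(patient, today=date(2023, 4, 1)))
```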

With the appearance of AI, particular aspects of PVP may be better supported. Unfortunately, there remains a dearth of literature demonstrating the objective value of this technology. PVP and its present condition must be further investigated, hindrances to performance examined, and areas for potential automation identified. Technology and AI clearly exhibit an ability to enhance the principally human process of PVP; however, unless the structures surrounding value-based care are further refined, the undervaluation of, and subsequent lack of compensation for, PVP will remain a significant obstruction. Specifically, challenges such as ease of use, confidentiality, safekeeping of patient information, EMR interoperability, and provider workflow need to be addressed [72].


5. AI and education in family medicine

The final section of this chapter concerns the role of AI in graduate medical education (GME). Family Medicine residency in the United States is three years, during which residents must develop so that, upon completion of their training, they can feel confident practicing medicine independently. Although there are many issues that could be addressed concerning GME and AI, the two the authors focus on in this chapter are motivational interviewing (MI) and shared decision making (SDM).

5.1 AI and motivational interviewing

Motivational Interviewing (MI) is a scientifically validated, short-form interventional style that has been shown to positively affect change in chronic disease management. MI is a driving force towards constructive, healthy, patient-focused behavior change. MI concentrates on the aims, concerns, and viewpoint of the patient. Unfortunately, this process often contradicts the directive, instructional, and educational role healthcare providers have traditionally undertaken [73, 74]. Providers must therefore unlearn these behaviors to permit a more patient-oriented encounter. Critical skills to master include talking less, listening more, and reflecting on the patient’s wishes. Open-ended questions help facilitate this rapport. Immediate feedback greatly enhances skill development [75, 76]. Unfortunately, for a variety of reasons, insufficient feedback is often given during the early stages of instruction. Consequently, due to inadequate and unproductive training, MI skills remain underdeveloped.

AI may help providers apply MI by delivering timely, well-organized feedback in a time- and resource-constrained environment. Real-time Assessment of Dialog in Motivational Interviewing (ReadMI) uses natural language processing to deliver specific motivational interviewing metrics that help pinpoint areas for improvement during the patient’s visit [77]. The benefits of ReadMI include cost-effectiveness, portability, and immediate evaluation and breakdown of the MI process, and its components include deep-learning-based speech recognition, NLP, AI-human interaction, and mobile cloud-based computing. Figures 2 and 3 demonstrate the architecture and the encounter process of ReadMI, respectively. What is more, the team involved in the patient interview may review past cases and correlate the trainee’s behavior and speech with the AI scores. These sessions then generate new records that enable further fine-tuning of the program and its natural language-based performance coding. Currently, ReadMI constructs comprehensive transcriptions of the dialog with greater than 92% accuracy, measures the amount of time the provider speaks versus the patient with above 95% accuracy, and determines the proportion of open-ended versus close-ended questions with over 92% accuracy [77].

Figure 2.

Framework for ReadMI artificial intelligence.

Figure 3.

General flow of the motivational interview process with ReadMI.

ReadMI has been shown to be as valid and reliable as human raters when classifying the kinds of questions and statements that trainees produce while performing motivational interviewing. Physicians who talk far more than the patient are unlikely to be using high-level motivational interviewing techniques. These early results show that AI can instantly produce reliable scores for relevant stakeholders to enhance the educational experience. Specifically, if a learner talks too much and does not ask enough open-ended questions, the educator can use this information to promptly fine-tune the interview process. Given the limitations on time, stepwise proficiency improvement through AI-based measures is invaluable. Moreover, less subjective feedback directed towards the learner, as well as less onerous video review sessions, will advance medical education. Finally, as clinicians become better decision-support agents, they may improve healthcare quality by aiding patients in living healthier lives.
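To ground the kinds of metrics ReadMI reports, the sketch below computes a provider talk share and a naive open- versus closed-ended question count from a turn-labeled transcript. The keyword heuristics and the sample dialog are invented for illustration; the actual system relies on speech recognition and trained NLP models rather than keyword matching [77].

```python
import re

# Naive question heuristics for illustration only.
OPEN_OPENERS = ("what", "how", "why", "tell me", "describe")
CLOSED_OPENERS = ("do ", "did ", "are ", "is ", "have ", "can ", "will ")

def mi_metrics(transcript):
    """transcript: list of (speaker, utterance) pairs, speaker in {'provider', 'patient'}."""
    words = {"provider": 0, "patient": 0}
    open_q = closed_q = 0
    for speaker, text in transcript:
        words[speaker] += len(text.split())
        if speaker == "provider":
            for sentence in re.split(r"[.?!]", text):
                s = sentence.strip().lower()
                if not s:
                    continue
                if s.startswith(OPEN_OPENERS):
                    open_q += 1
                elif s.startswith(CLOSED_OPENERS):
                    closed_q += 1
    provider_share = words["provider"] / max(1, sum(words.values()))
    return {"provider_talk_share": round(provider_share, 2),
            "open_questions": open_q, "closed_questions": closed_q}

demo = [("provider", "What brings you in today?"),
        ("patient", "I want to quit smoking but it has been hard."),
        ("provider", "Tell me more about what makes it hard. Do you smoke at work?")]
print(mi_metrics(demo))
```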

5.2 AI and shared decision making

Shared decision making (SDM) is an approach in which the patient and provider work in concert to formulate evidence-based medical choices that align with patient values [78]. The final choice is thus based on what matters most to the patient, with medical data as an adjunct [79]. Unfortunately, real-world uptake is lacking [80, 81, 82]. Constraints on time, difficulties with generalizability, and clinical circumstances are all obstacles to SDM [83]. AI may advance SDM through better-informed decision-making, which allows providers to concentrate their energy on the patient [84]. Furthermore, AI may discover correlations missed by individuals participating in clinical assessments [85]. Nevertheless, bioethical concerns for AI and health decision-making remain [84]. Moreover, the patient-centeredness of AI-based decision aids remains largely unexplored [86]. Lastly, how AI can best facilitate shared decision making remains unknown. A three-step scheme has been proposed [87, 88] and is depicted in Figure 4.

Figure 4.

3-step model of shared decision making (SDM) for clinical practice.

A scoping review described the range of AI systems applied to SDM [89]. Sadly, few studies concerned primary care. Of the included studies, three devised AI interventions for primary care involving the support of chronic conditions such as diabetes and stroke [90, 91, 92]. These studies focused on the decision-making step of SDM, either by running models to estimate clinically significant outcomes or by offering treatment advice. Wang et al. aimed to tailor knowledge about and choices among medications for type 2 diabetics [92]. SDM is essential here given the complexity of diabetes care. In this report, information from an EHR was compiled into decision-support tools to aid clinicians and enable patients to better comprehend their health. Data from over 2500 patients with type 2 diabetes, covering 77 features and eight different medications, were amassed to generate a reference prototype. The AI model had an accuracy of 0.76. The records pertained only to hospitalized individuals, and the outcome of medication use was not accounted for. Still, the intervention exhibited practicability and adaptability, meaning that if the scheme did not remain current, it could be fine-tuned without any impact on the interoperability of the hospital EHR. Moreover, the program was created with the patient in mind, which allowed key stakeholders to evaluate an individual’s condition more systematically and tailor discussions in an up-to-date manner.
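The sketch below mirrors the shape of that medication-choice model: a multi-class classifier mapping EHR-derived features to one of eight medication options, sized to roughly match the study (2500 patients, 77 features). The data, model choice, and resulting accuracy are placeholders, and in a decision aid the per-medication probabilities would be one input to the patient-clinician conversation rather than the final answer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_patients, n_features, n_meds = 2500, 77, 8
X = rng.normal(size=(n_patients, n_features))   # synthetic EHR-derived features
y = X[:, :n_meds].argmax(axis=1)                # contrived rule tying the "best" medication to features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
clf = RandomForestClassifier(n_estimators=200, random_state=3).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 2))

# Per-medication probabilities for one patient, surfaced as one input to the SDM conversation.
probs = clf.predict_proba(X_te[:1])[0]
print({f"medication_{i}": round(float(p), 2) for i, p in enumerate(probs)})
```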

Kökciyan et al. built “CONSULT,” a decision-support agent that helps stroke survivors with treatment compliance and self-care in partnership with a practitioner [90, 91]. It was generated through an argumentation framework, a full description of which is beyond the scope of this chapter; in brief, health sensors, EHR information, and clinical standards were used as inputs, and proposals with written explanations for the suggested choices were provided as outputs. The program was implemented as a mobile Android app. Six healthy volunteers were enrolled for one week, used various system features, and were asked to gather information from wellness sensors and input data. The CONSULT system supported the decision-making step of SDM by showing an up-to-date interpretation of the clinical picture via individualized measurements taken from the health record and wireless sensor input. Textual descriptions of the automated findings then supplemented the medical suggestions offered. Overall, current, pertinent, concise information plus medication options and proposals helped support the patient-provider decision-making moment.

Overall, the relationship between AI and SDM is young. More research is needed to examine, apply, and gauge the impact of AI on SDM, standardize its use, and evaluate its impact on choices that affect a population. Importantly, any AI intervention must be human-centered. Lastly, SDM is a stepwise process; therefore, research must demonstrate how AI interventions reinforce the therapeutic relationship.


6. Discussion

The authors identified and elaborated on various research studies concerning AI, Family Medicine, and Primary Care and separated this manuscript into three predominant categories. First, on the subject of the history of technology adoption in Family Medicine, an overwhelming trend when contextualizing this issue is the lack of involvement of Family Medicine stakeholders in the literature [10]. Second, concerning clinical applicability, there is a wide variety of functions that AI could perform for PCPs, and clinical studies have repeatedly shown AI to strengthen diagnosis or management of chronic diseases. Third, concerning graduate medical education, tools such as ReadMI and AI-supported shared decision making show promise but remain early in their development. Overall, the results show that Artificial Intelligence remains at an initial phase of applicability; much remains to be done to measure AI’s influence on the primary care system.


7. Future research

To conclude this section of the chapter, the authors shed some light on novel research and funding to expand the Family Medicine footprint in the AI realm. Specifically, in 2022 the American Board of Family Medicine (ABFM) established a funding program to support Family Medicine departments in hiring Artificial Intelligence/Machine Learning (AI/ML)-focused research faculty. The initial cohort of funded institutions includes the University of Houston, University of Pittsburgh, University of California, San Diego, and University of Texas, San Antonio. Each institution is pursuing its own focused work with the shared general aims of establishing a sustained AI/ML research presence, securing further external funding, and producing peer-reviewed research publications. This program also includes regular convening of the research teams, hosted by the Stanford Healthcare AI Applied Research Team, to share progress and information.


8. Conclusion

AI in healthcare has arrived. Nevertheless, many Family Physicians are unaware of its uses and how it will impact their practice. Consequently, Family Medicine remains constrained by AI’s limitations, and the ethical implications remain unclear. This chapter is intended to act as a guide for front-line health care workers such as Family Physicians. Primary care is essential to the well-being of a population and is unmatched in its ability to interconnect the various parts of a healthcare system. The profound bonds Family Physicians create with both their patients and their communities make this discipline uniquely suited to steer the healthcare AI revolution. To do so, it is vital that Family Physicians collaborate with engineers to guarantee that AI use is pertinent and patient-centered, improves healthcare AI implementations, and proceeds inclusively and ethically so that AI optimizes outcomes and reduces inequities.

References

  1. Turing AM. Computing machinery and intelligence. In: Parsing the Turing Test. Dordrecht, Netherlands: Springer; 2009. pp. 23-65
  2. Newton W. The American Board of Family Medicine: what’s next? American Board of Family Medicine. 2019;32:282-284
  3. Liaw W, Kakadiaris I. Artificial intelligence and family medicine: Better together. Family Medicine. 2020;52(1):8-10
  4. DeVoe JE et al. The ADVANCE network: Accelerating data value across a national community health center network. Journal of the American Medical Informatics Association. 2014;21(4):591-595
  5. Wilson B, Hoffman J, Morgenstern J. Predictive inequity in object detection. arXiv preprint arXiv:1902.11097. 2019
  6. Green LA et al. The Ecology of Medical Care Revisited. Waltham, Massachusetts: Mass Medical Soc; 2001. pp. 2021-2025
  7. Shaughnessy AF, Slawson DC, Bennett JH. Becoming an information master: A guidebook to the medical information jungle. Journal of Family Practice. 1994;39(5):489-500
  8. Deliberato RO, Celi LA, Stone DJ. Clinical note creation, binning, and artificial intelligence. JMIR Medical Informatics. 2017;5(3):e7627
  9. Nakamura N, Koga T, Iseki H. A meta-analysis of remote patient monitoring for chronic heart failure patients. Journal of Telemedicine and Telecare. 2014;20(1):11-17
  10. Kueper JK et al. Artificial intelligence and primary care research: A scoping review. Annals of Family Medicine. 2020;18(3):250-258
  11. Pouryamout L et al. Economic evaluation of treatment options in patients with Alzheimer’s disease: A systematic review of cost-effectiveness analyses. Drugs. 2012;72:789-802
  12. Mathotaarachchi S et al. Identifying incipient dementia individuals using machine learning and amyloid imaging. Neurobiology of Aging. 2017;59:80-90
  13. Samanta S et al. Haralick features based automated glaucoma classification using back propagation neural network. In: Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA), 2014. Bhubaneswar, Odissa, India: Springer; 2015
  14. Yau JW et al. Global prevalence and major risk factors of diabetic retinopathy. Diabetes Care. 2012;35(3):556-564
  15. Wang LZ et al. Availability and variability in guidelines on diabetic retinopathy screening in Asian countries. British Journal of Ophthalmology. 2017;101(10):1352-1360
  16. Li Z et al. An automated grading system for detection of vision-threatening referable diabetic retinopathy on the basis of color fundus photographs. Diabetes Care. 2018;41(12):2509-2516
  17. Goff DC Jr et al. ACC/AHA guideline on the assessment of cardiovascular risk: A report of the American College of Cardiology/American Heart Association task force on practice guidelines. Circulation. 2013, 2014;129(25_suppl_2):S49-S73
  18. Hippisley-Cox J et al. Predicting cardiovascular risk in England and Wales: Prospective derivation and validation of QRISK2. BMJ. 2008;336(7659):1475-1482
  19. D’Agostino RB Sr et al. General cardiovascular risk profile for use in primary care: The Framingham heart study. Circulation. 2008;117(6):743-753
  20. Ridker PM et al. Development and validation of improved algorithms for the assessment of global cardiovascular risk in women: The Reynolds risk score. JAMA. 2007;297(6):611-619
  21. Ridker PM et al. Rosuvastatin to prevent vascular events in men and women with elevated C-reactive protein. New England Journal of Medicine. 2008;359(21):2195-2207
  22. Weng SF et al. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS One. 2017;12(4):e0174944
  23. Dent J et al. An evidence-based appraisal of reflux disease management—The Genval workshop report. Gut. 1998;44(suppl. 2):S1-S16
  24. Pace F et al. Artificial neural networks are able to recognize gastro-oesophageal reflux disease patients solely on the basis of clinical data. European Journal of Gastroenterology & Hepatology. 2005;17(6):605-610
  25. International Diabetes Federation. IDF Diabetes Atlas. 7th ed. Vol. 33. Brussels, Belgium: International Diabetes Federation; 2015. p. 2
  26. Diabetes Control and Complications Trial Research Group. The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. New England Journal of Medicine. 1993;329(14):977-986
  27. Dankwa-Mullan I et al. Transforming diabetes care through artificial intelligence: The future is here. Population Health Management. 2019;22(3):229-242
  28. Han L et al. Rule extraction from support vector machines using ensemble learning approach: An application for diagnosis of diabetes. IEEE Journal of Biomedical and Health Informatics. 2014;19(2):728-734
  29. Wei W-Q et al. A high throughput semantic concept frequency based approach for patient identification: A case study using type 2 diabetes mellitus clinical notes. In: AMIA Annual Symposium Proceedings. Washington, DC: American Medical Informatics Association; 2010
  30. Corey KE et al. Development and validation of an algorithm to identify nonalcoholic fatty liver disease in the electronic medical record. Digestive Diseases and Sciences. 2016;61:913-919
  31. Neves J et al. A soft computing approach to kidney diseases evaluation. Journal of Medical Systems. 2015;39:1-9
  32. Rau H-H et al. Development of a web-based liver cancer prediction model for type II diabetes patients by using an artificial neural network. Computer Methods and Programs in Biomedicine. 2016;125:58-65
  33. Vyas R et al. Building and analysis of protein-protein interactions related to diabetes mellitus using support vector machine, biomedical text mining and network analysis. Computational Biology and Chemistry. 2016;65:37-44
  34. López B et al. Single nucleotide polymorphism relevance learning with random forests for type 2 diabetes risk prediction. Artificial Intelligence in Medicine. 2018;85:43-49
  35. Lo-Ciganic W-H et al. Using machine learning to examine medication adherence thresholds and risk of hospitalization. Medical Care. 2015;53(8):720
  36. Shu T, Zhang B, Tang YY. An extensive analysis of various texture feature extractors to detect diabetes mellitus using facial specific regions. Computers in Biology and Medicine. 2017;83:69-83
  37. Katigari MR et al. Fuzzy expert system for diagnosing diabetic neuropathy. World Journal of Diabetes. 2017;8(2):80
  38. Wang L et al. Area determination of diabetic foot ulcer images using a cascaded two-stage SVM-based classification. IEEE Transactions on Biomedical Engineering. 2016;64(9):2098-2109
  39. DeJournett L, DeJournett J. In silico testing of an artificial-intelligence-based artificial pancreas designed for use in the intensive care unit setting. Journal of Diabetes Science and Technology. 2016;10(6):1360-1371
  40. Thabit H et al. Home use of an artificial beta cell in type 1 diabetes. New England Journal of Medicine. 2015;373(22):2129-2140
  41. Zhang W et al. “Snap-n-eat” food recognition and nutrition estimation on a smartphone. Journal of Diabetes Science and Technology. 2015;9(3):525-533
  42. Cvetković B et al. Activity recognition for diabetic patients using a smartphone. Journal of Medical Systems. 2016;40:1-8
  43. Wang L et al. Smartphone-based wound assessment system for patients with diabetes. IEEE Transactions on Biomedical Engineering. 2014;62(2):477-488
  44. Rigla M et al. Gestational diabetes management using smart mobile telemedicine. Journal of Diabetes Science and Technology. 2018;12(2):260-264
  45. Benson T. Principles of Health Interoperability HL7 and SNOMED. London, UK: Springer Science & Business Media; 2012
  46. Adler-Milstein J et al. Electronic health record adoption in US hospitals: Progress continues, but challenges persist. Health Affairs. 2015;34(12):2174-2180
  47. Espinoza JL, Dong LT. Artificial intelligence tools for refining lung cancer screening. Journal of Clinical Medicine. 2020;9(12):3860
  48. Gräwingholt A. The role of artificial intelligence in breast cancer screening: How can it improve detection? Expert Review of Molecular Diagnostics. 2020;20(12):1161-1162
  49. Mitsala A et al. Artificial intelligence in colorectal cancer screening, diagnosis and treatment. A new era. Current Oncology. 2021;28(3):1581-1607
  50. Xue P, Ng MTA, Qiao Y. The challenges of colposcopy for cervical cancer screening in LMICs and solutions by artificial intelligence. BMC Medicine. 2020;18:1-7
  51. Nelson HD et al. Screening for Breast Cancer: A Systematic Review to Update the 2009 US Preventive Services Task Force Recommendation. Rockville (MD), USA: Agency for Healthcare Research and Quality; 2016
  52. Kim G, Bahl M. Assessing risk of breast cancer: A review of risk prediction models. Journal of Breast Imaging. 2021;3(2):144-155
  53. Houssami N, Kerlikowske K. AI as a new paradigm for risk-based screening for breast cancer. Nature Medicine. 2022;28(1):29-30
  54. Ekpo EU et al. Assessment of interradiologist agreement regarding mammographic breast density classification using the fifth edition of the BI-RADS atlas. American Journal of Roentgenology. 2016;206(5):1119-1123
  55. Martin KE et al. Mammographic density measured with quantitative computer-aided method: Comparison with radiologists’ estimates and BI-RADS categories. Radiology. 2006;240(3):656-665
  56. Freeman K et al. Use of artificial intelligence for image analysis in breast cancer screening programmes: Systematic review of test accuracy. BMJ. 2021;374:n1872
  57. McKinney SM et al. Reply to: Transparency and reproducibility in artificial intelligence. Nature. 2020;586(7829):E17-E18
  58. Dembrower K et al. Effect of artificial intelligence-based triaging of breast cancer screening mammograms on cancer detection and radiologist workload: A retrospective simulation study. The Lancet Digital Health. 2020;2(9):e468-e474
  59. Rodríguez-Ruiz A et al. Detection of breast cancer with mammography: Effect of an artificial intelligence support system. Radiology. 2019;290(2):305-314
  60. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. Journal of General Internal Medicine. 2019;34(8):1626-1630
  61. Esteva A et al. A guide to deep learning in healthcare. Nature Medicine. 2019;25(1):24-29
  62. Smith M et al. From code to bedside: Implementing artificial intelligence using quality improvement methods. Journal of General Internal Medicine. 2021;36(4):1061-1066
  63. Wilkinson C, Champion JD, Sabharwal K. Promoting preventive health screening through the use of a clinical reminder tool: An accountable care organization quality improvement initiative. Journal for Healthcare Quality. 2013;35(5):7-19
  64. Kawamoto K et al. Long-term impact of an electronic health record-enabled, team-based, and scalable population health strategy based on the chronic care model. In: AMIA Annual Symposium Proceedings. Chicago, Illinois: American Medical Informatics Association; 2016
  65. Savarino JR et al. Improving clinical remission rates in pediatric inflammatory bowel disease with previsit planning. BMJ Open Quality. 2016;5(1):u211063.w4361
  66. Bose-Brill S et al. Validation of a novel electronic health record patient portal advance care planning delivery system. Journal of Medical Internet Research. 2018;20(6):e9203
  67. Wald JS et al. Implementing practice-linked pre-visit electronic journals in primary care: Patient and physician use and satisfaction. Journal of the American Medical Informatics Association. 2010;17(5):502-506
  68. Grant RW et al. Pre-visit prioritization for complex patients with diabetes: Randomized trial design and implementation within an integrated health care system. Contemporary Clinical Trials. 2016;47:196-201
  69. Vo MT et al. Prompting patients with poorly controlled diabetes to identify visit priorities before primary care visits: A pragmatic cluster randomized trial. Journal of General Internal Medicine. 2019;34(6):831-838
  70. Howard BJ, Sturner R. Use of an online clinical process support system as an aid to identification and Management of Developmental and Mental Health Problems. Current Developmental Disorders Reports. 2017;4(4):108-117
  71. Contratto E et al. Physician order entry clerical support improves physician satisfaction and productivity. Southern Medical Journal. 2017;110(5):363-368
  72. Lin S, Sattler A, Smith M. Retooling primary care in the COVID-19 era. In: Mayo Clinic Proceedings. Rochester, Minnesota: Elsevier; 2020
  73. Noordman J, van der Weijden T, van Dulmen S. Communication-related behavior change techniques used in face-to-face lifestyle interventions in primary care: A systematic review of the literature. Patient Education and Counseling. 2012;89(2):227-244
  74. Werner JJ et al. Comparing primary care physicians’ smoking cessation counseling techniques to motivational interviewing. Journal of Addiction Medicine. 2013;7(2):139
  75. Vickers NJ. Animal communication: When i’m calling you, will you answer too? Current Biology. 2017;27(14):R713-R715
  76. Barwick MA et al. Training health and mental health professionals in motivational interviewing: A systematic review. Children and Youth Services Review. 2012;34(9):1786-1795
  77. Vasoya MM et al. Read MI: An innovative app to support training in motivational interviewing. Journal of Graduate Medical Education. 2019;11(3):344-346
  78. Charles C, Gafni A, Whelan T. Shared decision-making in the medical encounter: What does it mean? (or it takes at least two to tango). Social Science & Medicine. 1997;44(5):681-692
  79. Barry MJ, Edgman-Levitan S. Shared decision making—The pinnacle patient-centered care. The New England Journal of Medicine. 2012;366(9):780-781
  80. Couët N et al. Assessments of the extent to which health-care providers involve patients in decision making: A systematic review of studies using the OPTION instrument. Health Expectations. 2015;18(4):542-561
  81. Edwards M, Davies M, Edwards A. What are the external influences on information exchange and shared decision-making in healthcare consultations: A meta-synthesis of the literature. Patient Education and Counseling. 2009;75(1):37-52
  82. Holmes-Rovner M et al. Implementing shared decision-making in routine practice: Barriers and opportunities. Health Expectations. 2000;3(3):182-191
  83. Légaré F et al. Barriers and facilitators to implementing shared decision-making in clinical practice: Update of a systematic review of health professionals’ perceptions. Patient Education and Counseling. 2008;73(3):526-535
  84. Triberti S, Durosini I, Pravettoni G. A “third wheel” effect in health decision making involving artificial entities: A psychological perspective. Frontiers in Public Health. 2020;8:117
  85. Braun M et al. Primer on an ethics of AI-based decision support systems in the clinic. Journal of Medical Ethics. 2021;47(12):e3-e3
  86. Hassan N et al. Clinicians’ and patients’ perceptions of the use of artificial intelligence decision aids to inform shared decision making: A systematic review. The Lancet. 2021;398:S80
  87. Elwyn G et al. Shared decision making: A model for clinical practice. Journal of General Internal Medicine. 2012;27:1361-1367
  88. Elwyn G et al. A three-talk model for shared decision making: Multistage consultation process. BMJ. 2017;359:j4891
  89. Abbasgholizadeh Rahimi S et al. Application of artificial intelligence in shared decision making: Scoping review. JMIR Medical Informatics. 2022;10(8):e36199
  90. Kokciyan N et al. A collaborative decision support tool for managing chronic conditions. In: The 17th World Congress of Medical and Health Informatics. London, UK: King’s College Open Access; 2019
  91. Kökciyan N et al. Applying metalevel argumentation frameworks to support medical decision making. IEEE Intelligent Systems. 2021;36(2):64-71
  92. Wang Y et al. A shared decision-making system for diabetes medication choice utilizing electronic health record data. IEEE Journal of Biomedical and Health Informatics. 2016;21(5):1280-1287
