Artificial Intelligence in Surgery, Surgical Subspecialties, and Related Disciplines

Written By

Ryan Yimeng Lee, Alyssa Imperatore Ziehm, Lauryn Ullrich and Stanislaw P. Stawicki

Submitted: 01 June 2023 Reviewed: 27 July 2023 Published: 06 September 2023

DOI: 10.5772/intechopen.112691


Abstract

Artificial intelligence (AI) and machine learning (ML) algorithms show promise in revolutionizing many aspects of surgical care. ML algorithms may be used to improve radiologic diagnosis of disease and predict peri-, intra-, and postoperative complications in patients based on their vital signs and other clinical characteristics. Computer vision may improve laparoscopic and minimally invasive surgical education by identifying and tracking the surgeon’s movements and providing real-time performance feedback. Eventually, AI and ML may be used to perform operative interventions that were not previously possible (nanosurgery or endoluminal surgery) with the utilization of fully autonomous surgical robots. Overall, AI will impact every surgical subspecialty, and surgeons must be prepared to facilitate the use of this technology to optimize patient care. This chapter will review the applications of AI across different surgical disciplines, the risks and limitations associated with AI and ML, and the role surgeons will play in implementing this technology into their practice.

Keywords

  • artificial intelligence
  • machine learning
  • robotics
  • surgery
  • nanotechnology
  • nanosurgery
  • computer vision
  • autonomy

1. Introduction

Artificial intelligence (AI) and machine learning (ML) are rapidly transitioning from the “experimental” stage into “mainstream adoption” [1, 2, 3]. The current pace of progress appears to be accelerating, with a growing number of potential applications of AI/ML in surgery and its various subspecialties [4]. These programs have shown promise in their capacity to process vast amounts of data, identify multivariate relationships within those data, and reduce the uncertainty of predictions, enabling alternative approaches to certain tasks [5, 6]. Still, AI has not yet progressed to fully automating tasks due to certain limitations, such as the inability to understand common-sense scenarios, adjust to untrained circumstances, and make intuitive or ethical judgments, all of which are abilities required of a surgeon [7, 8, 9, 10]. These complementary strengths suggest that the role of AI may be best realized in collaboration with human intelligence [11]. However, this has not stopped scholarly discussions from imagining increasingly practical roles for AI in the future, including concepts such as “autonomous actions in surgery” [12].

In this chapter, we will explore current and potential future applications of AI/ML in the sphere of surgery, surgical subspecialties, and related disciplines of medicine. Each section of this chapter will outline specific aspects where we believe AI may play a role within the context of surgical care delivery.

2. Methods

For the purposes of this narrative review, we performed an exhaustive literature search, with Google™ Scholar and PubMed as the primary source platforms. The primary search term was “surgery,” combined with the following secondary terms: “artificial intelligence,” “machine learning,” “technology,” and “subspecialty.” Specific names of surgical specialties (e.g., orthopedics, neurosurgery, and vascular surgery) were also employed. The primary search term “surgery,” in combination with each of the other keywords and in various iterations, returned more than 875,000 potential listings. Literature screening focused on sources with full-text availability, limited to the English language. In addition, various correspondences (e.g., Letters to the Editor and Brief Communications) were excluded. This resulted in approximately 142,000 secondary literature results. Within this group, the search was further limited to original research and reviews with at least five citations (per Google™ Scholar). With these criteria, our final list of potentially suitable articles was fewer than 2000. A more intensive, tertiary phase of article screening yielded 96 articles with direct relevance to this review. After this, secondary sources (identified during in-depth review of the 96 most relevant articles and their reference lists) were added. Using the above methodology, the resultant reference list includes 158 citations (Figure 1).

Figure 1.

Flowchart of the selection process for review articles.

In the primary search, only studies with five or more citations were considered. Because newer studies tend to have fewer citations, this may introduce selection bias against newer studies, including those that address previously raised concerns or raise new ones. Given the rapidly evolving nature of AI, future reviews could evaluate newer studies for potential innovations.

2.1 Focused list of AI/ML applications across surgical specialties

A focused list of topics regarding the implications and applications of AI and ML is presented below. AI is broadly defined as a system that can learn to think or act [13]. ML, which falls under the broad scope of AI, refers more specifically to an algorithm that adjusts itself based on detected patterns in data [13]. Deep learning is a subset of ML that uses neural networks to learn intricate relationships in data [14]. Each item is presented briefly, with relevant literature sources provided accordingly. It is important to note that a complete review encompassing all applications of AI/ML across all surgical specialties is beyond the scope of this chapter.

3. Perioperative risk assessment and surgical planning

Due to their ability to quickly and efficiently incorporate and compile large amounts of data, AI/ML paradigms are likely to be heavily involved in preoperative risk assessment in all fields of surgery. Through the collection of patient data and characteristics, such as weight, heart rate, blood pressure, comorbidities, and other factors, highly sophisticated models can be built into algorithms that predict a patient’s risk before undergoing a surgical procedure. With the ability to calculate risk, AI/ML may also bring the potential for appropriate mitigating strategies that could decrease patient morbidity and mortality [4, 15]. By utilizing large data sets organized by specific surgical procedures and procedure types, AI/ML-powered algorithms could be used to refine models that carry out statistical weight optimization for the variables associated with morbidity and/or mortality for each type of surgery, within a specific set of clinical circumstances (e.g., emergency versus nonemergency) or within a certain population (e.g., demographic). Assuming a representative sample, an effective AI/ML algorithm would allow surgeons and other perioperative medicine experts to input values for individual patients and return an objective preoperative risk assessment, leading to potential applications in precision medicine. For instance, there are multiple different bariatric surgeries available to patients, including sleeve gastrectomy, Roux-en-Y gastric bypass, adjustable gastric band, and biliopancreatic diversion [16]. Though sleeve gastrectomy is now the most common approach, each technique has trade-offs between cost, short-term morbidity, long-term morbidity, and long-term weight loss, and this can sometimes lead to complex decisions in choosing the optimal procedure [17]. Machine learning algorithms could help address this issue by using preoperative data to provide individualized recommendations, potentially leading to more appropriate bariatric procedure selection [16]. Recent studies have investigated similarly structured algorithms across many different types of surgeries and surgical challenges, from predicting the preoperative risk of cardiac complications and identifying a difficult airway prior to intubation to estimating the general risk–benefit balance of different procedural or surgical interventions [18, 19, 20, 21, 22]. When properly designed and implemented, such algorithms would allow for risk stratification and, thus, better preparation for adverse outcomes following surgery. Future improvements would increase the specificity and sensitivity of these algorithms, facilitating a more accurate prediction of perioperative risk. Additionally, AI algorithms may be able to provide quantitative predictions about outcomes with and without surgery, providing both surgeons and patients with the information needed for objective decision-making [23].
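
To make this concrete, the sketch below illustrates one way such a preoperative risk model could be structured, using a gradient-boosted classifier on tabular patient data. The file name, feature columns, and outcome label are hypothetical placeholders rather than any dataset or model described in the cited studies.

```python
# Minimal sketch: a gradient-boosted model for preoperative risk prediction.
# The registry file, feature names, and outcome label are illustrative only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical registry: one row per patient, binary label for a 30-day complication.
df = pd.read_csv("preop_registry.csv")
features = ["age", "bmi", "heart_rate", "systolic_bp", "diabetes", "copd", "emergency_case"]
X, y = df[features], df["complication_30d"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# The predicted probability serves as an objective preoperative risk estimate.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk))
```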

Additional preoperative risk assessment could take the form of dedicated ML analysis of radiologic imaging [24]. Preoperative imaging gives surgeons more information about the patient’s pathology and anatomy and is essential for preoperative planning. ML algorithms can be used in the preoperative setting to predict prognoses and augment surgical decision-making across various surgical specialties [25, 26, 27]. One example of preoperative ML models in practice is the use of computed tomography (CT) scans to diagnose lung cancer. ML evaluation of CT scans has shown sensitivities and specificities comparable to, and in some cases better than, those of radiologists [28]. Such models can be further augmented to provide data about each identified tumor and suggestions for surgical planning [29]. More widespread adoption of ML algorithms that read imaging could lead to advances in surgical planning for interventions ranging from lumbar decompression for spinal stenosis to assessment of corneal endothelium characteristics on specular microscopy for the treatment of corneal edema [20, 30]. The utilization of ML algorithms could transform how surgeons interpret CT scans preoperatively and could, in turn, improve patient care and surgical outcomes.
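
As a rough illustration of how an imaging classifier of this kind is typically structured, the sketch below defines a small convolutional network for classifying preprocessed CT patches. The architecture, input size, and class labels are illustrative assumptions, not a reproduction of any published lung cancer model.

```python
# Minimal sketch: a small convolutional classifier for 2D CT patches
# (benign vs. malignant nodule). Architecture and sizes are illustrative only.
import torch
import torch.nn as nn

class NoduleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # logits: benign vs. malignant
        )

    def forward(self, x):                          # x: (batch, 1, 64, 64) CT patches
        return self.head(self.features(x))

model = NoduleClassifier()
dummy_batch = torch.randn(8, 1, 64, 64)            # stand-in for preprocessed CT patches
print(model(dummy_batch).shape)                    # torch.Size([8, 2])
```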

Advances in the algorithmic interpretation of medical imaging have led to the emergence of radiomics, a field involving the analysis of medical images to extract quantitative information about the physiology or pathology of disease [31]. Radiomics adds another layer to how ML algorithms can interpret medical imaging and has shown particular promise in surgical oncology, where minute changes in image features can be associated with different prognoses. Typical features used in a radiomic workflow include the intensity of signals and the spatial distribution of those signals [32]. Because benign and malignant tumors have different microenvironments and expression of specific markers, magnetic resonance imaging (MRI) radiomics shows promise in differentiating malignant or benign tumors from normal tissue [32]. Radiomics could therefore improve patient outcomes through early identification of disease.
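
The sketch below shows how such first-order intensity features (mean, variance, skewness, entropy) might be computed over a segmented region of interest. The image volume and lesion mask are synthetic stand-ins; a real radiomics pipeline would add shape and texture features and careful preprocessing.

```python
# Minimal sketch: first-order radiomic features (intensity statistics) computed
# over a segmented region of interest. The "MRI" volume and mask are synthetic.
import numpy as np
from scipy.stats import kurtosis, skew

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Summarize the intensity distribution inside a segmented lesion."""
    voxels = image[mask > 0].astype(float)
    counts, _ = np.histogram(voxels, bins=64)
    p = counts[counts > 0] / counts.sum()            # normalized intensity histogram
    return {
        "mean": float(voxels.mean()),
        "variance": float(voxels.var()),
        "skewness": float(skew(voxels)),
        "kurtosis": float(kurtosis(voxels)),
        "entropy": float(-(p * np.log2(p)).sum()),    # spread of the intensity distribution
    }

image = np.random.normal(100, 15, size=(64, 64, 32))           # synthetic volume
mask = np.zeros(image.shape); mask[20:40, 20:40, 10:20] = 1    # synthetic lesion mask
print(first_order_features(image, mask))
```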

As a specific example, radiomics can be used to detect axillary lymph node (ALN) metastases in patients with breast cancer. The ALNs are the most common site of breast cancer metastasis, and early detection of ALN involvement can inform the surgical management of breast cancer [33]. Based on the results of the Z0011 clinical trial, the current diagnostic procedure for ALN metastases in most patients is sentinel lymph node biopsy (SLNB) [34]. Although this procedure is less invasive than ALN dissection, SLNB still carries the risk of lymphedema, axillary paresthesia, and reduced range of motion in the involved upper extremity [35]. Furthermore, in some cases, SLNB has been shown to have false negative rates in the range of 5–10% [36]. Finding more effective alternative ways to identify ALN metastases is therefore increasingly important. Radiomics has shown the ability to identify malignant tissue and detect ALN metastases at a higher rate than radiologists [37]. In the future, radiologists equipped with radiomics tools may be able to identify ALN metastases more efficiently and more accurately, leading to more prompt medical and surgical therapeutic interventions. Evidence suggests that radiomics may also be able to differentiate between subtypes of cancer based on the unique molecular profile, and resulting imaging appearance, of each subtype [38]. The ability to diagnose specific cancer subtypes from their radiologic imaging characteristics may allow surgeons to stratify patient prognoses and better determine medical and surgical management (e.g., precision medicine/surgery).

Preoperative uses of ML and AI could also improve patient outcomes for those who are awaiting organ transplants. More specifically, ML algorithms trained to analyze patient characteristics, such as age, sex, severity of disease, hemodynamic measurements, and other variables, could be used to predict waitlist mortality and posttransplant outcomes [39]. These programs could be used to improve patient outcomes more broadly through a more objective management of organ transplant waitlists and recipient match optimization. ML algorithms may also be used in the future in more direct applications to transplant surgery. For instance, in liver transplantation, graft-weight-to-recipient-body-weight (GW/RW) ratios <0.8% are associated with an increased risk of complications such as small-for-size syndrome [40]. Consequently, the estimation of graft weight in living donors is important for limiting adverse outcomes associated with graft size mismatch. Studies have been conducted on the potential use of ML models trained on donor age, sex, body mass index, CT scans, and other data to estimate the donor graft weight [40]. These models have the potential to greatly enhance the precision of graft weight estimation, improving outcomes of liver transplantation. Additionally, experiences learned from hepatic transplantation may be suitable for adoption across other areas of organ transplantation (e.g., kidney, pancreas, heart, and lung), similarly reducing various potentially preventable complications, improving patient clinical outcomes, and maximizing effective utilization of organs (Table 1) [41].
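
As a simplified illustration of this idea, the sketch below fits a regression model to hypothetical donor features, predicts a graft weight, and flags a predicted GW/RW ratio below 0.8%. Apart from the 0.8% cutoff cited above, all features, data, and model choices are illustrative assumptions.

```python
# Minimal sketch: regressing donor graft weight from donor characteristics and
# flagging a predicted graft-weight-to-recipient-weight (GW/RW) ratio below 0.8%.
# Features and data are synthetic placeholders, not a validated clinical model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical donor features: age, sex, BMI, CT-derived liver volume (mL).
X = rng.normal([45, 0.5, 26, 1400], [12, 0.5, 4, 250], size=(200, 4))
graft_weight_g = 0.45 * X[:, 3] + rng.normal(0, 40, 200)   # synthetic target

model = RandomForestRegressor(random_state=0).fit(X, graft_weight_g)

donor = np.array([[38, 1, 24, 1300]])
predicted_graft_g = model.predict(donor)[0]
recipient_weight_kg = 82
gw_rw_percent = predicted_graft_g / (recipient_weight_kg * 1000) * 100
print(f"Predicted GW/RW ratio: {gw_rw_percent:.2f}%")
if gw_rw_percent < 0.8:
    print("Warning: ratio below 0.8%, elevated risk of small-for-size syndrome.")
```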

| Source | Year of publication | Country of origin | Surgical discipline | Studied AI/ML algorithms | Major findings relevant to this review |
| --- | --- | --- | --- | --- | --- |
| Hashimoto et al. | 2018 | USA | All disciplines | ML surgical decision-making | AI in the form of ML, natural language processing, artificial neural networks, and computer vision has led to applications such as the detection of bleeding in tissue in video, analysis of Electronic Health Record (EHR) text, and predicting lung cancer staging based on diagnostic and therapeutic data |
| Loftus et al. | 2020 | USA | All disciplines | ML surgical decision-making | ML models may increase accuracy and reduce biases in surgical decision-making |
| Bihorac et al. | 2019 | USA | Major inpatient surgeries | ML preoperative risk of complications | ML algorithm using EHR data could predict the risk of certain complications and of mortality at 1, 3, 6, 12, and 24 months after surgery (areas under the curve (AUCs) of 0.82 and 0.94) |
| Zhou et al. | 2022 | China | Thyroid surgery | ML preoperative risk of complications | ML algorithm using preoperative patient data and neck circumference could predict difficult airway intubation (AUCs of 0.812 and 0.848) |
| Wilson et al. | 2021 | USA | Orthopedic surgery, neurosurgery | ML preoperative determination of surgery candidacy | ML algorithm using lumbar MRI scans could predict spinal surgery candidacy (AUC of 0.88) |
| Bellini et al. | 2021 | Italy | Thoracic surgery | ML preoperative risk of complications | ML models can evaluate preoperative data to provide individualized preoperative risk of outcomes after lung cancer resection and identification of pulmonary nodules |
| Malani et al. | 2023 | India | Gynecologic surgery | ML preoperative detection of disease | ML models can evaluate imaging to determine the presence of disease for surgical intervention |
| Shoham et al. | 2022 | Israel | Dermatologic surgery | ML preoperative prediction of surgery complexity | ML model using preoperative patient and tumor data can predict the complexity of surgical resection of nonmelanoma skin cancer (AUC of 0.79) |
| Bian et al. | 2023 | China | Surgical oncology | ML analysis of imaging | ML radiomics model using CT scans can predict the presence of lymph node metastases in patients with pancreatic ductal adenocarcinoma with better accuracy than clinicians alone (p < 0.001) |
| Etienne et al. | 2020 | France | Thoracic surgery | ML analysis of imaging, preoperative risk assessment | Multiple ML models can identify the presence of malignant nodules using patient CT scans |
| Fairchild et al. | 2023 | USA | Neurosurgery | ML analysis of imaging | ML model can identify the presence of difficult-to-detect brain metastases with 94% accuracy for prospectively diagnosed metastases and 80% accuracy for new metastases |
| Martin et al. | 2022 | USA | Orthopedic surgery | ML analysis of imaging, preoperative risk assessment | ML algorithms can detect the presence of fractures and automate the calculation of measurements such as coronal knee alignment and acetabular component inclination and version |
| Savage | 2020 | USA | Surgical oncology | ML analysis of imaging | ML algorithms can detect the presence of lung cancer at rates comparable to radiologists |
| Cui et al. | 2021 | China | Surgical oncology | ML analysis of imaging | ML model can identify the presence of lung cancer nodules (76.0% accuracy with 0.004 false positives/scan when double-read) and provide information about the number, coordinates, and suspicion of each nodule |
| Vigueras-Guillén et al. | 2020 | Netherlands | Ophthalmology | ML analysis of imaging | ML model can assess corneal endothelium density, coefficient of variation, and hexagonality using images from specular microscopy in 98.4% of specular images compared to 71.5% using previous software |
| Yu et al. | 2021 | China | Surgical oncology | ML analysis of imaging, radiomics | ML radiomics can predict the presence of axillary lymph node metastasis (AUCs of 0.88 and 0.87) and provide insight into the tumor microenvironment (immune cells, methylation, and long noncoding RNAs (lncRNAs)) |
| Chang et al. | 2021 | Taiwan | Neurosurgery | ML analysis of imaging, radiomics | ML radiomics can predict molecular subgroups of medulloblastoma based on the differing MRI profiles of each subgroup (AUCs of 0.82, 0.72, and 0.78) |
| Hsich et al. | 2019 | USA | Transplant surgery | ML preoperative risk assessment | ML model evaluated which variables have high importance in predicting heart transplant waitlist mortality, including glomerular filtration rate (GFR), serum albumin, and extracorporeal membrane oxygenation (ECMO) usage |
| Giglio et al. | 2023 | Italy | Transplant surgery | ML preoperative surgical decision-making | ML models trained on donor characteristics and CT scans can accurately predict liver donor graft weight to optimize donor-recipient matching with fewer errors than other methods (p < 0.001) |
| Gujio-Rubio et al. | 2020 | Spain | Transplant surgery | ML preoperative risk assessment | ML algorithms for preoperative risk assessment show promise in liver, pancreas, kidney, heart, and lung transplantation |

Table 1.

Summary of included studies on preoperative artificial intelligence/machine learning (AI/ML).

4. Intraoperative surgical decision-making

Although AI/ML-based algorithms and approaches can greatly improve patient outcomes when applied preoperatively, perhaps the most promising and powerful use of these programs is their ability to improve intraoperative care. Algorithms trained on patient vital signs, various biometric and non-biometric characteristics, electrocardiography (EKG), and other data points could be utilized to help facilitate real-time reduction of various intraoperative risks, including hypertension, hypoxemia, massive hemorrhage, and other complications [42, 43, 44]. Loftus et al. write that this comprehensive analysis of patient parameters using AI is especially important for more complex disease states, such as frailty [45]. Though frailty is a multifactorial condition affected by physical, cognitive, and social variables, it is currently diagnosed by a few physical, often subjective criteria. For instance, the Fried frailty phenotype assesses patients based on their recent physical activity, subjective feelings of exhaustion, walking speed, handgrip strength, and unintentional weight loss. Diagnosing frailty can therefore be inconsistent, even though frailty is known to increase morbidity, mortality, and the risk of other comorbidities that further increase surgical risk. Through expert-led ML training on large data sets, algorithms could be developed to better classify complex disease states such as frailty or sepsis and improve intraoperative risk assessment [45]. These outputs could further allow for augmented decision-making, or the advanced application of highly sophisticated models trained on multiple iterations of the same surgical procedure type. This, in turn, could provide decision-making assistance for surgical teams performing the same type of operation based on the patient’s vital signs, procedural characteristics, the progression of the surgery, and various other potential characteristics [46]. For instance, if a machine learning model identifies that a certain constellation of parameters is associated with worse outcomes, it could suggest that the surgical team address a specific aspect of patient care to improve the projected outcome or reduce various complication risks [4, 47, 48, 49, 50]. Komorowski et al. demonstrated the possibility of this type of AI through an algorithm that suggested optimal treatment and dosing options for sepsis patients, with the lowest patient mortality observed when clinician decisions matched the algorithm’s recommendations [51].
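
The sketch below illustrates the general shape of such a real-time alerting loop: a sliding window of intraoperative vital signs is scored, and an alert is raised when the predicted risk crosses a threshold. The risk function here is a placeholder heuristic standing in for a trained model, and all thresholds and variable names are assumptions.

```python
# Minimal sketch: scoring a sliding window of intraoperative vital signs and
# raising an alert when predicted risk crosses a threshold. The "model" is a
# placeholder heuristic standing in for a trained ML classifier.
from collections import deque

def predicted_risk(window):
    """Placeholder for model.predict_proba(); flags sustained hypotension/tachycardia."""
    mean_map = sum(v["map"] for v in window) / len(window)
    mean_hr = sum(v["hr"] for v in window) / len(window)
    return 0.9 if (mean_map < 65 or mean_hr > 120) else 0.1

window = deque(maxlen=12)                 # e.g., last 12 readings (~1 min at 5 s intervals)
vitals_stream = [{"map": 78, "hr": 82}] * 10 + [{"map": 58, "hr": 95}] * 12

for reading in vitals_stream:
    window.append(reading)
    if len(window) == window.maxlen and predicted_risk(window) > 0.8:
        print("ALERT: predicted intraoperative hypotension risk is high")
        break
```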

Surgery often places high demands on surgeons’ cognition, creating an opportunity for ML/AI algorithms to reduce cognitive load and further identify ways to improve surgical outcomes [50, 52, 53].

4.1 Intraoperative pathology and histology determination

Clinical algorithms based on AI/ML have the potential to be highly helpful when healthcare professionals must quickly “make sense of” large amounts of aggregate/consolidated data, including text-based content [54, 55, 56]. One of the fields within the broader domain of AI that has gained particular interest in recent years is so-called “computer vision” [57, 58]. Advancements in computer vision have been applied to object recognition, facial recognition, and action recognition, and potential applications of this technology in surgery and related specialties are readily apparent [59]. These include the use of AI to interpret radiologic imaging and a potentially important role in intraoperative histological analysis. The current workflow for intraoperative pathology in many oncologic surgeries involves excising a portion of the tumor and transporting the sample to the laboratory for preparation and interpretation by a pathologist. This process can take 20–30 minutes, prolonging the overall procedure, and each additional step introduces potential barriers to timely diagnosis [60]. Applications of computer vision could help address the challenges associated with intraoperative interpretation of histology. Data are also emerging on the use of ML algorithms to analyze images from Raman spectroscopy to distinguish malignant from benign tumors. The underlying algorithm is functionally similar to that used in radiologic analyses, but Raman spectroscopy images can be further processed to resemble hematoxylin and eosin (H&E) staining, which may better allow surgeons and pathologists to verify ML classifications of tissue samples [61]. Intraoperative pathology consultations are quite common in neurosurgical tumor procedures, breast cancer surgery, hepatobiliary and pancreatic resections, lymph node dissections, and dermatopathology [62, 63, 64, 65, 66]. These procedures may also benefit from AI-aided streamlining of intraoperative histology and pathology in the future.
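
As a simplified illustration of spectral classification of this kind, the sketch below trains a classifier to separate synthetic Raman-like spectra into “normal” and “tumor” classes. The spectra, labels, and pipeline are illustrative assumptions; real systems such as those cited above are trained on pathologist-labeled spectra and validated prospectively.

```python
# Minimal sketch: classifying synthetic Raman-like spectra as tumor vs. normal
# using dimensionality reduction plus a support-vector classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 1800, 500)

def spectrum(peak):
    """One synthetic spectrum: a Gaussian band plus noise; 'tumor' spectra shift the band."""
    return np.exp(-((wavenumbers - peak) ** 2) / 2000) + rng.normal(0, 0.05, wavenumbers.size)

X = np.array([spectrum(1450) for _ in range(80)] + [spectrum(1660) for _ in range(80)])
y = np.array([0] * 80 + [1] * 80)          # 0 = normal, 1 = tumor (synthetic labels)

clf = make_pipeline(PCA(n_components=10), SVC())
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```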

The use of computer vision algorithms in surgery can be further expanded to include the characterization of molecular tissue margins. When removing malignant tumors, patient outcomes are optimal with maximal resection of the tumor while sparing as much healthy tissue as possible. Positive margins, or cancerous cells that remain after incomplete resection, are associated with recurrence of cancer, leading to worse patient outcomes. Some estimates indicate that positive margins may be found in approximately 5% of liver and breast cancer resections, so identification of tumor margins remains a significant problem [67, 68]. As mentioned previously, Raman spectroscopy has already been used by pathologists to distinguish neoplastic from normal tissue based on differential Raman scattering, and future advancements could lead to intraoperative Raman spectroscopy for determining tumor margins [69]. As with other imaging modalities, computer vision algorithms are likely to be able to identify features such as positive margins. This could allow surgeons to identify tumor margins within the operating room without waiting for margins to be identified histologically, increasing the efficiency and outcomes of tumor resection surgeries (Table 2).

| Source | Year of publication | Country of origin | Surgical discipline | Studied AI/ML algorithms | Major findings relevant to this review |
| --- | --- | --- | --- | --- | --- |
| Hatib et al. | 2018 | USA | All surgical disciplines | ML intraoperative risk assessment | ML model was able to predict intraoperative hypotension from the analysis of perioperative arterial pressure waveforms (area under the curve (AUC) of 0.95, 15 min before the hypotensive event) |
| Lundberg et al. | 2018 | USA | All surgical disciplines | ML intraoperative risk assessment | ML model was able to predict intraoperative hypoxemia from preoperative patient characteristics, real-time ventilation settings, anesthetic agents, etc. (AUC of 0.76 compared to 0.60 with the anesthesiologist’s prediction) |
| Lee et al. | 2022 | Korea | All surgical disciplines | ML intraoperative risk assessment | ML model using pre- and intraoperative parameters (arterial pressure waveforms, oxygen saturation, and ST segment elevation) was able to accurately predict intraoperative massive transfusion (AUC of 0.972 compared to 0.824 using the benchmark model) |
| Loftus et al. | 2019 | USA | All surgical disciplines | ML intraoperative risk assessment | ML algorithms will be useful for modeling complex disease states (such as frailty and sepsis) for a more accurate intraoperative risk assessment |
| Yang et al. | 2019 | USA | All surgical disciplines | ML decision-making | ML decision support tools may be able to provide clinical decision-making in all aspects of medicine |
| Pappada et al. | 2013 | USA | Surgical critical care | ML decision-making | ML model was able to predict glycemic trends in critically ill trauma and cardiothoracic surgery patients with 96.7% accuracy for normal glucose values and 53.6% accuracy for hyperglycemic episodes |
| Komorowski et al. | 2018 | UK | Surgical critical care | ML decision-making | ML model was developed to recommend sepsis treatment strategy and dosage based on patient demographics, vital signs, laboratory values, medications received, etc., and patient mortality was lowest when clinician treatments matched AI recommendations |
| Barth and Seamon | 2015 | USA | All surgical disciplines | ML decision-making | Situational awareness is vital for patient safety, and AI may help reduce cognitive load to increase situational awareness |
| De Melo et al. | 2020 | USA | All surgical disciplines | ML decision-making | Virtual assistants significantly decreased self-reported cognitive load in participants undergoing cognitively demanding tasks |
| Voulodimos et al. | 2018 | Greece | All surgical disciplines | Computer vision | Recent advancements in computer vision include object detection, face recognition, action recognition, and pose estimation |
| Hollon et al. | 2020 | USA | Neurosurgery | Computer vision | Computer vision models can analyze Raman spectroscopic images to aid real-time intraoperative brain tumor diagnosis (overall accuracy of 94.6% compared to 93.9% with pathologist interpretation) |
| Orringer et al. | 2017 | USA | Neurosurgery | Computer vision | Computer vision model can process Raman spectroscopy of brain tumor samples into simulated H&E staining and can be used to classify brain tumors (AUC of 0.984) |
| Daoust et al. | 2021 | Canada | Surgical oncology | Computer vision | Computer vision model validated on porcine tissue can identify tissue margins based on Raman spectroscopy with accuracies of 0.990 and 0.967 |

Table 2.

Summary of included studies on intraoperative artificial intelligence/machine learning (AI/ML).

5. Enhancement of laparoscopic and minimally invasive surgery

In addition to aiding in tumor resections, computer vision is likely to impact many other aspects of surgery, especially with the increased integration of minimally invasive and robotic surgery [70]. Computer vision ML algorithms may in the future be able to process, in real time, the video taken during minimally invasive surgery (MIS) and robotic surgery, providing the surgeon with a broad array of additional, structured, and potentially actionable information. For example, computer vision algorithms may be useful in enhancing laparoscopic images. Given the anatomy of the abdomen, one issue common to an entire range of laparoscopic video signals is image quality. Nonuniform lighting, light-absorbing surfaces and substances (e.g., blood), and other causes of low endoscopic visibility may lead to increased surgical risk and decreased efficiency in the operating room (OR) [71]. To address these setbacks, computer vision algorithms could process laparoscopic images in real time, digitally increasing lighting, removing vapor haze, and potentially filling in aspects of the image that are obscured by low visibility [72]. These applications have the potential to greatly improve the ease of use of laparoscopes during surgery, reducing the risk of incorrect targeting and the time spent operating.
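
For intuition, the sketch below applies classical enhancement (contrast-limited adaptive histogram equalization plus gamma correction) to a dim laparoscopic-style frame. This is a hand-crafted stand-in for the learned enhancement and dehazing approaches discussed above, and the synthetic frame is only a placeholder.

```python
# Minimal sketch: classical brightness/contrast enhancement of a laparoscopic frame
# (CLAHE on the luminance channel plus gamma correction). A learned enhancement or
# dehazing network would replace this in the approaches discussed above.
import cv2
import numpy as np

def enhance_frame(frame_bgr: np.ndarray, gamma: float = 1.4) -> np.ndarray:
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)                                   # local contrast enhancement
    enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype("uint8")
    return cv2.LUT(enhanced, table)                      # brighten dark regions

dark_frame = np.random.randint(0, 60, (480, 640, 3), dtype=np.uint8)  # synthetic dim frame
print(enhance_frame(dark_frame).mean() > dark_frame.mean())           # True: frame brightened
```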

Further integration of computer vision in surgery could also lead to better identification of important anatomical landmarks in minimally invasive and robotic surgery. As mentioned previously, computer vision has already been used to identify objects in images and faces in security videos, and a logical extension of these uses would be the capacity to identify important surgical landmarks. For instance, rates of bile duct injury in laparoscopic cholecystectomies (LCs) hover around 0.45–0.8% [73, 74]. One of the most common causes of bile duct injury in LCs is misidentification of the common bile duct as the cystic duct [75]. An ML model trained on imaging data from laparoscopic surgeries was developed to identify critical anatomy in LC videos with near-human accuracy, potentially reducing the risk of bile duct injury in LCs in the future [76]. The largest challenge in building a model for this use is the requirement for labeled video data: any actionable model would need to be trained on many videos of laparoscopic surgeries in which the cystic duct is pre-identified in each of the thousands of frames within each training video. This formidable task is further complicated by natural variations in human anatomy, necessitating an even larger data set covering the “normal variants” that can be encountered in the OR. Despite current limitations, it is likely only a matter of time before high-fidelity models are created, with significant downstream benefits.

Of importance, AI/ML may also play a role as a component of augmented reality (AR) in surgery [77, 78]. One area with a relatively mature AR application is spine surgery, for example using the XVision Spine System (Augmedics, Arlington Heights, IL, USA) [79]. In this instance, AR-guided surgery works by using CT or MRI imaging to develop a three-dimensional (3D) model and then employing the AR program to overlay the model on the patient using AR glasses or other image projection modalities. Though this is a relatively new technology, initial studies investigating the use of AR systems for cadaveric pedicle screw placement indicate an absolute increase in accuracy from 88% (via fluoroscopy) to 94% (via AR guidance) [80]. In the immediate future, AR implementations will most likely be concentrated in orthopedic surgery and neurosurgery due to the relative immobility of bones and the spine compared to visceral organs. However, the potential increased use of peri- and intraoperative imaging in abdominal and thoracic surgeries may increase the viability of AR guidance in other operative settings [81, 82].

5.1 Surgical education

Perhaps the most significant benefit of AR in surgery is in medical education. Head-mounted devices used in AR have already proven useful in various aspects of medical education, including anatomy and surgery [83]. In the near future, AR may allow surgeons to practice various procedures anywhere in a low-stakes environment and with reduced cognitive effort, allowing for more sustained practice [84]. AR may eventually be used within the operating room as a teaching tool, allowing surgeons to manipulate personalized models of the patient’s organs based on some of the techniques described previously. Thus, AR may become a valuable supplemental tool to train future surgeons and other specialists who want to practice procedures.

Machine learning algorithms may play other essential roles in surgical education. Aspiring surgeons start their training with varying degrees of motor skill and learning ability; with ML algorithms, students may in the future be classified based on generated learning curves. Gao et al. analyzed the proficiency of students performing various surgical tasks using an algorithm that predicted the number of trials each student needed to complete the task proficiently [85]. Similar algorithms may eventually be applied to planning training resources, optimizing learning for all students within a surgical program. Other ML programs may be able to provide feedback to learners about specific skills. For instance, surgical skill is an important factor in patient outcomes, directly by preventing complications and indirectly by mediating other elements such as the length of surgery [86]. Thus, measuring and improving surgical skill is important in improving patient care. However, there is a lack of practical objective assessments of surgical skill and dexterity, and many current assessments are subjective in nature [87]. AI algorithms may be able to address these concerns.

Video-based learning remains a promising learning method for surgical residents [88]. However, video-based review can be limited by having to parse through long videos, especially when reviewing multiple examples. Hashimoto et al. showed that it is possible to develop a computer vision model capable of accurately identifying distinct phases of a surgery [89]. This technology allows surgeons to quickly find specific stages of an operation for more efficient review, and similar AI models have been validated in other types of surgeries as well [90]. While beyond the scope of these studies, such models could be supplemented with AI that directly analyzes the surgeon’s skills. For instance, an algorithm could be created to rate surgical motion economy within the operating theater and, by proxy, surgical skill [91]. Using videos of surgeons performing the same procedure, the algorithm could provide objective feedback on motion economy and path length compared to other surgeons in a video database. AI programs that combine surgical phase recognition and surgical skill analysis could indicate the stages of a procedure in which the surgeon could improve motion economy. Surgeons, especially those in training, may not be fully aware of unnecessary movements they make during surgery, and these algorithms could provide an objective way to compare and teach motion economy. AI algorithms may be applied to similar measures, such as fluidity of motion, force application in laparoscopic surgery, or a combination of these factors. In the future, these algorithms may provide objective insight into surgical skill and dexterity, allowing for targeted practice of specific skills (Table 3).
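
As a simple illustration of what an objective motion-economy metric might look like, the sketch below computes path length and a straightness ratio from tracked instrument-tip coordinates. The trajectories are synthetic, and treating this ratio as a proxy for skill is an assumption for illustration only.

```python
# Minimal sketch: an objective motion-economy metric from tracked instrument-tip
# coordinates (hypothetical output of a computer vision tracker).
import numpy as np

def path_length(positions: np.ndarray) -> float:
    """Total distance traveled by the instrument tip; positions is (frames, 2) in pixels."""
    return float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())

def motion_economy(positions: np.ndarray) -> float:
    """Ratio of straight-line displacement to actual path length (1.0 = perfectly direct)."""
    direct = float(np.linalg.norm(positions[-1] - positions[0]))
    return direct / path_length(positions)

trainee = np.cumsum(np.random.randn(300, 2) * 3, axis=0) + [100, 100]  # wandering trajectory
expert = np.linspace([100, 100], [160, 130], 300)                      # near-direct trajectory
print(f"trainee economy: {motion_economy(trainee):.2f}, expert economy: {motion_economy(expert):.2f}")
```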

| Source | Year of publication | Country of origin | Surgical discipline | Studied AI/ML algorithms | Major findings relevant to this review |
| --- | --- | --- | --- | --- | --- |
| Kumar et al. | 2015 | USA | Minimally invasive surgery | Computer vision | Computer vision algorithms, especially with growing usage of surgical robots, may be used to decrease cognitive load through identification of intraoperative phases and segmentation of objects and people within the surgical theater |
| Xia et al. | 2022 | Canada | Minimally invasive surgery | Computer vision | Computer vision algorithm can enhance and refine laparoscopic images to optimize vision in occluded regions of the abdominal cavity |
| Ruiz-Fernandez et al. | 2020 | Spain | Minimally invasive surgery | Computer vision | Computer vision application was able to process imaging from laparoscopic surgeries to remove water vapor haze and improve visibility in dark areas |
| Owen et al. | 2022 | UK | Minimally invasive surgery | Computer vision | Computer vision algorithm developed to identify critical structures in laparoscopic surgeries with 65–75% accuracy (compared to 70% baseline); labels were verified by three expert surgeons afterward |
| Qian et al. | 2019 | USA | All surgical disciplines | Augmented reality | Augmented reality could innovate surgery in several ways, including surgical guidance during laparoscopic surgeries, overlay of tumor margins, feedback on the distance between instrument and anatomical structures, and the planning of port placement |
| Gorpas et al. | 2019 | USA | Surgical oncology | Augmented reality | Augmented reality program can overlay fluorescence data within the da Vinci surgical robot for real-time identification of normal and malignant tissue |
| Peh et al. | 2020 | Germany | Spine surgery | Augmented reality | Augmented reality surgical navigation showed improved accuracy of thoracic and lumbar pedicle screw placement in cadavers compared to standard fluoroscopy-guided pedicle placement (94% vs. 88%) |
| Soler et al. | 2004 | France | Abdominal surgery | Augmented reality | Augmented reality shows promise in digestive surgery through 3D modeling of abdominal structures, overlay visualizations during operations, and planning of needle targeting |
| Rad et al. | 2022 | | Thoracic surgery | Augmented reality | Augmented reality may be used in thoracic surgery to improve surgical training, enhance planning through visualization of structures, and provide visual assistance during surgery |
| Peden et al. | 2016 | UK | Surgical education | Augmented reality | Augmented reality in suturing skill development in suturing-naïve students has been shown to be more enjoyable than conventional learning with comparable skill development |
| Barteit et al. | 2021 | Germany | Surgical education | Augmented reality | Augmented and virtual reality surgical simulations of sleeve gastrectomy led to subjectively decreased cognitive effort and decreased stress |
| Gao et al. | 2020 | USA | Surgical education | ML | ML model trained on initial completion times of suturing-naïve medical students was able to predict the number of trials needed for proficiency |
| Hashimoto et al. | 2019 | USA | Surgical education | Computer vision | Computer vision algorithm can identify the specific phase of laparoscopic sleeve gastrectomy with over 85% accuracy |
| Garrow et al. | 2021 | Germany | Surgical education | Computer vision | Computer vision algorithms have shown the ability to identify the specific phase of various surgeries including sleeve gastrectomy, laparoscopic cholecystectomy, and colorectal surgery |
| Azari et al. | 2019 | USA | Surgical education | Computer vision | Computer vision data for tracking surgeon hand movements during surgery were used to train an ML model for evaluating surgical skill, with measures of motion economy being most precise (R² = 0.64) |

Table 3.

Summary of included studies on computer vision and augmented reality (AR).

6. Postoperative risk assessment

The use of ML and AI in postoperative risk assessment would work similarly to peri- and intraoperative risk assessment, using patient vital signs and characteristics. After performing a surgery, the surgeon must be able to triage patients by likelihood of postoperative complications. Improperly triaged high-risk patients may be sent to hospital floors with a high patient-to-clinician ratio, which can limit the frequency of patient assessments and lead to higher rates of morbidity and mortality [92]. Loftus et al. developed an AI model capable of using pre- and perioperative labs and vital signs, intraoperative anesthesia variables (such as a high intraoperative fraction of inspired oxygen (FiO2)), and postoperative evaluations (including the scheduled postoperative location) to identify undertriaged patients at risk of postoperative complications [92]. In the future, similar technology could be integrated into the electronic health record and send mobile alerts to physicians, allowing for quicker alterations to patient care [93]. Because postoperative risk assessments can utilize more complete information, they have been shown to provide more accurate predictions of postsurgical prognoses and complications [94, 95].

Machine learning models for postoperative care may also be well suited for predicting the pain management needs of the surgical patient. Opiates are commonly prescribed for postoperative pain. However, the opioid epidemic affects over 3 million people in the USA, and it is estimated that 500,000 people in the USA are dependent on opiates [96]. Physicians are now much more aware of the risks of opioid addiction; therefore, opioid dependence and abuse are important considerations when prescribing opioids for postoperative pain. A few studies have investigated the use of ML to predict long-term opioid use. One study developed a model to predict long-term opioid use, defined as opioid prescriptions requested in addition to the original prescription, in patients who underwent elective hip arthroplasty. Internal validation indicated that the model had good predictive value for the testing cohorts in the study [97]. Other studies have examined similar algorithms in breast cancer surgery, anterior cruciate ligament (ACL) reconstruction, and joint arthroplasty [98, 99, 100]. While these studies did not include external validation, these proof-of-concept studies indicate that ML may eventually have utility in predicting long-term opioid use, allowing for more informed prescription of pain medications and potentially earlier identification of patients at risk for opioid dependence.

Machine learning algorithms may also be used for gait analysis in postoperative care. For most elective joint surgeries, postoperative assessment involves patient-reported outcome measures or performance-based metrics such as range of motion and mobility [101]. These assessment methods may introduce bias through subjective ratings of outcome measures by the patient or through biased ratings of performance metrics by physicians [101]. Gait analysis using ML may provide ancillary objective analysis of postsurgical outcomes. One study showed that an ML model incorporating walking speed, gait cycle, maximum step force, and other biomechanical variables was able to separate patients who underwent total knee arthroplasty from patients who underwent unicompartmental knee arthroplasty [102]. Other studies have shown similar potential in total knee arthroplasty and ACL reconstruction [103, 104]. Furthermore, computer vision can likely be leveraged to increase the power of these models. Programs already exist that allow users to mark parts of the body in videos, such as the knees and elbows, and follow the motion of these structures throughout the video; however, manual input of data is time-consuming and prone to human error. To alleviate these concerns, multiple markerless models that do not require human input have been developed to map out patient gait, tracking the movement of anatomical structures such as the ankles, knees, hips, shoulders, head, and arms [105, 106, 107]. Based on gait estimation from video, future ML algorithms may be able to stratify patients by how well they are likely to regain function following surgery. Algorithms may also be able to identify which patients might experience recurring issues or be at higher risk of falls based on their gait (Table 4) [108].
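
The sketch below illustrates how simple gait parameters (cadence and stride length) might be derived from a markerless ankle-keypoint trace. The keypoint series is synthetic, and a real pipeline would start from pose-estimation output and handle noise, occlusion, and calibration.

```python
# Minimal sketch: deriving simple gait parameters from markerless ankle keypoints
# (e.g., output of a pose-estimation model). The keypoint series here is synthetic.
import numpy as np

fps = 30
t = np.arange(0, 10, 1 / fps)                       # 10 s of video
# Synthetic horizontal ankle positions (metres): forward progression plus stride oscillation.
left_ankle_x = 1.2 * t + 0.35 * np.sin(2 * np.pi * 0.9 * t)

# Heel strikes approximated as local maxima of the oscillatory component.
oscillation = left_ankle_x - 1.2 * t
strikes = np.where((oscillation[1:-1] > oscillation[:-2]) & (oscillation[1:-1] > oscillation[2:]))[0] + 1

stride_times = np.diff(t[strikes])
cadence_steps_per_min = 2 * 60 / stride_times.mean()          # two steps per stride
stride_length_m = np.diff(left_ankle_x[strikes]).mean()
print(f"cadence ~ {cadence_steps_per_min:.0f} steps/min, stride length ~ {stride_length_m:.2f} m")
```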

| Source | Year of publication | Country of origin | Surgical discipline | Studied AI/ML algorithms | Major findings relevant to this review |
| --- | --- | --- | --- | --- | --- |
| Loftus et al. | 2021 | USA | Surgical critical care | ML postoperative risk assessment | ML algorithms trained on pre- and intraoperative patient data extracted from the hospital Electronic Health Record (EHR) were used to develop a model that could accurately identify critically ill patients who were undertriaged (area under the receiver operating characteristic curve (AUROC) of 0.92) |
| Ren et al. | 2022 | USA | Surgical critical care | ML postoperative risk assessment | ML algorithm trained on real-time perioperative data extracted from the hospital EHR could predict and alert physicians about categorized postoperative complications (AUC between 0.78 and 0.89 depending on the complication predicted) |
| Shahian et al. | 2012 | USA | Cardiac surgery | ML postoperative risk assessment | ML models trained on combined clinical and administrative data allowed for the analysis of perioperative and long-term postoperative data for accurate prediction of survival up to 2500 days post-CABG |
| Forte et al. | 2022 | Netherlands | Cardiac surgery | ML postoperative risk assessment | ML models implementing postoperative data were more accurately able to predict 30-day and 1-year mortality compared to models using just preoperative data (AUCs of 0.75 and 0.79 using pre- and postoperative data vs. AUCs of 0.70 and 0.69 using preoperative data only) |
| Kunze et al. | 2021 | USA | Orthopedic surgery | ML postoperative risk assessment | ML models trained on preoperative data, including Harris hip score, age, body mass index (BMI), etc., were able to predict prolonged opioid use in patients after hip arthroscopy (AUC of 0.75) |
| Lötsch et al. | 2018 | Germany | Surgical oncology | ML postoperative risk assessment | ML models trained on clinical and psychological data (such as subjective answers to pain perception surveys) were able to accurately exclude the possibility of persistent pain (95% accuracy) following breast cancer surgery, although they were unable to predict which patients would experience persistent pain |
| Anderson et al. | 2020 | USA | Orthopedic surgery | ML postoperative risk assessment | ML model trained on preoperative demographic data, military employment data (such as rank and time deployed), and prescription data was able to predict patients at risk of long-term opioid use (AUC of 0.76) following ACL reconstruction surgery |
| Gabriel et al. | 2022 | USA | Orthopedic surgery | ML postoperative risk assessment | ML model trained on patient demographic data, comorbidities, and perioperative data (such as postoperative day 1 (POD1) morphine equivalents) was able to predict long-term opioid use (up to AUC of 0.94 with a balanced bagging classifier) |
| Kokkotis et al. | 2022 | Greece | Orthopedic surgery | ML postoperative risk assessment | ML algorithms may be able to provide insight into gait and postoperative outcomes following total knee arthroplasty and ACL surgeries through the use of biomechanical measurements |
| Jones et al. | 2016 | UK | Orthopedic surgery | ML postoperative risk assessment | ML algorithm using biomechanical measurements was able to differentiate between patients who underwent total knee arthroplasty and unicompartmental knee arthroplasty, and which group had gaits more similar to those of healthy patients |
| Martins et al. | 2015 | Portugal | Orthopedic surgery | ML postoperative risk assessment | ML model was used to determine gait differences based on three different assistive devices after total knee arthroscopy, allowing for the classification of the type of assistive device used |
| Kokkotis et al. | 2022 | Greece | Orthopedic surgery | ML postoperative risk assessment | ML model trained on ground reaction forces and biometric data allowed for the classification of ACL-deficient, ACL-reconstructed, and healthy patients with accuracy of up to 94.95% |
| Cao et al. | 2017 | USA | Orthopedic surgery | Computer vision | Convolutional neural network was implemented to create a program that could estimate human poses even with occlusion of feet or arms during motion |
| Chen et al. | 2022 | China | Orthopedic surgery | Computer vision | ML models could classify the type of gait based on computer vision-aided anatomical markers and calculations with up to 98% accuracy |
| Moro et al. | 2022 | Italy | Orthopedic surgery | Computer vision | Computer vision algorithm allows for automated gait analysis with biomechanical measurements comparable to manually marked video |
| Ng et al. | 2020 | Canada | Orthopedic surgery | Computer vision | Computer vision-aided models trained on human pose estimation and gait variables identified cadence, average margin of stability, and minimum margin of stability as factors significantly associated with falls during the study |

Table 4.

Summary of included studies on postoperative artificial intelligence/machine learning (AI/ML).

7. Autonomous robots and artificial intelligence

While the aforementioned applications of AI/ML can greatly enhance surgical outcomes, the most impactful applications of AI will likely involve the development of autonomous robots able to apply and expand on these algorithms. Robotic autonomy can be categorized based on the need for human involvement in robot function. Within this proposed scale, 0 denotes a machine that has no inherent autonomy and is completely controlled by the operator; 1 represents a robot that the operator controls but that provides some degree of assistance; and 2–5 represent increasing levels of autonomy, with 5 representing “true autonomy” of the machine without need for human intervention [109]. Currently, most surgical machines score at level 0 or 1, with systems such as the da Vinci surgical system and robotic endoscopic systems falling squarely in these categories [110]. Applications of level 2 automated robots, such as autonomous suturing, have been described [111]. At the current stage, autonomous function is limited to simple tasks, though there is a push to develop machines that can autonomously perform more complex tasks. Some experiments have shown success using autonomous robots to ablate abnormal tissue or perform anastomosis of the small bowel, but these were performed in idealized experimental settings with low trial numbers [112]. Still, these proof-of-concept experiments show that higher-level autonomous robots might emerge sooner rather than later. Such complex autonomous robots would integrate multiple sensory modalities, from computer vision to tactile, proprioceptive, and auditory information [113].

As AI becomes more sophisticated, the process of training it also becomes increasingly complex. Three main methods exist for vision-based learning in artificial intelligence: imitation learning, reinforcement learning, and transfer learning [114]. Imitation learning involves observing an expert performing the task. Based on the observed actions, the algorithm updates its policy (its learned mapping from observations to actions) to more closely match the demonstration [115]. In an ideal environment, imitation learning will lead to the most reproducible behavior [116]. The use of imitation learning in surgery is limited by its inability to generalize behaviors: when environments differ from the demonstration environments, such as a different orientation of visceral organs or anatomical variations, the performance of imitation learning algorithms will be suboptimal [116]. This can be alleviated somewhat by dividing the imitation task into subtasks and training each subtask according to starting circumstances. However, generalizability is still lower than with the other learning methods [115].
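
A minimal sketch of imitation learning in its simplest form (behavior cloning) is shown below: a policy is fit by supervised learning on state–action pairs from demonstrations. The states, actions, and “expert” mapping are synthetic placeholders, not surgical data.

```python
# Minimal sketch of imitation learning via behavior cloning: fit a policy by
# supervised learning on (state, expert action) pairs from demonstrations.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
states = rng.normal(size=(2000, 6))                 # e.g., instrument pose + target offset
expert_actions = states[:, :3] * 0.5 - states[:, 3:] * 0.2   # synthetic "expert" mapping

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
policy.fit(states, expert_actions)                  # imitate the demonstrated behavior

new_state = rng.normal(size=(1, 6))
print("imitated action:", policy.predict(new_state))  # degrades when states leave the demo distribution
```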

Reinforcement learning is another learning paradigm used in AI. It involves trial and error: the agent performs its task and updates its actions based on the outcomes of those actions. An example of reinforcement learning is the training of the chess engine AlphaZero, in which the engine played many simulated games against itself and improved its playing ability based on the outcome of each game [117]. Reinforcement learning is a powerful tool that generalizes behaviors better than imitation algorithms, but it requires many trials to optimize performance. Additionally, training such a model in a real surgical environment would be dangerous.
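
The sketch below shows reinforcement learning in its simplest tabular form (Q-learning) on a toy navigation task, purely to illustrate the trial-and-error update from action outcomes; surgical applications would use far richer state representations and simulated environments.

```python
# Minimal sketch of reinforcement learning: tabular Q-learning on a toy 1-D task
# (move the agent to a goal position). Illustrates trial-and-error updates only.
import numpy as np

n_states, goal, actions = 10, 9, [-1, +1]
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != goal:
        a = rng.integers(len(actions)) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = int(np.clip(s + actions[a], 0, n_states - 1))
        reward = 1.0 if s_next == goal else -0.01                        # outcome of the action
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])  # update from outcome
        s = s_next

print("learned actions per state:", [actions[int(a)] for a in Q.argmax(axis=1)])
```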

Fortunately, these limitations can be circumvented via transfer learning, in which the agent learns through reinforcement learning in a simulated environment and then transfers its knowledge to the real environment [114]. Using the simulation, the agent can quickly be trained over many trials before being deployed in real circumstances. The central issue with transfer learning is readily apparent: when there is discordance between the simulated and real environments, the performance of the model will be suboptimal. A few methods have been proposed to improve transfer learning outcomes. One is simply improving the quality of the simulation: computational simulations are much more efficient than physical manipulations of simulated environments, and improvements in computational power are enhancing virtual simulation environments to better model the real world. Other methods involve changing the policies of the agents to better adapt to circumstances that were not seen during simulation training. One proposed system involves learning multiple skill latents in simulation. Broadly defined, “skill latents” represent prelearned or predetermined “primitive skills” that can subsequently be combined within a “model-predictive control” framework to perform more complex tasks [118]. These skill latents can then be accessed and simulated in real time when previously unseen situations arise, and the skill latents that produce the optimal effect can be chosen for the agent’s actions [118]. Instead of perfectly modeling the real world, this approach aims to make the agent’s learned behavior as flexible and broadly applicable as possible. Because transfer learning models can be trained in simulation, and because these models can be adaptive, it is likely that autonomous surgical robots in the near future will use transfer learning models to navigate the surgical field (Table 5).
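
As a toy illustration of the sim-to-real idea, the sketch below pretrains a model on abundant simulated data and then fine-tunes it on a small “real” dataset with a deliberate domain gap. The data and the warm-start fine-tuning strategy are illustrative assumptions, not the skill-latent approach described above.

```python
# Minimal sketch of sim-to-real transfer: pretrain on abundant simulated data,
# then continue training ("fine-tune") on a small set of real-world samples.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_data(n, bias):
    """Simulated vs. 'real' domains differ by a systematic bias (the domain gap)."""
    X = rng.normal(size=(n, 5))
    y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8]) + bias
    return X, y

X_sim, y_sim = make_data(5000, bias=0.0)          # cheap, plentiful simulation data
X_real, y_real = make_data(50, bias=0.7)          # scarce real data with a domain gap

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=300, warm_start=True, random_state=0)
model.fit(X_sim, y_sim)                           # pretrain in simulation
model.fit(X_real, y_real)                         # transfer: fine-tune on real samples

X_test, y_test = make_data(200, bias=0.7)
print("R^2 on real-domain test data:", round(model.score(X_test, y_test), 3))
```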

| Source | Year of publication | Country of origin | Surgical discipline | Studied AI/ML algorithms | Major findings relevant to this review |
| --- | --- | --- | --- | --- | --- |
| Shademan et al. | 2016 | USA | All surgical disciplines | Automation | An autonomous robot using computer vision and an automated suturing algorithm was able to perform suturing tasks on ex vivo and living porcine tissue |
| Hu et al. | 2018 | USA | Neurosurgery | Automation | Autonomous robot using computer vision algorithms was able to create a 3D reconstruction of the surgical cavity and successfully perform robotic ablation of a surgical phantom in seven out of ten trials |
| Tapia et al. | 2020 | Switzerland | All surgical disciplines | Automation | A proprioceptive liquid-metal stretch sensor was able to reconstruct deformation of soft actuators in real time |
| Hua et al. | 2021 | China | All surgical disciplines | Automation | Deep reinforcement learning, imitation learning, and transfer learning are the main methods used to teach autonomous robots |
| Rivera et al. | 2022 | USA | All surgical disciplines | Automation | Machine learning through primitive imitation led to increased performance compared to other learning algorithms in two different primitive tasks |
| Kumar et al. | 2022 | USA | All surgical disciplines | Automation | Though imitation learning algorithms are very powerful in ideal settings, reinforcement learning is more optimal when there is sufficient noise in the data set, across various learning policies and tasks |
| Silver et al. | 2018 | USA | All surgical disciplines | Automation | Reinforcement learning algorithm was used to create a program capable of learning and optimizing performance in chess, shogi, and Go |
| He et al. | 2018 | | All surgical disciplines | Automation | Transfer learning algorithms involving prelearned skill latents could be successfully applied to complete new tasks (such as drawing and pushing an object) |

Table 5.

Summary of included studies on autonomous robots.

8. Nanotechnology

One technological field that has gained increasing interest in recent years is nanotechnology. Nanotechnology refers to devices or machines on the scale of nanometers to microns and encompasses a wide range of technologies, including nanosensors, nanoparticles, and nanobots [119]. Nanotechnology opens doors to new therapeutics for several reasons. Most obviously, the size of these devices allows access to previously inaccessible spaces. In addition, because of their nanoscale size, these machines have higher surface area-to-volume ratios, leading to increased reactivity, and quantum effects play a larger role in their interactions than they do at the macroscale [120]. While nanotechnology does not necessarily need to involve artificial intelligence, these two fields may work synergistically to help surgeons of the future provide interventions not previously possible.
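As a rough numeric illustration of the surface area-to-volume argument (idealizing particles as spheres, which real nanostructures only approximate), the ratio scales as 3/r, so shrinking a particle from 1 μm to 10 nm increases its relative surface area one hundredfold:

```python
import math

def surface_to_volume_ratio(radius_m):
    """Surface area / volume for a sphere of the given radius (returns 1/m)."""
    area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return area / volume  # simplifies to 3 / radius_m

for label, r in [("1 um particle", 1e-6), ("100 nm particle", 1e-7), ("10 nm particle", 1e-8)]:
    print(f"{label}: SA/V = {surface_to_volume_ratio(r):.2e} per meter")
```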

Because “nano-machines” operate on a scale much smaller than conventional robots, nanotechnology can allow for better and more selective delivery of drugs, such as chemotherapy agents. For instance, nanoparticle capsules may protect agents from enzymatic degradation or unfavorable pH environments, or allow drugs to cross the blood–brain barrier [121, 122]. Additionally, one of the most powerful aspects of nanotechnology is the increased specificity of drug delivery targeting. Attaching specific moieties to nanoparticles can allow for targeted binding and release of encapsulated contents [123]. This application has clear implications for cancer treatment: although chemotherapeutic agents are useful in treating cancer, they often cause a wide range of adverse effects due to their systemic distribution. Various nanoparticle carriers, including nanocrystals, liposomes, and carbon nanotubes, can be fitted with surface coatings allowing cell-specific delivery of cancer therapies, ultimately reducing side effects [121, 124, 125]. AI may further increase the specificity of nanoparticle drug delivery through analysis of patterns of biomarkers. By integrating AI into biomarker sensing, the presence and concentrations of particular biomarkers can be used to classify disease type and stage, enabling targeted and modifiable release of drugs from nanocapsules [126]. The selectivity of nanoparticles can also be leveraged for targeted ablation therapy for certain cancers. For instance, synthetic high-density lipoprotein nanoparticles were used to facilitate the delivery of photothermal ablative agents to hepatocellular carcinoma cells in mouse models, reducing tumor burden and stimulating a local immune response [127]. Similar technologies could be applied to other ablation techniques, including radiation, cryoablation, and electroporation, in a wide variety of cancers [128].
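In principle, the biomarker-driven release concept cited above reduces to an ordinary classification problem. The sketch below is a hypothetical toy example (synthetic biomarker concentrations, an arbitrary confidence threshold, and scikit-learn assumed available); it is not drawn from any of the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: rows are patients, columns are concentrations of three biomarkers.
# Label 1 = "target disease state present", 0 = absent. Values are entirely fabricated for illustration.
healthy = rng.normal(loc=[1.0, 0.5, 0.2], scale=0.2, size=(200, 3))
disease = rng.normal(loc=[2.0, 1.5, 0.9], scale=0.3, size=(200, 3))
X = np.vstack([healthy, disease])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

def should_release(biomarker_panel, threshold=0.9):
    """Gate payload release on the classifier's confidence that the disease signature is present."""
    p = clf.predict_proba(np.asarray(biomarker_panel).reshape(1, -1))[0, 1]
    return p >= threshold, p

decision, prob = should_release([2.1, 1.4, 1.0])
print(f"release={decision}, predicted probability={prob:.2f}")
```

In a real system, the classifier, the biomarker panel, and the release threshold would all require clinical validation far beyond this illustration.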

Besides its use in surgical oncology, nanotechnology may allow surgeons to operate at the nanoscale. Atomic force microscopy (AFM) may be an integral part of nanosurgery in the future. At its core, AFM consists of a microscopic cantilever fitted with a tip, along with a laser and a photodetector. As the tip of the AFM traverses a surface, such as tissue, changes in the surface move the tip and cause deflections of the laser, which are detected by the photodetector [129]. AFM enables the detection of changes on the order of angstroms [129]. Furthermore, the force applied by the tip to the surface can be used to touch, push, and cut the surface, providing the ability to manipulate membranes, proteins, and DNA [130, 131, 132]. Some experiments show the viability of using AFM to alter cell morphology and puncture the cell membranes of individual cells [133]. Other potential future uses of AFM include signaling pathway identification, targeted drug delivery using specialized AFM tips, and disruption of cellular connections, such as dendrites, without interfering with cell bodies [130, 134]. Other potential “nano-machines” are limited only by human creativity and may include nanopropellers, nanowires, and “nanograbbers” (microscopic machines created by Leong et al. capable of performing in vitro biopsies) [134, 135].
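The tip-sample force in AFM is commonly estimated from cantilever deflection via Hooke’s law, F = k·Δz. The values below are only representative (soft contact-mode cantilevers are often quoted in the 0.01–1 N/m range) and simply show why nanonewton-scale forces accompany the nanometer-scale deflections described above:

```python
def cantilever_force_nN(spring_constant_N_per_m, deflection_nm):
    """Hooke's law: force = k * deflection, returned in nanonewtons."""
    deflection_m = deflection_nm * 1e-9
    return spring_constant_N_per_m * deflection_m * 1e9  # convert N -> nN

# Representative (assumed) values: a 0.1 N/m cantilever deflected by 0.5 nm and by 5 nm.
for dz in (0.5, 5.0):
    print(f"k = 0.1 N/m, deflection = {dz} nm -> F = {cantilever_force_nN(0.1, dz):.3f} nN")
```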

Besides the direct manipulation of tissue, nanotechnology may also enable a wider range of procedures. For instance, nanotechnology may increase the feasibility of islet transplantation in diabetes. While results from the Edmonton protocol show that islet transplantation holds promise for long-term glycemic control in type 1 diabetes, its practicality has been limited by the immune response against exogenous islet cells, which causes gradual loss of islet function [136]. These concerns could be addressed by encapsulating islet cells with nanoparticles, and several approaches have been investigated to decrease the immunogenicity of exogenous material [137, 138, 139]. Thus, alongside improving drug delivery, nanoparticle capsules may also be used to shield their contents and suppress the immune response.

Finally, nanoparticles may play roles in facilitating hemostasis and preventing infection after surgery. Many different hemostatic nanomaterials, such as mesoporous xerogels, polyphosphate-bound gold colloids, titanium dioxide (TiO2) nanotubes, and many others, have been proposed [140]. While the additional properties of each nanomaterial differ, they are thought to function by providing scaffolding for coagulation factors [140]. Antimicrobial nanoparticles may also be used for infection control in surgery. Postoperative infection carries a high rate of morbidity, with an estimated 11% of deaths in the intensive care unit (ICU) resulting from surgical site infections [141]. Antimicrobial nanoparticles may therefore help address postsurgical infection risk. Silver nanoparticles have shown promise in accumulating within bacteria and disrupting various cellular processes, such as DNA replication and protein translation [142]. Silver nanoparticles have particular potential to improve infection control in orthopedic surgery, because orthopedic implants are susceptible to colonization by biofilm-forming bacteria, which carries a high risk of morbidity [143]. One concern is the dose-dependent toxicity of silver nanoparticles on human tissue [144]. However, studies have indicated that osteogenic cells may be relatively resilient to this toxicity: though silver nanoparticles initially decrease Saos-2 (human osteosarcoma cell line) survivability, Saos-2 cells appear to adapt to silver nanoparticle exposure over the course of 35 days in vitro [143]. Given these findings, it is possible that silver nanoparticles could be used to coat orthopedic implants without unduly compromising osteoblast function (Table 6).

| Source | Year of publication | Country of origin | Surgical discipline | Studied AI/ML algorithms | Major findings relevant to this review |
|---|---|---|---|---|---|
| Roduner | 2006 | Germany | All surgical disciplines | Nanotechnology | Nanorobots have unique properties due to their microscopic size, including increased surface area-to-volume ratios and increased strength of quantum effects |
| Hofferberth et al. | 2016 | USA | Thoracic surgery | Nanotechnology | Nanotechnology may have numerous uses in thoracic surgery, such as nanoparticles mapping lymphatic drainage of malignant tumors, targeting tumor cells for drug delivery, and selective cell ablation |
| Krůpa et al. | 2014 | Czech Republic | Neurosurgery | Nanotechnology | Various nanotechnologies have shown promise in transporting drugs across the blood–brain barrier, allowing for targeted delivery into brain tumors |
| Zhang et al. | 2013 | China | Surgical oncology | Nanotechnology | Nanotechnology may be able to improve cancer care through encapsulated chemotherapy drugs, allowing for targeted distribution. Nanoparticles may also be able to increase intracellular accumulation of drugs within cancer cells |
| Xu et al. | 2021 | China | Surgical oncology, urology | Nanotechnology | Nanotechnology may be able to improve bladder cancer care through targeted intravesical delivery of various drugs |
| Khawaja | 2011 | Pakistan | Neurosurgery | Nanotechnology | Nanotechnology may improve glioblastoma multiforme outcomes through targeted chemotherapy delivery, thermo- and photo-therapy, and surgical nanorobots |
| Adir et al. | 2020 | Israel | Surgical oncology | Nanotechnology, ML | ML algorithms can be used to analyze complexes of biomarkers to classify various cellular disease states, allowing for targeted delivery of drugs via nanotechnology |
| Wang et al. | 2021 | China | Surgical oncology | Nanotechnology | Nanoplatforms may be able to improve the delivery of cancer drugs as seen in multiple studies |
| Binnig et al. | 1986 | USA | All surgical disciplines | Nanotechnology | Atomic force microscope that could measure vertical displacement of the cantilever tip less than 1 Å was developed |
| Song et al. | 2012 | USA | All surgical disciplines | Nanotechnology | A modified atomic force microscope setup that would allow for mechanical manipulation of cellular samples, with possible applications to separating cellular junctions, was created |
| Li et al. | 2005 | USA | All surgical specialties | Nanotechnology | A modified atomic force microscope attached with specific antibodies was used to recognize cellular receptors and provide augmented reality feedback to the user, allowing for nanomanipulation of the sample |
| Wen and Goh | 2004 | Canada | All surgical specialties | Nanotechnology | Atomic force microscopy was able to incise a single collagen fibril |
| Yang et al. | 2015 | USA | All surgical specialties | Nanotechnology | Atomic force microscopy was used to penetrate fixed HaCaT cell membranes and disrupt intermediate filaments, leading to decreased intercellular connections |
| Brodie and Vasdev | 2018 | UK | All surgical specialties | Nanotechnology | Nanomachines, such as micropipettes to cleave dendritic connections or “micrograbbers” to biopsy-specific cells, may innovate nanosurgery in the future |
| Leong et al. | 2009 | USA | All surgical specialties | Nanotechnology | A tetherless, temperature-activated microgripper 190 μm when closed was able to take biopsy samples from ex vivo tissue samples |
| Im et al. | 2012 | South Korea | Surgical oncology | Nanotechnology | Coating rat allotransplanted islet cells with nanolayer shielding almost doubled survival against immune response (6.8 days vs. 3.6 days) |
| Park et al. | 2018 | South Korea | Surgical oncology | Nanotechnology | Nanolayer shielding of allotransplanted islet cells was validated in monkey models, with heparin nanoshielded islet grafts surviving average of 108 days vs. 68.5 days in the control |
| Izadi et al. | 2018 | Iran | Surgical oncology | Nanotechnology | Nanolayer shielding of mouse islet cells with poly(ethylene glycol) was conjugated with Jagged-1 (JAG-1), which led to significant reduction in fasting blood glucose (p < 0.01) |
| Sun et al. | 2018 | China | Orthopedic surgery | Nanotechnology | Nanotechnology has enabled the development of many different kinds of synthetic hemostatic materials, including silica-based xerogels, self-assembled peptides, ethylene/propylene oxide gels, TiO2 nanotubes, polyphosphate gold colloids, and others |
| Rai et al. | 2012 | India | All surgical disciplines | Nanotechnology | Silver nanoparticles have been shown in various studies to have broad-spectrum antimicrobial effects through disruption of various cellular processes |
| Castiglioni et al. | 2017 | Italy | Orthopedic surgery | Nanotechnology | High concentrations of silver nanoparticles initially reduced Saos-2 osteogenic cell numbers, but this reduction decreased over 35 days without impairing cellular differentiation |

Table 6.

Summary of included studies on nanotechnology.


9. Limitations and concerns

Though AI shows great promise in changing many aspects of medical and surgical care, it is important to highlight the limitations of this technology. The construction of ML algorithms relies on large amounts of data to produce generalizable models that are not unduly influenced by noise or irrelevant features within the data set [145]. Even for well-established classification tasks, such as identifying tumors from imaging, both training and test data sets still require annotation, manpower, and time [12, 146]. These factors limit how quickly such algorithms can be generated. Additionally, ML algorithms identify patterns from input data without interpretation or critical analysis and may be prone to biases within the data set. Biases often exist in who participates in clinical trials, and this may lead to outputs that disproportionately disadvantage minorities and other groups that are not well represented in the training data for the ML model [147, 148]. In some cases, minute changes or fluctuations in the input data can drastically affect the model output [146]. In the same vein, poor data, such as poor video or image quality, can have deleterious effects on the quality of the model [149]. Because of this, standardization of imaging techniques and video characteristics is vital for model efficacy [146]. Verifying the integrity of these models is also integral to maintaining patient autonomy: faulty or biased recommendations made by AI models can affect a patient’s ability to provide informed consent for their care [150]. Finally, there may be a risk of “adversarial attacks,” defined as data deliberately crafted or manipulated with the intention of biasing model outputs [151]. Notably, potential methods for adversarial attacks have been identified for every type of machine learning model and may be as overt as modifying input data or as seemingly innocuous as rotating an image slightly [151, 152]. There may be many motivations for adversarial data input, from fraudulent reimbursement to altering research outcomes, so it is vital that methods are implemented to prevent both intentional and unintentional biases in these models.
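To illustrate how small an adversarial change can be, the toy example below uses a hand-built linear classifier on two features (loosely analogous to the fast gradient sign method, and not taken from the cited references); a perturbation of ±0.06 per feature is enough to flip the decision.

```python
import numpy as np

# A fixed linear "model": predict the positive class when w @ x + b > 0.
w = np.array([2.0, -1.0])
b = -0.5

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.30, 0.20])       # original input; score = 2*0.30 - 0.20 - 0.50 = -0.10 -> class 0
epsilon = 0.06                   # small perturbation budget per feature
x_adv = x + epsilon * np.sign(w) # push each feature in the direction that raises the score

print("original :", x,     "->", predict(x))      # 0
print("perturbed:", x_adv, "->", predict(x_adv))  # 1, despite a change of only +/-0.06 per feature
```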

Ethical concerns surrounding the use of AI center on oversight and liability. It is important that AI is tested and verified before actual clinical use, but there is currently no governing body and no approval process for reviewing ML algorithms in clinical care, let alone for autonomous surgery [12]. This is especially important because of the “black-box” effect, which is especially prevalent in deep learning algorithms. Due to the “hidden” layers in deep learning neural networks, it is often not entirely clear how the model arrives at its output, and this can limit how much trust physicians and patients place in the recommendations made by these algorithms [153]. Without entities to review these algorithms, AI will remain primarily experimental. There are also many legal concerns regarding the use of AI in surgery. One of the most prominent concerns among physicians is liability [154, 155]. Currently, there is essentially no case law on the legality of AI in clinical settings [155]. Therefore, legal entities must establish how malpractice and liability are handled if complications occur because of the use of AI. Without answers to these complex legal questions, the use of AI in surgery will be severely limited. According to Price et al., physicians are incentivized to minimize the use of AI under current law. Normally, a physician’s actions are privileged under tort law if the standard of care is followed [155]. However, if a physician follows an AI recommendation that goes against the current standard of care, even if the AI recommendation is correct, any resulting poor outcome could lead to litigation [155]. Thus, under current law, the clinical use of AI will mostly be limited to confirming clinical decisions, greatly reducing its potential value. Finally, in cases where data are stored on the cloud or are crowd-sourced, there may be data privacy concerns [149]. Additionally, with shared data there may be questions about the ownership of uploaded data [149]. Thus, with each application of AI, agreements must clearly delineate medicolegal responsibilities, who owns uploaded data, and how models may be monetized.


10. Future implications for surgeons

Though important barriers must be addressed before AI/ML can be more broadly implemented in direct patient care, it is evident how powerful AI/ML can be in finding patterns and facilitating or directing clinical care in the future. While some surgeons may be concerned about AI replacing job opportunities, AI should instead be seen as a dynamic tool for enhancing surgeons’ abilities to provide optimal patient care. In the near future, AI algorithms will potentially improve the diagnosis of conditions and enhance the prediction of complications. These algorithms can consolidate vast amounts of data (more than any surgeon could reasonably process cognitively) and thus may be ideal for helping surgeons identify patients at risk for certain complications, ultimately making surgery safer for patients [156]. This is in addition to many other benefits already appreciated across immediately adjacent clinical and nonclinical fields, applications, and implementations. When properly leveraged, AI will help decrease cognitive load and allow surgeons to focus more on other aspects of patient care.

Artificial intelligence may enhance many aspects of patient care in the future, but machines cannot replace the human aspect of medicine. Though AI will allow providers to parse massive data sets and find patterns that would previously have been missed, it does not diminish the need for human interaction and the surgeon-patient relationship [157]. That relationship remains an essential aspect of care and is vital to gaining the patient’s trust. Given the complex nature of ML algorithms, patients may not be willing to trust recommendations from AI, especially in the near term. Thus, surgeons will remain instrumental in the care of patients and can serve as advocates for the many appropriate uses of AI. Though surgeons in the future may utilize AI to enhance diagnosis, medical management, and surgical procedures, it is critical that they do not rely solely on these algorithms. Sole reliance on AI may lead to the “deskilling” of providers and to mistakes made by these algorithms going unnoticed [158].

Finally, while AI/ML may help enhance many other aspects and facets of patient care, it is critically important to remember that surgeons will most likely remain ultimately responsible for interpreting the patterns identified by AI and for determining the role of AI in surgery. Therefore, it is vital for surgeons to work with data scientists, machine learning experts, and other healthcare team members to determine how AI can be utilized for optimal patient care. AI has the potential to be a powerful tool, but it will only be as helpful as the surgeons who wield it.

11. Conclusions

Artificial intelligence and machine learning have a myriad of uses across all surgical disciplines. AI may enhance disease diagnosis, help surgeons identify patients at risk of complications, and improve the ease of minimally invasive surgery. Furthermore, AI shows promise in improving surgical education and may eventually be used in fully autonomous surgery and nanosurgery. Despite its potential uses, AI is currently limited by large data requirements, concerns about the integrity of data input, and ethical and legal considerations. Surgeons should work to address these issues and take an active role in determining the best ways to implement AI to optimize patient care.

References

  1. 1. Ibrahim A et al. Artificial intelligence in digital breast pathology: Techniques and applications. The Breast. 2020;49:267-273
  2. 2. Hua TK. A short review on machine learning. Authorea. 2022
  3. 3. Lonsdale H, Jalali A, Gálvez JA, Ahumada LM, Simpao AF. Artificial intelligence in anesthesiology: Hype, hope, and hurdles. Anesthesia and Analgesia. 2020;130:1111-1113
  4. 4. Hashimoto DA, Rosman G, Rus D, Meireles OR. Artificial intelligence in surgery: Promises and perils. Annals of Surgery. 2018;268(1):70-76
  5. 5. Jarrahi MH. In the age of the smart artificial intelligence: AI’s dual capacities for automating and informating work. Business Information Review. 2019;36(4):178-187
  6. 6. Agrawal A, Gans JS, Goldfarb A. Artificial intelligence: The ambiguous labor market impact of automating prediction. Journal of Economic Perspectives. 2019;33(2):31-50
  7. 7. Guszcza J, Lewis H, Evans-Greenwood P. Cognitive collaboration: Why humans and computers think better together. Deloitte Review. 2017;20:8-29
  8. 8. Brynjolfsson E, McAfee A. Winning the race with ever-smarter machines. MIT Sloan Management Review. 2012;53(2):53
  9. 9. Jarrahi MH. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons. 2018;61(4):577-586
  10. 10. Goldfarb A, Lindsay J. Artificial Intelligence in War: Human Judgment as an Organizational Strength and a Strategic Liability. Brookings Institution; 2020
  11. 11. De Luca G. The development of machine intelligence in a computational universe. Technology in Society. 2021;65:101553
  12. 12. Gumbs AA et al. Artificial intelligence surgery: How do we get to autonomous actions in surgery? Sensors. 2021;21(16):5526
  13. 13. Russel SJ, Norvig P. Artificial Intelligence a Modern Approach. Upper Saddle River, New Jersey, USA: Pearson Education Inc.; 2010
  14. 14. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444
  15. 15. Loftus TJ et al. Artificial intelligence and surgical decision-making. JAMA Surgery. 2020;155(2):148-158
  16. 16. Pereira SS, Guimarães M, Monteiro MP. Towards precision medicine in bariatric surgery prescription. Reviews in Endocrine and Metabolic Disorders. 2023:1-17
  17. 17. Topart P. Obesity surgery: Which procedure should we choose and why? Journal of Visceral Surgery. 2023;160(2):S30-S37
  18. 18. Bihorac A et al. MySurgeryRisk: Development and validation of a machine-learning risk algorithm for major complications and death after surgery. Annals of Surgery. 2019;269(4):652
  19. 19. Zhou C, Wang Y, Xue Q, Yang J, Zhu Y. Predicting difficult airway intubation in thyroid surgery using multiple machine learning and deep learning algorithms. Frontiers in Public Health. 2022;10:937471
  20. 20. Wilson B et al. Predicting spinal surgery candidacy from imaging data using machine learning. Neurosurgery. 2021;89(1):116-121
  21. 21. Bellini V, Valente M, Del Rio P, Bignami E. Artificial intelligence in thoracic surgery: A narrative review. Journal of Thoracic Disease. 2021;13(12):6963
  22. 22. Malani SN IV, Shrivastava D, Raka MS, Raka MS IV. A comprehensive review of the role of artificial intelligence in obstetrics and gynecology. Cureus. 2023;15(2):e34891
  23. 23. Shoham G, Berl A, Shir-az O, Shabo S, Shalom A. Predicting Mohs surgery complexity by applying machine learning to patient demographics and tumor characteristics. Experimental Dermatology. 2022;31(7):1029-1035
  24. 24. Bian Y et al. Artificial intelligence to predict lymph node metastasis at CT in pancreatic ductal adenocarcinoma. Radiology. 2023;306(1):160-169
  25. 25. Etienne H et al. Artificial intelligence in thoracic surgery: Past, present, perspective and limits. European Respiratory Review. 2020;29(157):200010
  26. 26. Fairchild AT et al. A deep learning-based computer aided detection (CAD) system for difficult-to-detect brain metastases. International Journal of Radiation Oncology* Biology* Physics. 2023;115(3):779-793
  27. 27. Martin RK, Ley C, Pareek A, Groll A, Tischer T, Seil R. Artificial intelligence and machine learning: An introduction for orthopaedic surgeons. Knee Surgery, Sports Traumatology, Arthroscopy. 2022;30:1-4
  28. 28. Savage N. How AI is improving cancer diagnostics. Nature. 2020;579(7800):S14-S14
  29. 29. Cui X et al. Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. European Journal of Radiology. 2022;146:110068
  30. 30. Vigueras-Guillén JP, van Rooij J, Engel A, Lemij HG, van Vliet LJ, Vermeer KA. Deep learning for assessing the corneal endothelium from specular microscopy images up to 1 year after ultrathin-DSAEK surgery. Translational Vision Science & Technology. 2020;9(2):49-49
  31. 31. Koçak B, Durmaz EŞ, Ateş E, Kılıçkesmez Ö. Radiomics with artificial intelligence: A practical guide for beginners. Diagnostic and Interventional Radiology. 2019;25(6):485
  32. 32. Shur JD et al. Radiomics in oncology: A practical guide. Radiographics. 2021;41(6):1717-1732
  33. 33. Zhang Y et al. Risk factors for axillary lymph node metastases in clinical stage T1-2N0M0 breast cancer patients. Medicine. 2019;98(40):e17481
  34. 34. Giuliano AE et al. Effect of axillary dissection vs no axillary dissection on 10-year overall survival among women with invasive breast cancer and sentinel node metastasis: The ACOSOG Z0011 (Alliance) randomized clinical trial. JAMA. 2017;318(10):918-926
  35. 35. Wilke LG et al. Surgical complications associated with sentinel lymph node biopsy: Results from a prospective international cooperative group trial. Annals of Surgical Oncology. 2006;13:491-500
  36. 36. Wang Y et al. Improved false negative rate of axillary status using sentinel lymph node biopsy and ultrasound-suspicious lymph node sampling in patients with early breast cancer. BMC Cancer. 2015;15(1):1-7
  37. 37. Yu Y et al. Magnetic resonance imaging radiomics predicts preoperative axillary lymph node metastasis to support surgical decisions and is associated with tumor microenvironment in invasive breast cancer: A machine learning, multicenter study. eBioMedicine. 2021;69:103460
  38. 38. Chang F-C et al. Magnetic resonance radiomics features and prognosticators in different molecular subtypes of pediatric Medulloblastoma. PLoS One. 2021;16(7):e0255500
  39. 39. Hsich EM et al. Variables of importance in the scientific registry of transplant recipients database predictive of heart transplant waitlist mortality. American Journal of Transplantation. 2019;19(7):2067-2076
  40. 40. Giglio MC et al. Machine learning improves the accuracy of graft weight prediction in living donor liver transplantation. Liver Transplantation. 2022;29(2):172-183
  41. 41. Guijo-Rubio D, Gutiérrez PA, Hervás-Martínez C. Machine learning methods in organ transplantation. Current Opinion in Organ Transplantation. 2020;25(4):399-405
  42. 42. Hatib F et al. Machine-learning algorithm to predict hypotension based on high-fidelity arterial pressure waveform analysis. Anesthesiology. 2018;129(4):663-674
  43. 43. Lundberg SM et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nature Biomedical Engineering. 2018;2(10):749-760
  44. 44. Lee SM et al. Development and validation of a prediction model for need for massive transfusion during surgery using intraoperative hemodynamic monitoring data. JAMA Network Open. 2022;5(12):e2246637-e2246637
  45. 45. Loftus TJ, Upchurch GR, Bihorac A. Use of artificial intelligence to represent emergent systems and augment surgical decision-making. JAMA Surgery. 2019;154(9):791-792
  46. 46. Yang Q, Steinfeld A, Zimmerman J. Unremarkable AI: Fitting intelligent decision support into critical, clinical decision-making processes. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Scotland, UK: Association for Computing Machinery Special Interest Group on Computer-Human Interaction in Glasgow; 2019. pp. 1-11
  47. 47. Stawicki S. Fundamentals of Patient Safety in Medicine and Surgery. New Delhi, India: Wolters Kluwer (India) Pvt. Ltd; 2015
  48. 48. Laessig MA et al. Potential implementations of blockchain technology in patient safety: A high-level overview. In: Blockchain in Healthcare: From Disruption to Integration. Cham, Switzerland: Springer; 2023. pp. 117-140
  49. 49. Pappada SM et al. Evaluation of a model for glycemic prediction in critically ill surgical patients. PLoS One. 2013;8(7):e69475
  50. 50. Barth N, Seamon MJ. Situation awareness in patient safety. Fundamentals of Patient Safety in Medicine and Surgery. 2015;15:105
  51. 51. Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA. The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care. Nature Medicine. 2018;24(11):1716-1720
  52. 52. Ehrmann DE et al. Evaluating and reducing cognitive load should be a priority for machine learning in healthcare. Nature Medicine. 2022;28(7):1331-1333
  53. 53. De Melo CM, Kim K, Norouzi N, Bruder G, Welch G. Reducing cognitive load and improving warfighter problem solving with intelligent virtual assistants. Frontiers in Psychology. 2020;11:554706
  54. 54. Lee LW, Dabirian A, McCarthy IP, Kietzmann J. Making sense of text: Artificial intelligence-enabled content analysis. European Journal of Marketing. 2020;54:615-644
  55. 55. Ofli F et al. Combining human computing and machine learning to make sense of big (aerial) data for disaster response. Big Data. 2016;4(1):47-59
  56. 56. Özdemir V, Hekim N. Birth of industry 5.0: Making sense of big data with artificial intelligence, “the internet of things” and next-generation technology policy. Omics: A Journal of Integrative Biology. 2018;22(1):65-76
  57. 57. Vyborny CJ, Giger ML. Computer vision and artificial intelligence in mammography. AJR. American Journal of Roentgenology. 1994;162(3):699-708
  58. 58. Li X, Shi Y. Computer vision imaging based on artificial intelligence. In: 2018 International Conference on Virtual Reality and Intelligent Systems (ICVRIS). Hunan, China: IEEE; 2018. pp. 22-25
  59. 59. Voulodimos A, Doulamis N, Doulamis A, Protopapadakis E. Deep learning for computer vision: A brief review. Computational Intelligence and Neuroscience. 2018;2018:7068349
  60. 60. Hollon TC et al. Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nature Medicine. 2020;26(1):52-58
  61. 61. Orringer DA et al. Rapid intraoperative histology of unprocessed surgical specimens via fibre-laser-based stimulated Raman scattering microscopy. Nature Biomedical Engineering. 2017;1(2):0027
  62. 62. Mahe E et al. Intraoperative pathology consultation: Error, cause and impact. Canadian Journal of Surgery. 2013;56(3):E13
  63. 63. Chen SB, Novoa RA. Artificial intelligence for dermatopathology: Current trends and the road ahead. Seminars in Diagnostic Pathology. Elsevier. 2022;39(4):298-304
  64. 64. Bari H, Wadhwani S, Dasari BV. Role of artificial intelligence in hepatobiliary and pancreatic surgery. World Journal of Gastrointestinal Surgery. 2021;13(1):7
  65. 65. Robertson S, Azizpour H, Smith K, Hartman J. Digital image analysis in breast pathology—From image processing techniques to artificial intelligence. Translational Research. 2018;194:19-35
  66. 66. Golden JA. Deep learning algorithms for detection of lymph node metastases from breast cancer: Helping artificial intelligence be seen. JAMA. 2017;318(22):2184-2186
  67. 67. Hotsinpiller W, Everett A, Richman J, Parker C, Boggs D. Rates of margin positive resection with breast conservation for invasive breast cancer using the NCDB. The Breast. 2021;60:86-89
  68. 68. Choti MA et al. Trends in long-term survival following liver resection for hepatic colorectal metastases. Annals of Surgery. 2002;235(6):759
  69. 69. Daoust F et al. Handheld macroscopic Raman spectroscopy imaging instrument for machine-learning-based molecular tissue margins characterization. Journal of Biomedical Optics. 2021;26(2):022911
  70. 70. Kumar S, Singhal P, Krovi VN. Computer-vision-based decision support in surgical robotics. IEEE Design & Test. 2015;32(5):89-97
  71. 71. Xia W, Chen E, Pautler S, Peters T. Laparoscopic image enhancement based on distributed retinex optimization with refined information fusion. Neurocomputing. 2022;483:460-473
  72. 72. Ruiz-Fernández D, Galiana-Merino JJ, de Ramón-Fernández A, Vives-Boix V, Enríquez-Buendía P. A dcp-based method for improving laparoscopic images. Journal of Medical Systems. 2020;44:1-9
  73. 73. Pesce A, Portale TR, Minutolo V, Scilletta R, Destri GL, Puleo S. Bile duct injury during laparoscopic cholecystectomy without intraoperative cholangiography: A retrospective study on 1,100 selected patients. Digestive Surgery. 2012;29(4):310-314
  74. 74. Karvonen J, Gullichsen R, Laine S, Salminen P, Grönroos JM. Bile duct injuries during laparoscopic cholecystectomy: Primary and long-term results from a single institution. Surgical Endoscopy. 2007;21:1069-1073
  75. 75. Moghul F, Kashyap S. Bile Duct Injury. Treasure Island (FL): StatPearls Publishing; 2022
  76. 76. Owen D, Grammatikopoulou M, Luengo I, Stoyanov D. Automated identification of critical structures in laparoscopic cholecystectomy. International Journal of Computer Assisted Radiology and Surgery. 2022;17(12):2173-2181
  77. 77. Qian L, Wu JY, DiMaio SP, Navab N, Kazanzides P. A review of augmented reality in robotic-assisted surgery. IEEE Transactions on Medical Robotics and Bionics. 2019;2(1):1-16
  78. 78. Gorpas D et al. Autofluorescence lifetime augmented reality as a means for real-time robotic surgery guidance in human patients. Scientific Reports. 2019;9(1):1187
  79. 79. Dibble CF, Molina CA. Device profile of the XVision-spine (XVS) augmented-reality surgical navigation system: Overview of its safety and efficacy. Expert Review of Medical Devices. 2021;18(1):1-8
  80. 80. Peh S et al. Accuracy of augmented reality surgical navigation for minimally invasive pedicle screw insertion in the thoracic and lumbar spine with a new tracking device. The Spine Journal. 2020;20(4):629-637
  81. 81. Soler L et al. Virtual reality and augmented reality in digestive surgery. In: Third IEEE and ACM International Symposium on Mixed and Augmented Reality. Arlington, VA, USA: IEEE; 2004. pp. 278-279
  82. 82. Arjomandi Rad A et al. Extended, virtual and augmented reality in thoracic surgery: A systematic review. Interactive Cardio Vascular and Thoracic Surgery. 2022;34(2):201-211
  83. 83. Peden RG, Mercer R, Tatham AJ. The use of head-mounted display eyeglasses for teaching surgical skills: A prospective randomised study. International Journal of Surgery. 2016;34:169-173
  84. 84. Barteit S, Lanfermann L, Bärnighausen T, Neuhann F, Beiersmann C. Augmented, mixed, and virtual reality-based head-mounted devices for medical education: Systematic review. JMIR Serious Games. 2021;9(3):e29080
  85. 85. Gao Y, Kruger U, Intes X, Schwaitzberg S, De S. A machine learning approach to predict surgical learning curves. Surgery. 2020;167(2):321-327
  86. 86. Cheng H et al. Prolonged operative duration is associated with complications: A systematic review and meta-analysis. Journal of Surgical Research. 2018;229:134-144
  87. 87. Darzi A, Smith S, Taffinder N. Assessing operative skill: Needs to become more objective. BMJ. 1999;318:887-888
  88. 88. Hu Y-Y et al. Complementing operating room teaching with video-based coaching. JAMA Surgery. 2017;152(4):318-325
  89. 89. Hashimoto DA et al. Computer vision analysis of intraoperative video: Automated recognition of operative steps in laparoscopic sleeve gastrectomy. Annals of Surgery. 2019;270(3):414
  90. 90. Garrow CR et al. Machine learning for surgical phase recognition: A systematic review. Annals of Surgery. 2021;273(4):684-693
  91. 91. Azari DP et al. Modeling surgical technical skill using expert assessment for automated computer rating. Annals of Surgery. 2019;269(3):574
  92. 92. Loftus TJ et al. Association of postoperative undertriage to hospital wards with mortality and morbidity. JAMA Network Open. 2021;4(11):e2131669-e2131669
  93. 93. Ren Y et al. Performance of a machine learning algorithm using electronic health record data to predict postoperative complications and report on a mobile platform. JAMA Network Open. 2022;5(5):e2211973-e2211973
  94. 94. Shahian DM et al. Predictors of long-term survival after coronary artery bypass grafting surgery: Results from the Society of Thoracic Surgeons adult cardiac surgery database (the ASCERT study). Circulation. 2012;125(12):1491-1500
  95. 95. Forte JC et al. Comparison of machine learning models including preoperative, intraoperative, and postoperative data and mortality after cardiac surgery. JAMA Network Open. 2022;5(10):e2237970-e2237970
  96. 96. Azadfard M, Huecker MR, Leaming JM. Opioid Addiction. Treasure Island (FL): StatPearls Publishing; 2022
  97. 97. Kunze KN, Polce EM, Alter TD, Nho SJ. Machine learning algorithms predict prolonged opioid use in opioid-naïve primary hip arthroscopy patients. JAAOS Global Research & Reviews. 2021;5(5):e21
  98. 98. Lötsch J et al. Machine-learning-derived classifier predicts absence of persistent pain after breast cancer surgery with high accuracy. Breast Cancer Research and Treatment. 2018;171:399-411
  99. 99. Anderson AB, Grazal CF, Balazs GC, Potter BK, Dickens JF, Forsberg JA. Can predictive modeling tools identify patients at high risk of prolonged opioid use after ACL reconstruction? Clinical Orthopaedics and Related Research. 2020;478(7):1603
  100. 100. Gabriel RA et al. Machine learning approach to predicting persistent opioid use following lower extremity joint arthroplasty. Regional Anesthesia & Pain Medicine. 2022;47(5):313-319
  101. 101. Kokkotis C et al. Identifying gait-related functional outcomes in post-knee surgery patients using machine learning: A systematic review. International Journal of Environmental Research and Public Health. 2022;20(1):448
  102. 102. Jones G et al. Gait comparison of unicompartmental and total knee arthroplasties with healthy controls. The Bone & Joint Journal. 2016;98(10_Supple_B):16-21
  103. 103. Martins M, Santos C, Costa L, Frizera A. Feature reduction with PCA/KPCA for gait classification with different assistive devices. International Journal of Intelligent Computing and Cybernetics. 2015;8(4):363-382
  104. 104. Kokkotis C et al. Leveraging explainable machine learning to identify gait biomechanical parameters associated with anterior cruciate ligament injury. Scientific Reports. 2022;12(1):6647
  105. 105. Cao Z, Simon T, Wei S-E, Sheikh Y. Realtime multi-person 2d pose estimation using part affinity fields. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE; 2017. pp. 7291-7299
  106. 106. Chen B et al. Computer vision and machine learning-based gait pattern recognition for flat fall prediction. Sensors. 2022;22(20):7960
  107. 107. Kwon J, Lee Y, Lee J. Comparative study of markerless vision-based gait analyses for person Re-identification. Sensors. 2021;21(24):8208
  108. 108. Ng K-D, Mehdizadeh S, Iaboni A, Mansfield A, Flint A, Taati B. Measuring gait variables using computer vision to assess mobility and fall risk in older adults with dementia. IEEE Journal of Translational Engineering in Health and Medicine. 2020;8:1-9
  109. 109. Yang G-Z et al. Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy. Science Robotics. 2017;2:eaam8638
  110. 110. Saeidi H et al. Autonomous robotic laparoscopic surgery for intestinal anastomosis. Science Robotics. 2022;7(62):eabj2908
  111. 111. Shademan A, Decker RS, Opfermann JD, Leonard S, Krieger A, Kim PC. Supervised autonomous robotic soft tissue surgery. Science Translational Medicine. 2016;8(337):337ra64
  112. 112. Hu D, Gong Y, Seibel EJ, Sekhar LN, Hannaford B. Semi-autonomous image-guided brain tumour resection using an integrated robotic system: A bench-top study. The International Journal of Medical Robotics and Computer Assisted Surgery. 2018;14(1):e1872
  113. 113. Tapia J, Knoop E, Mutný M, Otaduy MA, Bächer M. Makesense: Automated sensor design for proprioceptive soft robots. Soft Robotics. 2020;7(3):332-345
  114. 114. Hua J, Zeng L, Li G, Ju Z. Learning for a robot: Deep reinforcement learning, imitation learning, transfer learning. Sensors. 2021;21(4):1278
  115. 115. Rivera C, Popek KM, Ashcraft C, Staley EW, Katyal KD, Paulhamus BL. Learning generalizable behaviors from demonstration. Frontiers in Neurorobotics. 2022;16:932652
  116. 116. Kumar A, Hong J, Singh A, Levine S. When should we prefer offline reinforcement learning over behavioral cloning? arXiv. Vol. abs/2204.05618. 2022
  117. 117. Silver D et al. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science. 2018;362(6419):1140-1144
  118. 118. He Z et al. Zero-shot skill composition and simulation-to-real transfer by learning task representations. arXiv. Vol. abs/1810.02422. 2018
  119. 119. Tan A et al. Nanotechnology and regenerative therapeutics in plastic surgery: The next frontier. Journal of Plastic, Reconstructive & Aesthetic Surgery. 2016;69(1):1-13
  120. 120. Roduner E. Size matters: Why nanomaterials are different. Chemical Society Reviews. 2006;35(7):583-592
  121. 121. Hofferberth SC, Grinstaff MW, Colson YL. Nanotechnology applications in thoracic surgery. European Journal of Cardio-Thoracic Surgery. 2016;50(1):6-16
  122. 122. Krůpa P, Řehák S, Diaz-Garcia D, Filip S. Nanotechnology–new trends in the treatment of brain tumours. Acta Medica. 2015;57(4):142-150
  123. 123. Zhang G, Zeng X, Li P. Nanomaterials in cancer-therapy drug delivery system. Journal of Biomedical Nanotechnology. 2013;9(5):741-750
  124. 124. Xu Y et al. Application of nanotechnology in the diagnosis and treatment of bladder cancer. Journal of Nanobiotechnology. 2021;19:1-18
  125. 125. Khawaja AM. The legacy of nanotechnology: Revolution and prospects in neurosurgery. International Journal of Surgery. 2011;9(8):608-614
  126. 126. Adir O et al. Integrating artificial intelligence and nanotechnology for precision cancer medicine. Advanced Materials. 2020;32(13):1901989
  127. 127. Wang J et al. Hepatocellular carcinoma growth retardation and PD-1 blockade therapy potentiation with synthetic high-density lipoprotein. Nano Letters. 2019;19(8):5266-5276
  128. 128. Wang K et al. Combination of ablation and immunotherapy for hepatocellular carcinoma: Where we are and where to go. Frontiers in Immunology. 2021;12:792781
  129. 129. Binnig G, Quate CF, Gerber C. Atomic force microscope. Physical Review Letters. 1986;56(9):930
  130. 130. Song B, Yang R, Xi N, Patterson KC, Qu C, Lai KWC. Cellular-level surgery using nano robots. Journal of Laboratory Automation. 2012;17(6):425-434
  131. 131. Li G, Xi N, Wang DH. In situ sensing and manipulation of molecules in biological samples using a nanorobotic system. Nanomedicine: Nanotechnology, Biology and Medicine. 2005;1(1):31-40
  132. 132. Wen CK, Goh MC. AFM nanodissection reveals internal structural details of single collagen fibrils. Nano Letters. 2004;4(1):129-132
  133. 133. Yang R et al. Cellular level robotic surgery: Nanodissection of intermediate filaments in live keratinocytes. Nanomedicine: Nanotechnology, Biology and Medicine. 2015;11(1):137-145
  134. 134. Brodie A, Vasdev N. The future of robotic surgery. The Annals of The Royal College of Surgeons of England. 2018;100(Supplement 7):4-13
  135. 135. Leong TG, Randall CL, Benson BR, Bassik N, Stern GM, Gracias DH. Tetherless thermobiochemically actuated microgrippers. Proceedings of the National Academy of Sciences. 2009;106(3):703-708
  136. 136. Shapiro AJ et al. International trial of the Edmonton protocol for islet transplantation. New England Journal of Medicine. 2006;355(13):1318-1330
  137. 137. Im B-H et al. The effects of 8-arm-PEG-catechol/heparin shielding system and immunosuppressive drug, FK506 on the survival of intraportally allotransplanted islets. Biomaterials. 2013;34(8):2098-2106
  138. 138. Park H et al. Polymeric nano-shielded islets with heparin-polyethylene glycol in a non-human primate model. Biomaterials. 2018;171:164-177
  139. 139. Izadi Z et al. Tolerance induction by surface immobilization of Jagged-1 for immunoprotection of pancreatic islets. Biomaterials. 2018;182:191-201
  140. 140. Sun H et al. Nanotechnology-enabled materials for hemostatic and anti-infection treatments in orthopedic surgery. International Journal of Nanomedicine. 2018;13:8325
  141. 141. Zabaglo M, Sharman T. Postoperative Wound Infection. Treasure Island (FL): StatPearls Publishing; 2021
  142. 142. Rai MK, Deshmukh S, Ingle A, Gade A. Silver nanoparticles: The powerful nanoweapon against multidrug-resistant bacteria. Journal of Applied Microbiology. 2012;112(5):841-852
  143. 143. Castiglioni S, Cazzaniga A, Locatelli L, Maier JA. Silver nanoparticles in orthopedic applications: New insights on their effects on osteogenic cells. Nanomaterials. 2017;7(6):124
  144. 144. Herzog F et al. Mimicking exposures to acute and lifetime concentrations of inhaled silver nanoparticles by two different in vitro approaches. Beilstein Journal of Nanotechnology. 2014;5(1):1357-1370
  145. 145. Khalsa RK, Khashkhusha A, Zaidi S, Harky A, Bashir M. Artificial intelligence and cardiac surgery during COVID-19 era. Journal of Cardiac Surgery. 2021;36(5):1729-1733
  146. 146. Rampat R et al. Artificial intelligence in cornea, refractive surgery, and cataract: Basic principles, clinical applications, and future directions. Asia-Pacific journal of ophthalmology (Philadelphia, Pa.). 2021;10(3):268
  147. 147. Murthy VH, Krumholz HM, Gross CP. Participation in cancer clinical trials: Race-, sex-, and age-based disparities. JAMA. 2004;291(22):2720-2726
  148. 148. Crawford K, Calo R. There is a blind spot in AI research. Nature. 2016;538(7625):311-313
  149. 149. Wang DD et al. 3D printing, computational modeling, and artificial intelligence for structural heart disease. Cardiovascular Imaging. 2021;14(1):41-60
  150. 150. Murphy D, Saleh D. Artificial intelligence in plastic surgery: What is it? Where are we now? What is on the horizon? The Annals of The Royal College of Surgeons of England. 2020;102(8):577-580
  151. 151. Finlayson SG, Bowers JD, Ito J, Zittrain JL, Beam AL, Kohane IS. Adversarial attacks on medical machine learning. Science. 2019;363(6433):1287-1289
  152. 152. Biggio B, Roli F. Wild patterns: Ten years after the rise of adversarial machine learning. In: Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: Association for Computing Machinery; 2018. pp. 2154-2156
  153. 153. Cutillo CM et al. Machine intelligence in healthcare—Perspectives on trustworthiness, explainability, usability, and transparency. NPJ Digital Medicine. 2020;3(1):47
  154. 154. Pecqueux M et al. The use and future perspective of artificial intelligence—A survey among German surgeons. Frontiers in Public Health. 2022;10:982335
  155. 155. Price WN, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. 2019;322(18):1765-1766
  156. 156. Bahl M, Barzilay R, Yedidia AB, Locascio NJ, Yu L, Lehman CD. High-risk breast lesions: A machine learning model to predict pathologic upgrade and reduce unnecessary surgical excision. Radiology. 2018;286(3):810-818
  157. 157. Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Artificial intelligence in anesthesiology: Current techniques, clinical applications, and limitations. Anesthesiology. 2020;132(2):379-394
  158. 158. Myers TG, Ramkumar PN, Ricciardi BF, Urish KL, Kipper J, Ketonis C. Artificial intelligence and orthopaedics: An introduction for clinicians. The Journal of bone and joint surgery American. 2020;102(9):830
