Open access peer-reviewed chapter

Human-Machine Collaboration in AI-Assisted Surgery: Balancing Autonomy and Expertise

Written By

Gabriel Szydlo Shein, Ronit Brodie and Yoav Mintz

Submitted: 24 March 2023 Reviewed: 06 April 2023 Published: 28 April 2023

DOI: 10.5772/intechopen.111556


Abstract

Artificial Intelligence is already being actively utilized in some fields of medicine. Its entrance into the surgical realm is inevitable, sure to become an integral tool for surgeons in their operating rooms and in providing perioperative care. As the technology matures and AI-collaborative systems become more widely available to assist in surgery, the need to find a balance between machine autonomy and surgeon expertise will become clearer. This chapter reviews the factors that need to be held in consideration to find this equilibrium. It examines the question from the perspective of the surgeon and the machine individually, their current and future collaborations, as well as the obstacles that lie ahead.

Keywords

  • surgery
  • artificial intelligence
  • computer assisted surgery
  • robotic surgery
  • surgical technology

1. Introduction

Artificial Intelligence (AI) is an exponentially growing field that has already impacted many industries. Although the foundations of AI were laid out by Alan Turing as early as the Second World War, recent advances in machine learning and deep learning have bolstered the field, making it one of the most exciting areas of research and development in today’s technology landscape. AI focuses on developing systems to perform tasks that would normally require human intelligence, including activities such as problem solving, pattern recognition, decision making, and even creativity. In recent years, as AI’s popularity has increased, its impact on various elements of the medical industry has become more visible, a harbinger of the future integration of AI technology into the surgeon’s daily toolbox.

As this technology continues to mature, and integrates into surgical practice, the questions surrounding its role in the operating room will become more complex. While the primary question of “what can AI do for surgeons?” might soon have an obvious answer, it will open the door to the more nuanced inquiry of “how will surgeons adopt this technology and how can we mark the boundaries of what we should permit AI to do in the operating room?”

We herein discuss all the necessary information for the surgical community to understand the issues at hand surrounding AI, and we lay the framework to assist in making appropriate choices when it comes to balancing Human-AI collaboration in the operating room (OR). The chapter is divided into three sections: The human perspective of the collaboration, the machine side of the collaboration, and the balance between Surgeon and Machine.


2. Methodology

A thorough literature search was conducted utilizing the online databases of PubMed, Google Scholar, and ResearchGate, as well as relevant websites. The search terms used were “artificial intelligence,” “machine learning,” “deep learning,” “neural networks,” “computer vision,” “computer assisted surgery,” “machine automation,” “machine autonomy,” “surgery automation,” “surgery autonomy,” “robotic surgery,” “surgeon responsibility,” “surgeon psychology,” “surgical training,” “technology adoption,” and “levels of automation.” Inclusion criteria were peer-reviewed articles and book chapters published in the English language from 2018 to 2023. As this chapter includes a thorough examination of current technologies, product and company websites that led to further articles were also included. Articles were excluded if they were not published in the English language, were not related to the subject, or were not available in full text.

Our search initially yielded 6887 articles. After excluding articles and removing duplicates, all abstracts were screened, resulting in 60 full text articles that met our inclusion criteria.

The snowball sampling technique was utilized to identify additional relevant articles by reviewing the reference lists of the included articles. This resulted in an additional 10 articles that met our inclusion criteria.


3. The human perspective of the collaboration

3.1 The surgeon’s responsibility in the operating room

Surgeons are trained to make complex decisions under pressure and to act on those decisions with appropriate speed. This requires constant situation assessment and analysis, and reassessment and reanalysis [1]. When leading a multidisciplinary team, surgeons are held responsible for their patients’ welfare, safety, and well-being. From the very beginning of a surgeon’s professional life this personal responsibility for their patients’ outcomes is instilled in them, and it is constantly reinforced throughout their career [2, 3]. The American College of Surgeons describes the surgical profession as one of responsibility and leadership, where the surgeon is ultimately in charge of every aspect of the patient’s well-being, even when not directly involved [4, 5]. While some of these responsibilities might be obvious, others may be less so, as laid out in Table 1 [6].

Responsibility of the surgeon to ensure patient safety:

  • Preoperative preparation: Oversee proper preoperative preparation of the patient with standardized protocols. Achieving optimal preoperative preparation frequently requires consultation with other physicians from different disciplines; however, the responsibility for attaining this goal rests with the surgeon.
  • Informed consent: Obtain informed consent from the patient regarding the indication for surgery and the surgical approach, with known risks.
  • Consultation with OR team: Consult with anesthesia and nursing teams to ensure patient safety. Oversee all appropriate components of the surgical time-out (identification of patient, procedure, approach, etc.).
  • Safe and competent operation: Lead the surgical team in performing the operation safely and competently, mitigating the risks involved.
  • Appropriate anesthesia: Ensure the anesthesia type is appropriate for the patient and procedure, including planning the optimal anesthesia and postoperative analgesic method with the anesthesia team.
  • Specimen labeling and management: Oversee specimen collection, labeling, and management, with completion of the pathology requisition.
  • Disclose operative findings: Disclose operative findings and the expected postoperative course to the patient.
  • Postoperative care: Personally participate in and direct the postoperative care, including the management of postoperative complications. If some aspects of the postoperative care are best delegated to others, the surgeon must maintain an essential coordinating role.
  • Follow-up: Ensure appropriate long-term follow-up for evaluation and management of possible extenuating problems associated with or resulting from the patient’s surgical care.

Table 1.

Responsibilities of the surgeon as the treating physician.

The tremendous weight of carrying all this responsibility often creates a psychological mindset in which the delegation of responsibilities becomes a difficult task that must be managed with great assiduity. Surgeons learn via their training to “trust no one,” to delegate tasks with caution, and to personally review all data [7]. This constant and obviously essential need for oversight raises several questions: What does it take for surgeons to feel comfortable delegating responsibility? When do surgeons feel at ease relinquishing part of this control? And subsequently, what does it take for the surgical profession to adopt new technologies that take part of this burden of responsibility away from the surgeon?

3.2 The surgeon as an innovator and the process of adopting new technology

Although surgical training is based on apprenticeship, where the student learns from the master and replicates the master’s actions exactly, the advancement of surgical capabilities has always relied heavily on the innovation and adoption of new technologies. Throughout history, the desire to help their patients has motivated surgeons worldwide to be creative in finding new solutions to their problems [8]. The evolution and adoption of change within actual surgical practice, however, is rather complicated. Some surgeons are constantly innovating by customizing therapies and procedures to meet the uniqueness of each patient, while most continue to follow the path laid out by their mentors, often reluctant to adopt new technologies. As such, the integration of novel technologies or procedures into a surgeon’s daily practice is influenced by many factors, including the possible benefit the innovation provides to the patient, the patient’s demand for it, the learning curve required for skill acquisition by the surgeon, and the amount of disruption it would create within their practice [9]. Take laparoscopic cholecystectomy, for example; it took only four years from its introduction to become the gold standard for gallbladder removal, as the procedure had obvious and very tangible benefits for patients compared to open cholecystectomy, and the amount of disruption to surgical practice was low. In contrast, laparoscopic simple nephrectomy attained a mere 20% acceptance rate among surgeons thirteen years after its introduction, most likely due to surgeons’ lack of perceived benefit in changing the standard of care [10]. The question then arises: how does one promote a new concept and move it forward so that it can be adopted?

The process by which a cohort adopts a new concept (idea, technology, procedure, etc.) can be studied and understood with the Technology Adoption Curve (TAC). The TAC is a sociological model that divides individuals into five groups with different desires and demands, and explains what it takes for each of these groups to adopt an innovation. These five groups are the innovators, the early adopters, the early majority, the late majority, and the laggards (Figure 1) [11].

Figure 1.

Technology adoption curve. The bell curve represents the variation in adoption, and the S-curve represents the accumulated adoption over time.

The TAC model, used to describe adoption in the general population, can be extrapolated and applied to the adoption of technology by surgeons [12].

Innovator surgeons are enthusiastic about new technologies and are willing to take the risk of failure. They are willing to test a new procedure even if it is in experimental stages. Early adopters are the trendsetters, they are also comfortable with risk, but they want to form a solid opinion of the technology before they vocally support it. These surgeons are comfortable trying a novel procedure that has enough published literature to be regarded as safe.

Surgeons in the early majority are interested in innovation but want definitive proof of effectiveness. The benefits of a procedure are more important to them than its novelty. The late majority are averse to risk; as such, they need to be convinced that the new procedure is worth their time. Laggards are skeptical and wary of change, making them reluctant to adopt anything new and preferring to continue with what is familiar to them.

In an effort to better relate the model to evidence-based practices like surgery, Barkun et al. proposed some adaptations, allowing for critical appraisal and assessment of the technology. In their model every stage would require peer review, thereby promoting a more scientific approach to the application of new technology in surgery (Table 2) [13].

Technology adoption curve group and the corresponding stage of surgical innovation:

  • Innovators: Development
  • Early Adopters: Exploration
  • Early Majority: Assessment
  • Late Majority: Long-term implementation
  • Laggards: (no corresponding stage)

Table 2.

Stages of surgical innovation according to Barkun et al. and how they compare to the TAC model.

On average, for a new concept to be considered adopted, 20% of people must have already begun to use the technology [9]; in other words, some but not all people in the early majority group of the TAC. For this to happen with AI in the OR, the benefit of the technology must be proven beyond the proof-of-concept stage. Once the technology has been proven to be safe and beneficial, it will be easier to convince more individuals to try it, thereby promoting more widespread acceptance, adoption, and eventually integration into daily practice.
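As an illustration (not part of the original model’s presentation), the cumulative S-curve of adoption is commonly modeled with a logistic function, and Rogers’ classic adopter shares (2.5% innovators, 13.5% early adopters, 34% early majority) place the 20% threshold early within the early majority. A minimal Python sketch, with midpoint and rate parameters chosen arbitrarily:

```python
import math

def cumulative_adoption(t, midpoint=0.0, rate=1.0):
    """Logistic S-curve commonly used to model cumulative adoption over time."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# Rogers' adopter shares: the first two groups together cover 16% of the
# population, so the ~20% "adopted" threshold falls inside the early majority.
innovators, early_adopters, early_majority = 0.025, 0.135, 0.34
threshold = 0.20
assert innovators + early_adopters < threshold < innovators + early_adopters + early_majority
```

The exact parameters of the curve differ per technology; only the S-shape, and the position of the 20% threshold relative to the adopter groups, is the point being illustrated.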


4. The machine side of the collaboration

4.1 The basics of artificial intelligence

Artificial intelligence (AI) is defined as the simulation of human intelligence in machines programmed to think and learn like humans. The aim of AI is to create machines with the ability to perceive their environment, reason about it, and act in ways that would normally require human intelligence, or to process data whose scale exceeds what humans can analyze [14]. In other words, the aim is to create systems that have a certain degree of autonomy [15]. Within the framework of autonomy in AI there is a hierarchy comprising three main tiers:

4.1.1 Artificial narrow intelligence

Systems designed to perform a specific task or solve a specific problem. As such, they have a narrow range of parameters allowing them to simulate human behaviors in specific contexts such as face or speech recognition and processing, voice assistance, or autonomous driving. They are “intelligent” only within the specific task they are programmed to do.

4.1.2 Artificial general intelligence

Systems designed to perform any intellectual task that a human can [16]. Apart from mimicking human intelligence, these systems have the ability to learn and adapt. Additionally, they can think, learn, understand, and act in a way that is indistinguishable from that of a human being in any situation.

4.1.3 Artificial superintelligence

A system designed to surpass human intelligence in every aspect with the ability to improve its own capabilities rapidly [17]. This system is designed to have consciousness and be sentient [18], surpassing humans in every way: science, analysis, medicine, sports, as well as emotions and relationships.

While the tiers of AI are each fascinating in their own way, currently the only type of AI that exists is Artificial Narrow Intelligence. The remaining tiers are merely theoretical and philosophical concepts; as such, they have yet to be achieved and are beyond the scope of this chapter.

To further understand how AI works, it is important to discuss the concepts of Machine Learning, Artificial Neural Networks, and Deep Learning. These terms refer to different techniques used to train machines on data, each one building upon the prior one in order to reach more complex results, and together they form the basis of AI systems [19, 20, 21].

  • Machine Learning (ML) is the process by which an AI system can automatically improve with experience; this process allows a system to learn from data without being explicitly programmed. Machine learning algorithms can analyze large amounts of information to identify patterns and make predictions or decisions based on that analysis.

  • Artificial Neural Network (ANN) is a type of machine learning algorithm based on a collection of connected units called “neurons” that loosely model neurons in the biological brain. Each connection can transmit a signal to other “neurons” which in turn receive the signal, process it and forward a new signal to other neurons connected to it. A neuron can only transmit its processed signal if it crosses a certain threshold, a process similar to the depolarization of biological neurons, hence the term neural network.

  • Deep Learning (DL) is a type of neural network that is designed to learn and make decisions based on multiple hidden layers of interconnected neurons. Deep learning algorithms are capable of learning and representing complex relationships in multiple datasets automatically (Figure 2).
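To make the neuron-and-layers picture concrete, here is a minimal forward pass written in plain Python. It is purely an illustrative sketch with arbitrary, untrained weights, not any particular production system: each “neuron” computes a weighted sum of its inputs and passes it through a sigmoid activation, the smooth analogue of the firing threshold described above, and stacking layers of such neurons yields the hidden layers of a deep network.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, squashed
    by a sigmoid activation (the smooth analogue of a firing threshold)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """A fully connected layer: every neuron receives all the inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Tiny "deep" network: 2 inputs -> hidden layer of 2 neurons -> 1 output.
# The weights here are arbitrary illustrative values, not learned ones.
x = [0.5, -1.2]
hidden = layer(x, weight_rows=[[0.8, -0.4], [0.3, 0.9]], biases=[0.1, -0.2])
output = neuron(hidden, weights=[1.5, -0.7], bias=0.05)
```

Training, i.e., machine learning proper, would consist of adjusting the weights and biases from data (for example by gradient descent) rather than fixing them by hand as done here.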

Figure 2.

Deep learning example of an artificial neural network where an image is pushed through several algorithms in hidden layers. Once all layers are processed the outcome can be reached, in this case a definition of the image.

Now that the techniques that serve as the basis of AI have been clarified, it is important to understand how they are applied to create actual usable systems that can perform a task. These basic applications of AI include Natural Language Processing, Computer Vision, and Expert Systems, which leverage Machine Learning, Artificial Neural Networks and Deep Learning to solve specific problems [22, 23, 24].

  • Natural Language Processing (NLP) is the ability of AI systems to understand, process, and interpret human language.

  • Computer Vision (CV) is the ability of AI systems to interpret and understand visual data, such as images and videos.

  • Expert Systems (ES) is the ability of AI systems to emulate the decision-making capacity of a human expert.

For the purpose of simplification, one can say that there are different techniques to train artificial intelligence (ML, ANN, and DL), which are applied to perform specific tasks (CV, NLP, ES) in order to solve a specific problem (Figure 3).

Figure 3.

Relationship between basic concepts of artificial intelligence.

Most of the cutting-edge AI systems available today use a variety of these algorithms in tandem to accomplish their tasks. Table 3 presents examples of technologies that use specific AI algorithms for each concept discussed above.

  • Machine Learning. Analogy: teaching a child to recognize an object by showing it pictures of the object, without telling the child what it is. Real world: Netflix, Inc. uses machine learning to recommend personalized content to each user [25]. Medical world: Owkin, Inc. [26] is a company that uses machine learning to improve drug discovery and clinical trial design.
  • Neural network. Analogy: the human brain processes and interprets information from the senses to make decisions and control the body. Real world: AlphaZero™ by Deepmind, Ltd. is a chess engine which, after 24 hours of training, defeated world-champion chess programs [27]. Medical world: PhysIQ, Inc. is a company using neural networks to continuously monitor at-risk patients remotely and alert their physicians in real time [28].
  • Deep learning. Analogy: a student who starts learning basic concepts in class and continues to self-teach, building up to more complex ideas. Real world: Tesla, Inc. uses deep learning algorithms to constantly improve their cars’ self-driving system [29]. Medical world: AIdoc, Ltd. [30] is a company that uses deep learning algorithms for image analysis to detect and prioritize acute abnormalities in radiology.
  • Natural language processing. Analogy: a translator between people who speak different languages. Real world: Alexa™ by Amazon, Inc. is a virtual assistant that can understand, process, and respond to language prompts [31]. Medical world: the UNITE algorithm developed at Harvard University can automatically assign ICD codes based on clinical notes without human supervision [32].
  • Computer vision. Analogy: a child that can see any picture of a dog and know it’s a dog. Real world: Google Lens™ [33] can process an image and offer actions depending on what it sees. Medical world: DeePathology, Ltd. has created an algorithm that can autonomously detect H. pylori in pathology slides [34].
  • Expert systems. Analogy: a firm that has a lawyer on retainer to answer any question at any time. Real world: AITax, Ltd. has an AI system that can automatically check and file user taxes [35]. Medical world: Merative™ (formerly IBM Watson Health) [36] is a clinical decision-support system for diagnosis and treatment planning.

Table 3.

AI basic concepts with examples used in the real world and in the medical world today.

4.2 AI in medicine and its current applications

Today AI is already being utilized in medicine, and thus far its applications have shown promising results, as demonstrated by improved patient outcomes, optimized clinical workflows, and accelerated research. To date, there are 521 AI-enabled medical devices approved by the FDA [37], with the overwhelming majority of these products being in the field of radiology, used to process images for all pathologies, excluding cancer [38]. Other AI applications currently available are used in the fields of anesthesiology, cardiology, gastroenterology, general and plastic surgery, hematology, microbiology, neurology, obstetrics and gynecology, ophthalmology, orthopedics, pathology, and urology. Given the broad spectrum of applications within varying fields of medicine, AI technologies can be classified not only by the type of technique they use but also by the end goal they achieve; these goals include everything from screening and diagnosis to triage and clinical trial management. Multiple applications are currently being utilized and under further development, including:

Computer-Aided Detection (CADe) technology is being developed to aid in marking/localizing regions that may reveal specific abnormalities. Its goal is to elevate the sensitivity of screening tests. Curemetrix, Inc.’s product cmAssist™, for example, has shown a substantial and statistically significant improvement in radiologists’ accuracy and sensitivity for detection of breast cancers that were originally missed [39].

Computer-Aided Diagnosis (CADx) is being developed to help characterize or assess diseases, disease type, severity, stage, and progression. An example of this technology is GI Genius™, an intelligent endoscopy module by Medtronic plc that can analyze a colonoscopy in real time and estimate the possible histology of colorectal polyps [40].

Computer-Aided Triage (CADt) aids in prioritizing the detection of time-sensitive conditions. VIZ™ LVO is a software by Viz.ai, Inc. that detects large vessel occlusion strokes in brain CT scans and directly alerts the relevant specialists in a median time of 5 minutes and 45 seconds, as opposed to the roughly 1 hour of today’s standard of care, significantly shortening the time to diagnosis and treatment [41].

Computer-Aided Prognosis (CAP) can provide personalized predictions about a patient’s disease progression. The EU-funded CLARIFY Project (Cancer Long Survivor Artificial Intelligence Follow-Up) is working to harness big data and AI to provide accurate and personalized estimates of a cancer patient’s risk for complications, including rehospitalization, cancer recurrence, treatment response, treatment toxicity, and mortality [42].

Clinical Decision Support Systems (CDSS) are being employed to aid healthcare providers in the diagnosis and treatment of patients in the most effective way possible. Babylon AI, by Babylon, Inc., for example, is a system that uses data to determine, and provide information about, the likely cause of people’s symptoms. It can then suggest possible next steps, including treatment options. The system has demonstrated its ability to diagnose as well as, or even better than, physicians [43].

Remote Patient Monitoring (RPM) systems are being used to monitor patients, and Virtual Rehabilitation is being developed to help patients recover from illnesses and injuries. Systems like the CardiacSense Ltd. Medical Watch continuously monitor heart rate and blood pressure, process the data, and update the physician in real time. This noninvasive monitoring allows the physician to adjust treatment according to data that would not otherwise have been available [44].

Health Information Technology (HIT) is being employed to improve disease prevention and population health. Medial EarlySign, Ltd. mines data from electronic medical records for early detection of patients with high risk of colorectal cancer. Patients determined to have a high risk by the system are flagged and consequently scheduled for colonoscopy. This system has achieved early detection of an additional 7.5% of colorectal cancers that would otherwise have been caught in more advanced stages [45].

Clinical Trials Management Systems (CTMS) are being developed to help streamline all aspects of clinical trials, including preclinical drug discovery, clinical study protocol optimization, trial participant management, and data collection and management. These systems enable researchers to improve study design by guiding the choice of the best study design, determining the number of patients needed for each study arm, optimizing candidate selection, and tracking and analyzing large amounts of data. CTMS are helping researchers create stronger and more efficient trials [46].

As demonstrated by the above systems, the implementation of these types of AI has significantly and measurably improved the field of medicine. As the benefit of AI continues to be appreciated, and as its role in providing better and more efficient care to patients is better understood, more professionals will begin to utilize it. With improved acceptance, adoption along the model that Barkun et al. [13] proposed will continue to shift towards long-term implementation.

4.3 Potential benefits of AI in surgery

Improved patient care has historically been linked to technological advancements. Laparoscopic cameras have evolved from simple VHS quality to HD and 4K cameras, and even 3D vision with near-infrared capabilities that allow the surgeon to see beyond the naked eye. Laparoscopic instruments evolved from simple straight and rigid instrumentation to articulating and flexible tools, providing a far greater range of motion. Standardization and precision surgery have infiltrated the OR in the form of staplers for the creation of anastomoses, advanced energy tools for cutting and coagulation, and robotic-assisted surgery that combines all of the above technologies to enhance human precision. Most recently, AI has started to appear in the surgical field, albeit in the perioperative setting. These systems are helping surgeons with decision-making processes both pre- and postoperatively by predicting complications and managing different aspects of patient variables [47]. Nevertheless, AI has yet to penetrate the walls of the OR.

The disparity between the advancement of AI in surgery and in other fields of medicine probably exists because most applicable AI technologies today are focused on vision and reporting, i.e., diagnosis and big data analysis. Surgery at its core is about both vision and action, which presents a much more complex challenge. This challenge, however, has not stopped research efforts in the field of Computer Assisted Surgery. A PubMed query revealed that in 2022 there were more than 5200 publications discussing AI in surgery, and according to The Growth Opportunities in Artificial Intelligence and Analytics in Surgery study, by 2024 the AI market for surgery will reach $225.4 million [48].

Prototypes, proof of concept and pilot studies are being developed around the world, focusing mainly on improving patient safety and refining workflows in the OR [49]. There are already published reports of AI projects in Expert Systems, Computer Vision, image classification, as well as data acquisition and management that show promising results. Studies have reported success of Computer Vision systems for recognition of surgical tools, surgical phases and anatomic landmarks.

Research on videos of laparoscopic cholecystectomy, for example, has reported successful recognition of tools such as graspers, hooks, and dissectors; other studies have succeeded in phase recognition during laparoscopic cholecystectomy. The tested systems have demonstrated the ability to understand and report when the surgeon is dissecting the cystic duct, separating the gallbladder from the hepatic bed, or removing it from the body. More advanced systems have demonstrated the ability to recognize and mark the critical view of safety [50, 51].

While these research efforts are certainly demonstrating promising results, the application of AI within the operating room itself remains in its infancy.


5. The balance between human and machine

When trying to find an adequate balance between human and machine collaboration in the OR, the subject of autonomy is a natural starting point. Surgical teams today are comprised of highly specialized professionals who need to work in perfect synchrony for surgical procedures to run smoothly. The surgeon, as the leader, must find a balance between managing everything going on with a high degree of control, whilst still allowing for the autonomy and independence of each team member. Most surgeons are authoritative leaders within these teams, meaning that they retain control while still empowering the freedom of self-management, where each member can be engaged, motivated, and focused on their personal tasks at hand [52]. Although the surgeon is ultimately responsible, he or she will not intervene in a nurse’s needle or instrument counts, or check whether the anesthesia machine is working properly. Surgeons allow themselves to relinquish this direct control because, via a strong culture, values, and guidelines, they ultimately continue to provide the critical oversight and supervision needed for effective risk management [52].

Besides team management, the surgeon may be liable for equipment malfunctions; therefore, there is a certain underlying hesitancy in giving autonomy to machines. A 2013 systematic review of surgical technology and operating room safety failures found that up to 24% of errors within the OR are due to equipment malfunction [53]. This has not, however, stopped us from relinquishing control over certain parts of the surgery and delegating it to tools we cannot always fully control. Advanced hemostatic devices like the Ligasure™, for example, automatically adjust and discontinue the delivery of energy based on their own calculations, without any surgeon input. Similarly, the Signia™ Stapling System has Adaptive Firing™ technology that automatically and autonomously makes adjustments depending on the tissue conditions it senses [54, 55]. So, while there is hesitancy on the surgeons’ side toward adopting new autonomous devices, if surgeons are able to see the benefits, as with the Ligasure™ and Signia™ systems, these types of tools can in fact break the barrier for more advanced machines to enter daily OR practice.

5.1 Machine autonomy in other fields and how they can relate to the OR

Whether we are aware of it or not, AI is already affecting the world and making our everyday lives easier. It is there every time we search for something online. It automatically recognizes us in pictures we take, it recommends new music, food or products we will like. AI helps us hear what is written and read what is spoken. It protects us from credit card fraud and helps us make smarter investments. At home it manages our thermostat and decides when and where to vacuum clean the floors.

Moreover, machines are already responsible for millions of human lives on a daily basis, albeit indirectly. The oldest and most famous example is probably the autopilot in airplanes; multiple studies have shown that in 95% of commercial flights, pilots spend less than 440 seconds manually flying the plane [56, 57]. Other examples include the automation of emergency medical service dispatchers and the automation of trains and metros, where nearly a quarter of the world’s metro systems have at least one fully automated line in operation [58, 59, 60].

The advancement of automation in settings where human lives are at stake has pushed society to further debate the autonomy-versus-control issue. Depending on the field, different scales have been proposed to define levels of automation and autonomy. These scales are important because they help define the capabilities and limitations of a system’s autonomous features and establish expectations around the operator’s behavior and responsibilities. In addition, they have helped build trust and reduce anxiety around autonomous machines, while ensuring that legal and ethical concerns are considered as technologies continue to develop.

The most prominent autonomy scales come from the automotive and aviation industries. The main difference between the two is that the automotive scale treats all the systems in a car as a single unit, labeling the vehicle according to its capability as a whole [61], whereas in the aviation scale each system in an airplane receives a level of automation independent of the other autonomous systems on the same plane [62]. It is important to note that the scales defining the levels of autonomy in cars, trains, and planes share the same basic structure, adapted to each specific industry according to its complexity and the training of the average operator. All of them begin at level 0, where there is no automation at all, and increase gradually to level 5 (or a maximum of 4 for trains [63]), where the machine is fully autonomous and requires no human input at all.

In the field of surgery, the discussion of how to define levels of autonomy for systems within the OR has already begun, and although surgical systems are not yet as advanced as those of other industries, it is important to have a standardized language when referencing this subject. Yang GZ et al.'s proposal for defining the levels of autonomy in medical robotics [64] has been extremely effective in catalyzing this debate. Their scale is loosely based on the automotive levels of autonomy, as it grades a robotic system as a whole, depending on all of its capabilities.

The scale is composed of 6 levels (0–5) as follows:

  • Level 0: No autonomy. This includes currently available robots which are master-slave systems that follow the surgeon’s movements.

  • Level 1: Robot assistance. The robot provides some mechanical assistance, while the human has continuous and full control of the system.

  • Level 2: Task autonomy. The robot can autonomously perform specific tasks when asked by the surgeon.

  • Level 3: Conditional autonomy. A system suggests and then performs a number of tasks when approved by the surgeon.

  • Level 4: High autonomy. The robot can make medical decisions while being supervised by the surgeon.

  • Level 5: Full autonomy. The robot can perform an entire surgery without the need for a human surgeon.

Others have built upon this scale, using similar classification methods for surgical robot autonomy [65, 66]. Current technology in robotic surgery sits at Level 0, but once the objectives of the research projects described above are met, we may reach Levels 1 and 2.
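Because the scale is an ordered classification of whole systems, it can be sketched as a simple enumeration. The names and the `requires_surgeon` helper below are our illustrative choices, not part of the published scale:

```python
from enum import IntEnum

class RobotAutonomyLevel(IntEnum):
    """The six levels (0-5) proposed by Yang GZ et al. for medical robots."""
    NO_AUTONOMY = 0           # master-slave systems following the surgeon's movements
    ROBOT_ASSISTANCE = 1      # mechanical assistance under continuous human control
    TASK_AUTONOMY = 2         # performs specific tasks when asked by the surgeon
    CONDITIONAL_AUTONOMY = 3  # suggests, then performs tasks once approved
    HIGH_AUTONOMY = 4         # makes medical decisions under surgeon supervision
    FULL_AUTONOMY = 5         # performs an entire surgery without a human surgeon

def requires_surgeon(level: RobotAutonomyLevel) -> bool:
    """Only Level 5 removes the need for a human surgeon entirely."""
    return level < RobotAutonomyLevel.FULL_AUTONOMY

# Current commercial robotic surgery sits at Level 0
current = RobotAutonomyLevel.NO_AUTONOMY
```

Using an ordered enumeration makes comparisons between levels natural, e.g. `current < RobotAutonomyLevel.TASK_AUTONOMY` is true for today's systems.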

As surgeons, our experience in the OR environment is more comparable to flying a sophisticated airplane than to driving a car. A surgeon's professional responsibility is similar to that of a pilot; as such, the expected capabilities of autonomous systems in the OR might mirror those in an airplane's cockpit, where each system has its own level of autonomy, independent of the other systems available. It may therefore be more beneficial to expand the robotic surgery scale into a more comprehensive autonomy scale that encompasses all the types of technology used within the OR. To this end, one must first understand the flow of a surgical procedure. Every surgery is built from a series of phases, each of which is divided into tasks composed of specific steps (Figure 4). A surgeon's job in the OR is to perform a series of steps that complete the tasks in the different phases of a procedure. Once all steps within every task and phase have been fulfilled, the surgery is complete.

Figure 4.

Divisions of a surgery. The tasks and phases can be done in tandem or can be partially achieved and completed following completion of another task.
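This phase/task/step decomposition lends itself to a small nested data structure. The sketch below is illustrative only; the class names, fields, and completion logic are our assumptions about how one might model it:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A task is a named collection of steps; it is done when all steps are."""
    name: str
    steps: list[str] = field(default_factory=list)
    completed_steps: set[str] = field(default_factory=set)

    def complete_step(self, step: str) -> None:
        if step not in self.steps:
            raise ValueError(f"unknown step: {step}")
        self.completed_steps.add(step)

    @property
    def done(self) -> bool:
        return self.completed_steps == set(self.steps)

@dataclass
class Phase:
    """A phase groups tasks; tasks may be completed in any order (tandem)."""
    name: str
    tasks: list[Task] = field(default_factory=list)

    @property
    def done(self) -> bool:
        return all(t.done for t in self.tasks)

@dataclass
class Procedure:
    """The surgery is complete once every phase's tasks are complete."""
    name: str
    phases: list[Phase] = field(default_factory=list)

    @property
    def done(self) -> bool:
        return all(p.done for p in self.phases)
```

Note that nothing forces steps or tasks to finish in sequence, matching Figure 4's point that tasks and phases can be done in tandem or partially completed and finished later.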

The following scale (Table 4), adapted from the levels of automation in aviation, may be used to address the role of automation and autonomy in surgery, as it encompasses every type of technology. It describes each level of automation while taking into consideration the division of a surgery into phases, tasks, and steps.

| Level | Description | Supervision |
|---|---|---|
| 0: Complete human autonomy | The surgeon performs all steps in every task. | The surgeon is in complete control. |
| 1: Task assistance | The system executes a specific step of a task. | The surgeon is in complete control. |
| 2: Task automation | The surgeon delegates execution of multiple steps of a task to one or more systems. | The surgeon monitors the system during execution of the specific steps. The system requires the surgeon's active permission to advance to the next step. |
| 3: Phase automation | The surgeon delegates most steps of multiple tasks to the system, performing a limited set of actions in support of the tasks. | The surgeon monitors the system and responds if intervention is requested or required by the system. The system reviews its own work in order to advance to the next step. |
| 4: Full autonomy | The surgeon delegates execution of all steps of a task in any phase to the system, which can manage most steps of the task under most conditions. | The surgeon actively supervises the system and has full authority over it. |
| 5: Complete system autonomy | An automated system executes all steps of a task in all phases and can manage all steps of the task under complicated conditions. | The surgeon passively monitors the performance of the system. |

Table 4.

Levels of automation in surgery.

5.2 The P.A.D. taxonomy—A novel scale for automation and autonomy in the OR

Every surgical procedure is completed through a series of perceptions, actions, and decisions made by the surgeon. These three duties are important aspects of surgery and must be included in the conversation about AI and its application in surgery (Table 5).

| Level | Perception | Action | Decision |
|---|---|---|---|
| 0: Complete human autonomy | None | None | None |
| 1: Task assistance | The system has the ability of basic sensing. | The system performs a step in a specific task. | The system may give basic warnings. |
| 2: Task automation | The system has the ability of general phase, tool, and anatomy recognition. | The system performs multiple steps of a task within a phase. | The system understands the current step and reacts accordingly. |
| 3: Phase automation | The system recognizes most phases, tools, and anatomical variables, and can detect abnormal events. | The system can perform most tasks within a phase. | The system understands the current task, can predict next actions, and reacts accordingly. |
| 4: Full autonomy | The system can identify every aspect of a procedure under most conditions. | The system can perform all tasks of every phase in a procedure under most conditions. | The system has full understanding of the current phase under most conditions; it plans and reacts accordingly. |
| 5: Complete autonomy | The system recognizes every aspect and abnormal event of a procedure under any condition. | The system can perform all tasks of every phase of a procedure under any condition. | The system has full understanding of every aspect of the procedure and its variables; it plans and reacts to any event under any condition. |

Table 5.

The PAD (perception, action, decision) scale for surgical autonomy.

Perception refers to the recognition of variables in the surgical environment. Surgeons do this instinctively using their senses; systems do it with sensors, such as cameras with computer vision, heat detectors, or impedance measurements, that convert data from the physical environment into a computational system. As an example, a basic bipolar device transfers a fixed amount of energy through the target tissue for as long as the surgeon activates it, regardless of the state of the tissue. The surgeon must therefore use their own senses to see that the tissue has coagulated and stop applying energy, since over-activation after the tissue has already coagulated creates a different path of energy transfer that can damage nearby tissues. Advanced bipolar devices, in contrast, sense the tissue's impedance, regulating the amount of energy dispensed and automatically discontinuing the activation once the tissue is coagulated.
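In control terms, the advanced bipolar device described above closes a perception-action loop: sense impedance, deliver energy, stop once a coagulation threshold is reached. The following sketch is a simplified simulation; the threshold value, function names, and numbers are hypothetical illustrations, not device firmware:

```python
COAGULATION_IMPEDANCE = 300.0  # ohms; hypothetical cut-off for "tissue coagulated"

def activate_bipolar(read_impedance, deliver_energy, max_cycles: int = 100) -> int:
    """Deliver energy pulses until sensed impedance indicates coagulation."""
    cycles = 0
    while cycles < max_cycles:
        impedance = read_impedance()           # perception: sense tissue state
        if impedance >= COAGULATION_IMPEDANCE:
            break                              # decision: tissue coagulated, stop
        deliver_energy()                       # action: one more energy pulse
        cycles += 1
    return cycles

# Simulated tissue whose impedance rises as it coagulates
state = {"impedance": 50.0}

def fake_sensor():
    return state["impedance"]

def fake_pulse():
    state["impedance"] += 60.0

pulses = activate_bipolar(fake_sensor, fake_pulse)  # stops after 5 pulses here
```

The surgeon's role collapses to pressing "activate"; the loop itself embodies the device's Level 1 perception and action.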

Action refers to the maneuvers performed in order to execute a task. Surgeons perform actions depending on their perception of a specific scenario. Basic tools and systems can perform actions without the ability to sense; advanced systems can act on what they sense. The advanced bipolar device, for example, continues or stops the delivery of energy according to its own perception (sensing).

Decision refers to the capability of reaching a conclusion after considering different variables. Advanced systems can give real-time feedback to the surgeon during a procedure, either passively, in the form of alerts, warnings, and suggestions, or actively, in the form of whole-system halts or action restrictions. For example, an advanced laparoscopic stapler can sense the cartridge type inserted into the device as well as the distance and physical resistance between its two jaws. When the stapler is ready to fire, if these variables exceed its capabilities, it decides not to fire.
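The stapler's fire/no-fire decision can be expressed as a simple guard over its sensed variables. As before, the thresholds and names below are illustrative placeholders, not real device specifications:

```python
def stapler_may_fire(cartridge_ok: bool, jaw_gap_mm: float, resistance_n: float,
                     max_gap_mm: float = 2.0, max_resistance_n: float = 30.0) -> bool:
    """Decision: refuse to fire if any sensed variable exceeds the device's limits.

    The limits here are hypothetical placeholders for illustration only.
    """
    if not cartridge_ok:
        return False  # wrong or missing cartridge
    if jaw_gap_mm > max_gap_mm or resistance_n > max_resistance_n:
        return False  # tissue too thick or too resistant for this cartridge
    return True
```

The decision is active rather than advisory: the device restricts the surgeon's action instead of merely warning.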

With this taxonomy, one can easily describe a system's level of autonomy by combining the three sections into a shortened form. The current standard of care sits at P1A1D2: although AI is not yet commercially available in the OR, we do have advanced devices that perform certain actions autonomously. Applying the scale to these commercially available devices, advanced bipolar devices are Level 1 automated systems, as they measure the impedance of a tissue to automatically decide when a cycle is completed; a procedure using such a device would therefore be characterized as a P1A1D1 procedure. Smart staplers such as the Signia™ would also be Level 1 systems, and a surgeon using one would be performing a P1A1D2 procedure. As current technologies develop further and AI evolves into more clinical applications, procedures at the level of P2A1D3 may well be in our near future.
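Because the shorthand is fully regular, it can be generated and validated mechanically. The sketch below is an illustrative helper; the function names and regular expression are ours, not part of the taxonomy:

```python
import re

_PAD_RE = re.compile(r"^P([0-5])A([0-5])D([0-5])$")

def pad_code(perception: int, action: int, decision: int) -> str:
    """Combine the three PAD levels (each 0-5) into the shorthand, e.g. P1A1D2."""
    for level in (perception, action, decision):
        if not 0 <= level <= 5:
            raise ValueError("each PAD level must be between 0 and 5")
    return f"P{perception}A{action}D{decision}"

def parse_pad(code: str) -> tuple[int, int, int]:
    """Parse a shorthand like 'P2A1D3' back into its three component levels."""
    match = _PAD_RE.fullmatch(code)
    if match is None:
        raise ValueError(f"not a valid PAD code: {code!r}")
    return tuple(int(g) for g in match.groups())
```

For instance, `pad_code(1, 1, 2)` yields the chapter's P1A1D2 description of the current standard of care.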

It is important to note that under these Levels of Autonomy in Surgery, responsibility still always falls upon the surgeon, independently of the amount of control and relative autonomy that the system has. The natural path of the debate will bring surgeons (and healthcare professionals in general) to reach a consensus on how much control we are willing to give up, and on whether it would be ethical and legal for a surgeon to relinquish control and autonomy to the point where the burden of responsibility no longer rests on them.

5.3 Will AI replace surgeons?

As in any industry, medicine faces the perceived threat of AI taking control and pushing out human involvement. At the peak of the AI hype in radiology and pathology, many experts predicted that machines would soon replace humans in these fields, but they quickly revised their opinions with the realization that the technology had arrived to augment, rather than replace, their fields' possibilities [67]. The same is true in surgery: as part of adopting AI, surgeons will have to adapt training methods to include these new systems, not as a replacement for, but as an augmentation of, the surgeon's capabilities. It is therefore imperative that surgeons understand the capabilities and limitations of the technology, know how to use it and problem-solve with it, and gain enough exposure during training to feel comfortable adding it to their armamentarium. More importantly, as the technology advances, surgeons must retain the ability to perform a surgery, with all its necessary tasks, safely even without automated systems. This is particularly important for patient safety: imagine the problematic hypothetical scenario of a surgeon who cannot perform a cholecystectomy because, relying solely on AI, they never learned to recognize the triangle of safety. Conversely, imagine the exciting scenario in which a surgeon trained to recognize the triangle of safety uses AI tools to augment its visualization in a patient with complex anatomy, benefiting both the patient and the surgeon.

Fundamentally, it is possible to keep building on the surgeon's knowledge while maintaining control and delegating specific tasks to AI, augmenting the surgeon's capabilities rather than replacing them. As long as the human understands the capabilities and limitations of an AI system, as laid out above, the loss of control is mitigated.

It cannot be stressed enough that medicine is a profession of empathy. As physicians, we consider more than the patient's diagnosis when proposing appropriate treatment and management. Surgeons must weigh the patient's prognosis, social support system, surgical risks, and expectations in order to propose the best treatment. Moreover, during surgery we make countless decisions and subsequent actions based on the unique patient lying on our table. We cannot say that AI will never be able to consider a patient's environment, desires, and expectations, but an AI system capable of making such decisions with empathy remains, for now, only a theoretical concept.


6. Conclusion

The goal of this chapter was to present the factors that both humans and machines face in the evolution of surgery, as well as the balance needed for a fruitful collaboration. With artificial intelligence catapulted into medicine by a wave of new innovations, the transformation of the medical field is inevitable. The question of how AI will affect the surgical profession has become pivotal as the technology continues to grow, finding new ways to benefit surgeons and patients alike. AI should be viewed not as a threat, but rather as another tool in the surgeon's armamentarium, augmenting their skills for the further benefit of patients. The challenges facing AI integration into the operating room are not simple, but as presented herein, some AI is already at our fingertips. In this chapter we proposed a novel, comprehensive taxonomy encompassing every type of technology that could one day be used in the OR. The PAD taxonomy for surgical autonomy may help raise awareness among surgeons: with a simple method for stratifying AI, surgeons may feel more confident and more willing to adopt newer options by understanding what they are utilizing.

Questions remain regarding the legality and ethics of AI in surgery, specifically around autonomy and task delegation, and these may take time to understand and resolve. As with any innovation, it is imperative that the surgical community continue these discussions to find the ideal mode of collaboration between surgeons and advanced AI systems and ensure a beneficial partnership.

References

  1. Riskin DJ, Longaker MT, Gertner M, Krummel TM. Innovation in surgery: A historical perspective. Annals of Surgery. 2006;244(5):686-693. DOI: 10.1097/01.sla.0000242706.91771.ce
  2. Responsible surgery [Internet]. AAO-HNSF Bulletin. 2016. Available from: https://bulletin.entnet.org/home/article/21246793/responsible-surgery [Accessed: February, 2023]
  3. Randle RW, Ahle SL, Elfenbein DM, Hildreth AN, Lee CY, Greenberg JA, et al. Surgical trainees' sense of responsibility for patient outcomes: A multi-institutional appraisal. Journal of Surgical Research. 2020;255:58-65
  4. What is the job description for surgeons? [Internet]. ACS. Available from: https://www.facs.org/for-medical-professionals/education/online-guide-to-choosing-a-surgical-residency/guide-to-choosing-a-surgical-residency-for-medical-students/faqs/job-description/ [Accessed: February, 2023]
  5. Principles Underlying Perioperative Responsibility [Internet]. ACS. Available from: https://www.facs.org/about-acs/statements/principles-underlying-perioperative-responsibility/ [Accessed: February, 2023]
  6. Patient Safety in the Operating Room: Team Care [Internet]. ACS. Available from: https://www.facs.org/about-acs/statements/patient-safety-in-the-operating-room/ [Accessed: February, 2023]
  7. Lee TC, Reyna C, Shah SA, Lewis JD. The road to academic surgical leadership: Characteristics and experiences of surgical chairpersons. Surgery. 2020;168(4):707-713. DOI: 10.1016/j.surg.2020.05.022
  8. Gauderer MW. Creativity and the surgeon. Journal of Pediatric Surgery. 2009;44(1):13-20. DOI: 10.1016/j.jpedsurg.2008.10.006
  9. Wilson CB. Adoption of new surgical technology. BMJ. 2006;332(7533):112-114. DOI: 10.1136/bmj.332.7533.112
  10. Miller DC, Wei JT, Dunn RL, Hollenbeck BK. Trends in the diffusion of laparoscopic nephrectomy. Journal of the American Medical Association. 2006;295(21):2476
  11. Adapted from: A graph of Everett Rogers Technology Adoption Lifecycle model. Distributed under the GNU Free Documentation License. Available from: https://commons.wikimedia.org/wiki/File:DiffusionOfInnovation.png
  12. Barrenho E, Miraldo M, Propper C, Walsh B. The importance of surgeons and their peers in adoption and diffusion of innovation: An observational study of laparoscopic colectomy adoption and diffusion in England. Social Science & Medicine. 2021;272:113715. DOI: 10.1016/j.socscimed.2021.113715
  13. Barkun JS, Aronson JK, Feldman LS, Maddern GJ, Strasberg SM, Balliol Collaboration, et al. Evaluation and stages of surgical innovations. Lancet. 2009;374:1089-1096. DOI: 10.1016/S0140-6736(09)61083-7
  14. What Is Artificial Intelligence (AI)? [Internet]. Google Cloud. Available from: https://cloud.google.com/learn/what-is-artificial-intelligence [Accessed: February, 2023]
  15. Chen J, Sun J, Wang G. From unmanned systems to autonomous intelligent systems. Engineering. 2022;12:16-19. DOI: 10.1016/j.eng.2021.10.007
  16. Shevlin H, Vold K, Crosby M, Halina M. The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge. EMBO Reports. 2019;20(10):e49177. DOI: 10.15252/embr.201949177
  17. Saghiri AM, Vahidipour SM, Jabbarpour MR, Sookhak M, Forestiero A. A survey of artificial intelligence challenges: Analyzing the definitions, relationships, and evolutions. Applied Sciences. 2022;12(8):4054. DOI: 10.3390/app12084054
  18. Buttazzo G. Artificial consciousness: Hazardous questions (and answers). Artificial Intelligence in Medicine. 2008;44(2):139-146. DOI: 10.1016/j.artmed.2008.07.004
  19. Mitchell TM. Machine Learning. New York: McGraw-Hill; 1997. p. 414
  20. What are neural networks? [Internet]. IBM. Available from: https://www.ibm.com/topics/neural-networks [Accessed: February, 2023]
  21. Deng L, Yu D. Deep learning: Methods and applications. Foundations and Trends in Signal Processing. 2014;7(3-4):197-387. DOI: 10.1561/2000000039
  22. What is Natural Language Processing? [Internet]. IBM. Available from: https://www.ibm.com/topics/natural-language-processing [Accessed: February, 2023]
  23. Huang T. Computer vision: Evolution and promise. 19th CERN School of Computing. 1996:21-25. DOI: 10.5170/CERN-1996-008.21
  24. Jackson P. Introduction to Expert Systems. Harlow, England: Addison-Wesley; 1999
  25. How Netflix's Recommendations System Works [Internet]. Netflix Help Center. Available from: https://help.netflix.com/en/node/100639 [Accessed: February, 2023]
  26. Find the right treatment for every patient [Internet]. Owkin. Available from: https://owkin.com/ [Accessed: February, 2023]
  27. AlphaZero: Shedding new light on chess, shogi, and Go [Internet]. DeepMind. Available from: https://www.deepmind.com/blog/alphazero-shedding-new-light-on-chess-shogi-and-go [Accessed: February, 2023]
  28. PhysIQ [Internet]. PhysIQ. Available from: https://www.physiq.com/ [Accessed: February, 2023]
  29. Autopilot [Internet]. Tesla. Available from: https://www.tesla.com/autopilot [Accessed: February, 2023]
  30. Healthcare AI [Internet]. Aidoc. Available from: https://www.aidoc.com/ [Accessed: February, 2023]
  31. Conversational AI/Natural-language processing [Internet]. Amazon Science. Available from: https://www.amazon.science/research-areas/conversational-ai-natural-language-processing [Accessed: February, 2023]
  32. Sonabend A, Cai W, Ahuja Y, Ananthakrishnan A, Xia Z, Yu S, et al. Automated ICD coding via unsupervised knowledge integration (UNITE). International Journal of Medical Informatics. 2020;139:104135. DOI: 10.1016/j.ijmedinf.2020.104135
  33. Google Lens [Internet]. Google. 2022. Available from: https://lens.google/howlensworks/ [Accessed: February, 2023]
  34. Klein S, Gildenblat J, Ihle MA, Merkelbach-Bruse S, Noh KW, Peifer M, et al. Deep learning for sensitive detection of Helicobacter pylori in gastric biopsies. BMC Gastroenterology. 2020;20(1):417. DOI: 10.1186/s12876-020-01494-7
  35. The AiTax Corporation [Internet]. AiTax. Available from: https://www.aitax.com/ [Accessed: February, 2023]
  36. Clinical Decision Support [Internet]. IBM Watson. Available from: https://www.ibm.com/watson-health/solutions/clinical-decision-support [Accessed: February, 2023]
  37. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices [Internet]. U.S. Food and Drug Administration. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices [Accessed: February, 2023]
  38. US Food and Drug Administration. Computer-assisted detection devices applied to radiology images and radiology device data—Premarket notification [510(k)] submissions. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/computer-assisted-detection-devices-applied-radiology-images-and-radiology-device-data-premarket
  39. Watanabe AT, Lim V, Vu HX, Chim R, Weise E, Liu J, et al. Improved cancer detection using artificial intelligence: A retrospective evaluation of missed cancers on mammography. Journal of Digital Imaging. 2019;32:625-637. DOI: 10.1007/s10278-019-00192-5
  40. Hassan C, Balsamo G, Lorenzetti R, Zullo A, Antonelli G. Artificial intelligence allows leaving-in-situ colorectal polyps. Clinical Gastroenterology and Hepatology. 2022;20(11):2505-2513. DOI: 10.1016/j.cgh.2022.04.045
  41. Golan D, Shalitin O, Sudry N, Mates J. AI-powered stroke triage system performance in the wild. Journal of Experimental Stroke & Translational Medicine. 2020;12:3
  42. Torrente M, Sousa PA, Hernández R, Blanco M, Calvo V, Collazo A, et al. An artificial intelligence-based tool for data analysis and prognosis in cancer patients: Results from the Clarify study. Cancers. 2022;14(16):4041. DOI: 10.3390/cancers14164041
  43. Richens JG, Lee CM, Johri S. Improving the accuracy of medical diagnosis with causal machine learning. Nature Communications. 2020;11(1):3923. DOI: 10.1038/s41467-020-17419-7
  44. Chorin E, Hochstadt A, Schwartz AL, Matz G, Viskin S, Rosso R. Continuous heart rate monitoring for automatic detection of life-threatening arrhythmias with novel bio-sensing technology. Frontiers in Cardiovascular Medicine. 2021;8:707621. DOI: 10.3389/fcvm.2021.707621
  45. Hornbrook MC, Goshen R, Choman E, O'Keeffe-Rosetti M, Kinar Y, Liles EG, et al. Early colorectal cancer detected by machine learning model using gender, age, and complete blood count data. Digestive Diseases and Sciences. 2017;62:2719-2727. DOI: 10.1007/s10620-017-4722-8
  46. Weissler EH, Naumann T, Andersson T, Ranganath R, Elemento O, Luo Y, et al. The role of machine learning in clinical research: Transforming the future of evidence generation. Trials. 2021;22(1):1-5. DOI: 10.1186/s13063-021-05571-4
  47. Loftus TJ, Tighe PJ, Filiberto AC, Efron PA, Brakenridge SC, Mohr AM, et al. Artificial intelligence and surgical decision-making. JAMA Surgery. 2020;155(2). DOI: 10.1001/jamasurg.2019.4917
  48. Kite-Powell J. Artificial Intelligence Set To Dominate Operating Rooms By 2024 [Internet]. Forbes. 2020. Available from: https://www.forbes.com/sites/jenniferhicks/2020/06/25/artificial-intelligence-set-to-dominate-operating-rooms-by-2024/ [Accessed: February, 2023]
  49. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minimally Invasive Therapy & Allied Technologies. 2019;28(2):73-81. DOI: 10.1080/13645706.2019.1575882
  50. Padoy N. Machine and deep learning for workflow recognition during surgery. Minimally Invasive Therapy & Allied Technologies. 2019;28(2):82-90. DOI: 10.1080/13645706.2019.1584116
  51. Mascagni P, Alapatt D, Sestini L, Altieri MS, Madani A, Watanabe Y, et al. Computer vision in surgery: From potential to clinical value. npj Digital Medicine. 2022;5(1):163. DOI: 10.1038/s41746-022-00707-5
  52. Gilbert G. The paradox of managing autonomy and control: An exploratory study. South African Journal of Business Management. 2013;44(1):1-4. DOI: 10.4102/sajbm.v44i1.144
  53. Weerakkody RA, Cheshire NJ, Riga C, Lear R, Hamady MS, Moorthy K, et al. Surgical technology and operating-room safety failures: A systematic review of quantitative studies. BMJ Quality & Safety. 2013;22(9):710-718. DOI: 10.1136/bmjqs-2012-001778
  54. LigaSure technology [Internet]. Medtronic. Available from: https://www.medtronic.com/covidien/en-us/products/vessel-sealing/ligasure-technology.html [Accessed: February 20, 2023]
  55. Signia stapling system [Internet]. Medtronic. Available from: https://www.medtronic.com/covidien/en-us/products/surgical-stapling/signia-stapling-system.html [Accessed: February 20, 2023]
  56. Cummings ML, Stimpson A, Clamann M. Functional requirements for onboard intelligent automation in single pilot operations. AIAA Infotech@Aerospace; 2016. p. 1652. DOI: 10.2514/6.2016-1652
  57. Barry D. How long do pilots really spend on autopilot? [Internet]. Cranfield. Available from: https://saiblog.cranfield.ac.uk/blog/how-long-do-pilots-really-spend-on-autopilot [Accessed: February, 2023]
  58. UITP. World Report on Metro Automation 2018. Brussels, Belgium: Observatory of Automated Metros; 2020
  59. The world's leading intelligent safety platform [Internet]. RapidSOS. Available from: https://rapidsos.com/ [Accessed: February, 2023]
  60. Predict prevent crime [Internet]. PredPol. 2018. Available from: https://www.predpol.com/ [Accessed: February, 2023]
  61. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International. Available from: https://www.sae.org/standards/content/j3016_202104/
  62. Anderson E, Fannin T, Nelson B. Levels of aviation autonomy. In: 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC). London, England: IEEE; 2018. pp. 1-8. DOI: 10.1109/DASC.2018.8569280
  63. IEC - International Electrotechnical Commission. Railway applications - Urban guided transport management and command/control systems. Available from: https://webstore.iec.ch/publication/6777
  64. Yang GZ, Cambias J, Cleary K, Daimler E, Drake J, Dupont PE, et al. Medical robotics—Regulatory, ethical, and legal considerations for increasing levels of autonomy. Science Robotics. 2017;2(4):eaam8638. DOI: 10.1126/scirobotics.aam8638
  65. Battaglia E, Boehm J, Zheng Y, Jamieson AR, Gahan J, Fey AM. Rethinking autonomous surgery: Focusing on enhancement over autonomy. European Urology Focus. 2021;7(4):696-705. DOI: 10.1016/j.euf.2021.06.009
  66. Attanasio A, Scaglioni B, De Momi E, Fiorini P, Valdastri P. Autonomy in surgical robotics. Annual Review of Control, Robotics, and Autonomous Systems. 2021;4:651-679. DOI: 10.1146/annurev-control-062420-090543
  67. Langlotz CP. Will artificial intelligence replace radiologists? Radiology: Artificial Intelligence. 2019;1(3):e190058. DOI: 10.1148/ryai.2019190058
