Open access

Augmented Reality for Minimally Invasive Surgery: Overview and Some Recent Advances

Written By

Pablo Lamata, Wajid Ali, Alicia Cano, Jordi Cornella, Jerome Declerck, Ole J. Elle, Adinda Freudenthal, Hugo Furtado, Denis Kalkofen, Edvard Naerum, Eigil Samset, Patricia Sánchez-Gonzalez, Francisco M. Sánchez-Margallo, Dieter Schmalstieg, Mauro Sette, Thomas Stüdeli, Jos Vander Sloten and Enrique J. Gómez

Published: 01 January 2010

DOI: 10.5772/7128

From the Edited Volume

Augmented Reality

Edited by Soha Maad


1. Introduction

The advent of the minimally invasive surgery (MIS) revolution has changed the way surgery is practiced. Technological advances in optics, instrumentation, materials, robotics, computer systems, etc., are bringing new means and possibilities to the Operating Room. These advances confer considerable advantages on the patient, but they also impose an additional difficulty on the physician, who needs to develop new skills and dexterity in order to adapt to a limited workspace. This chapter focuses on one line of activity that facilitates minimally invasive treatment: the development and application of augmented reality (AR) technologies for guidance and navigation during surgical procedures.

The first section describes AR technologies and concepts from a surgical application perspective, and reviews current systems and prototypes. Tracking technologies and systems, image to physical registration methods, visualization strategies and clinical user interfaces are developed and assembled in computer-assisted navigation systems. These solutions are clinically applied in different surgical disciplines, like neurosurgery, interventional radiology and orthopaedics.

The second section presents the use of robotic devices to enhance the surgeons’ capabilities in terms of dexterity and accuracy. The development of haptic feedback in tele-robotics, semi-autonomous robots and robotic systems cross-linked with image data are presented and results discussed.

The third section presents the process and methodology to develop AR systems for surgical applications. It highlights the importance of the multidisciplinary approach to this field of research, which requires the engagement of physicians, engineers and ergonomists. The experience of the ARIS*ER European consortium (the European Marie Curie Research Training Network for Augmented Reality in Minimally Invasive Surgery, www.ariser.info) is presented, providing some valuable lessons learned about workflow centred design and the importance of field studies. The study of human factors, sensorial and cognitive capabilities, is also briefly addressed.

The last section of the chapter describes four recent advances in the field: (1) a new video-based method for tracking surgical tools, which eliminates the extra burden introduced by existing solutions at the cost of some accuracy loss; (2) an advanced visualization strategy, by means of a novel collision detection feature, for enhancing the safety and accuracy of radiofrequency ablation of tumours; (3) the development of the Endoclamp positioning system for minimally invasive cardiac procedures, designed to increase the safety of the procedure by providing real-time visualization and control of catheters while also reducing the need for radiation exposure; and (4) the clinical adoption of the Resection Map, a navigation system for liver procedures, which effectively improves the orientation of the surgeon.

2. AR in surgery: technologies, concepts and current systems

Augmented reality (AR) refers to a perception of a physical real-world environment whose elements are merged with (or augmented by) virtual computer-generated stimuli (visual or haptic), creating a mixed reality. While Paul Milgram and his colleagues (Milgram & Kishino, 1994) characterize AR as being a fusion of real and virtual data within a real world environment, Azuma and his colleagues base their definition of AR on the attributes of the AR application (Azuma et al., 2001). In addition to a mixture of real and virtual information, an AR application has to run in real time and its virtual objects have to be aligned (registered) with real world structures. Both of these requirements guarantee that the dynamics of real world environments remain after virtual data has been added. In order to both register the data and fuse virtual and real imagery in real time, special devices implementing a variety of techniques are used by today's AR systems.

2.1. Visual AR display technology

To combine visual information in real time, one of the following three techniques is used in AR display devices. Optical See Through devices project the current rendering of the virtual data onto a semi-transparent mirror (Fig. 1a). This special mirror allows the user to perceive the real world through it while at the same time it passes on the virtual content to the eyes of its user. In contrast, Video See Through devices capture the real world information with a video camera (Fig. 1b). Before the final result is presented to the user, the captured video is blended with the rendering of the virtual content by the device. Direct Augmentations use projectors and the surfaces of the environment to present the virtual information directly in the 3D real world environment (Raskar et al., 2001).
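
The compositing step of a Video See Through device reduces, at its core, to per-pixel blending of the camera frame with the rendered virtual content. The following is a minimal sketch of this idea in Python/NumPy, assuming the virtual rendering is delivered as an RGBA image whose alpha channel marks the augmented pixels; all names and values are illustrative only.

```python
import numpy as np

def composite_video_see_through(video_frame, virtual_rgba):
    """Blend a rendered virtual overlay onto a captured video frame.

    video_frame : (H, W, 3) uint8 array from the camera/endoscope.
    virtual_rgba: (H, W, 4) uint8 rendering of the virtual model, where the
                  alpha channel marks augmented pixels and their opacity.
    Returns the augmented (H, W, 3) uint8 frame shown to the user.
    """
    alpha = virtual_rgba[..., 3:4].astype(np.float32) / 255.0   # per-pixel opacity
    virtual_rgb = virtual_rgba[..., :3].astype(np.float32)
    real_rgb = video_frame.astype(np.float32)
    # Standard "over" compositing: virtual content rendered in front of the video image.
    blended = alpha * virtual_rgb + (1.0 - alpha) * real_rgb
    return blended.astype(np.uint8)

# Synthetic example: a 480x640 camera frame with a semi-transparent green patch.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
overlay = np.zeros((480, 640, 4), dtype=np.uint8)
overlay[100:200, 100:200] = (0, 255, 0, 128)
augmented = composite_video_see_through(frame, overlay)
```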

Display devices can also be distinguished by where they are installed in 3D space. Displays worn on the head (Head Mounted Displays or HMDs) usually integrate video or optical see through technology into a helmet (Cakmakci & Rolland, 2006). Small carried displays (or Handheld Displays) are equipped with a video camera and a 2D screen supporting video see through systems. Nevertheless, both head mounted and handheld displays depend on additional hardware, which is usually either heavy or uncomfortable to wear or, in the case of handheld displays, suffers from a small physical screen size.

MIS technologies in many cases already include the use of miniaturised cameras and tools, as in laparoscopy or arthroscopy. This potentially makes the Video See Through approach the most suitable one for these applications. Nevertheless, percutaneous procedures, such as radiofrequency ablation of liver tumours, can benefit from the other AR visual approaches.

Figure 1.

Techniques for combining visual information (a) Video See Through (b) Optical See Through.

2.2. Registration methods for the alignment of real and virtual realities

The second main requirement for an AR application is a three dimensional registration of its virtual content relative to the real world objects. Therefore, AR systems estimate the position of either the virtual objects relative to the video camera or of both the video camera and the virtual objects relative to a common coordinate system (e.g. to also allow registered augmentations using optical see through devices). Virtual models are generated by image segmentation of medical image studies of the patient (Massoptier & Casciaro, 2008).

Up to now a number of different technologies have been developed to support the registration of 3D objects, ranging from analyses of the real world environment to evaluations of intentionally introduced fiducials. For example, computer vision algorithms are able to compute the position of a video camera directly from the images it delivers to the system (Davison et al., 2007). To be able to compute a 3D position, the algorithms have to select features (landmarks, shades, silhouettes…) directly out of the images and analyze them. Since feature detection and its subsequent processing is a computationally expensive and often noisy operation, other technologies can help by adding artificial landmarks into the 3D environment (Teber et al., 2009b). Those aids (called fiducials) add a specific stimulus to the environment which is easier and faster to detect and evaluate. A number of different types of stimuli have already been used as sources for fiducial tracking. For example, retro-reflective spheres allow for quick and precise visual identification (see Figure 2). Even audio sources (Doussis, 1993) or magnetic fields (natural as well as synthetically induced) have been used. The downside of adding fiducials is the artificial modification of the environment, the preparation required and the need for specialized receivers. However, since the operating room allows the environment to be prepared, most medical AR applications make use of the advantages of intentionally introduced tracking targets.
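
Once corresponding fiducial positions have been measured in both spaces (e.g. in the pre-operative image and with the tracking system), the rigid transform aligning them can be estimated in closed form. The following is a minimal sketch of such point-based rigid registration using the standard SVD (Kabsch/Procrustes) solution; variable names are illustrative.

```python
import numpy as np

def rigid_register(fiducials_image, fiducials_tracker):
    """Least-squares rotation R and translation t mapping image-space fiducials
    onto the corresponding tracker-space fiducials (SVD / Kabsch method).

    Both inputs are (N, 3) arrays of corresponding fiducial coordinates, N >= 3.
    """
    src = np.asarray(fiducials_image, dtype=float)
    dst = np.asarray(fiducials_tracker, dtype=float)
    src_c = src - src.mean(axis=0)              # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)   # SVD of the cross-covariance matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Usage: map a point from the pre-operative image into tracker coordinates.
# R, t = rigid_register(fiducials_in_ct, fiducials_digitised_in_or)
# p_tracker = R @ p_ct + t
```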

Registration is currently one of the main challenges to solve in soft tissue surgery, such as laparoscopy, where organs and tissues are deformed, cut, dissected, etc. Automatic tracking and compensation of these changes are required for a stable AR overlay. The greater the differences between the virtual reconstructed models and the real organs of the patient in the OR, the more difficult this challenge is.

2.3. AR systems for MIS

AR and Virtual Reality (VR) applications are capable of not only supporting an intervention itself, but also its preparation and a number of follow up procedures. However, AR is a rather complex aggregate technology, and research on AR-guided treatment is typically targeted at a single phase or aspect in a surgical procedure. Thus, embedding AR into a clinical workflow requires careful design of a software architecture that provides consistent services for image processing, visualization, tracking etc. throughout a variety of situations and requirements.

The challenge here is to provide useful, high-performance components that are also sufficiently flexible to allow re-use across different applications. Componentization also allows more careful testing of components in isolation (unit tests), and makes approval of software components for clinical evaluation more straightforward.

Figure 2.

Tracking using retro-reflective markers. This data is used to register the virtual counterparts of the liver with two vessel trees and a tumour. This AR application supports the insertion of a needle to ablate the tumour (see section 5.2)

An AR system for supporting medical applications has generally three main components:

  • Consistent data models. The requirements of surgical planning, navigation and simulation go beyond the simple display of volumetric data acquired in previous image scans. Anatomical and pathological structures must be explicitly modeled and manipulated. Physicians demand predictable and reproducible results, so all these representations must be kept in a consistent state throughout the medical workflow while permitting arbitrary changes on the data.

  • Real-time data acquisition. In contrast to modalities such as CT or MR, which are normally not acquired in real time, AR applications require the management of streaming input data such as tracking, US or video data. The handling of such data requires real-time algorithms and also careful synchronization, in particular between simultaneously acquired data from heterogeneous sources (a minimal synchronization sketch is given after this list).

  • Visualization. Compared to conventional screen-based navigation, AR has elevated requirements for visualization. Techniques must be independent of the viewing mode and display type, and must be able to simultaneously display all kinds of data models in real time.
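
Synchronizing heterogeneous streams usually comes down to matching each incoming sample (e.g. a video frame) with the sample from another stream (e.g. a tracked pose) that is closest in time. The sketch below shows one simple way to do this, nearest-timestamp matching; the stream names are illustrative, and real systems additionally compensate for known acquisition latencies.

```python
import bisect

def nearest_sample(timestamps, samples, query_time):
    """Return the sample whose timestamp is closest to query_time.

    timestamps : sorted list of acquisition times (seconds) of one stream.
    samples    : parallel list of data items (poses, US frames, ...).
    """
    i = bisect.bisect_left(timestamps, query_time)
    if i == 0:
        return samples[0]
    if i == len(timestamps):
        return samples[-1]
    before, after = timestamps[i - 1], timestamps[i]
    return samples[i] if (after - query_time) < (query_time - before) else samples[i - 1]

# For every incoming video frame, pick the tracking pose acquired closest in time,
# so that the overlay is rendered with a temporally consistent camera/tool pose.
# pose_for_frame = nearest_sample(pose_times, poses, frame_time)
```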

2.4. Application areas

One of the main strengths of AR in medical applications is its ability to overcome difficulties related to hand-eye coordination (Johansson et al., 2001). For example, AR displays are able to present, by means of registration of virtual objects within real world environments, the information exactly where the hands have to act. Figure 3 shows examples of this concept. Another possibility is to bring the support into the 'classical' 2D screen, as done in the support of needle ablation of tumours (see section 5.2).

This section reviews the current main areas of AR applications in surgery. It presents the characteristics of these applications, and it discusses the data used in the main components of the AR system.

Figure 3.

Overcoming the problem of hand-eye coordination using Augmented Reality (a) Typical examination using sonography. (b) The AR display enables the user to see the data at the same location where his/her hands operate.

Neurosurgery and orthopedics

AR and computer-assisted surgery have already found a place in open and minimally invasive procedures (Teber et al., 2009a). Nowadays, these systems are used in neurosurgical, craniomaxillofacial and orthopedic interventions. The use of navigation technologies has brought substantial improvements regarding safety, aesthetic and functional aspects in a range of surgical procedures, such as dental implantology, arthroscopy of the temporomandibular joint, osteotomies, distraction osteogenesis, image guided biopsies and removals of foreign bodies (Ewers et al., 2005). Applications in this category can use a virtual counterpart of hidden real structures that was generated before the intervention started, and can thus make use of high-quality 3D models.

Note that the use of a 3D model generated from pre-operatively acquired volumetric scans (e.g. CT data) is only possible because no deformation modifies the structure between the scan of the patient and the surgical procedure where the data is visualized. Consequently, if rigid structures are in the focus of attention, the acquired data almost perfectly represents the intra-operative scenario. This is the reason for the extensive use of AR systems in those applications: the surgical field is composed of, or framed by, rigid structures, which implies an easier alignment between the virtual reconstruction of the patient and the operating field (the registration step).

BrainLab (Feldkirchen, Germany) and Medtronic Navigation (Louisville, USA) demonstrate that applications in this area have even moved from research centres to industrial products: both companies develop commercial surgical navigators for rigid structures.

Soft tissue surgery

In soft-tissue surgery additional technical challenges exist. Current research focuses on intra-operative deformations, shifting, and topological changes done to the organs (Baumhauer et al., 2008). Due to those additional difficulties, only some research prototypes are able to offer a real-time navigation environment in an immobilized anatomy, displaying the tools' location on preoperative or intra-operative 3D images (Beller et al., 2007; Cash et al., 2007). To be able to present virtual counterparts of real world organs, the virtual model has to adapt to the real deformations. Consequently, besides the three main components of a medical AR system, the data model used to generate the AR imagery has to be updated according to the current deformation of the organs.

Currently, most soft-tissue navigation systems are focused on liver surgery, due to its clinical relevance. Indeed, liver cancer is one of the most important causes of death (El-Serag & Rudolph, 2007). As discussed before, there is a need to improve the prognosis through better control of resection margins and risk areas (important vessels) during the intervention. The guidance of the needle in tumour ablation procedures, which can also be performed under a laparoscopic approach, is a more constrained problem that has been widely addressed using AR technology (Maier-Hein et al., 2008). These navigation environments can increase the accuracy of the needle placement (Stüdeli et al., 2008). For a more detailed review on the field of navigation in endoscopic soft tissue surgery the reader is referred to (Baumhauer et al., 2008).

Catheterised interventional procedures

Interventional procedures are minimally invasive procedures in which the medical doctor, typically a radiologist, performs a procedure by means of a small catheter introduced into the blood vessels. Most of the interventional procedures aim at the treatment of aneurysms, stenoses and radio-frequency ablations in different anatomical regions.

In most of the cases the procedure is visualized by means of angiographic imaging, allowing visualization of both the catheter and the anatomical structures. Typically this is based on X-ray fluoroscopy, computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound (US) imaging, and often uses a contrast agent. However, these classical imaging techniques have the disadvantage of being either bulky, based on ionizing radiation or of poor resolution.

AR represents a valid alternative to those visualization procedures. Several commercial systems implementing these technologies are currently available; examples of commercial systems for model generation are EnSite NavX (St. Jude), Carto (Biosense Webster) and LocaLisa (Medtronic). These systems are based on the combination of three different components: a tracking device able to give information on the position and orientation of a tracked object, a geometrical model with which the tracking data is integrated, and a visualization unit that presents the result.

Different solutions exist on the market for tracking in interventional environments, using acoustic, inertial, electrical or electromagnetic technology. Moreover, no interventional catheter procedure takes place before the patient undergoes a pre-procedural scan by means of MRI or CT. To reduce the mental workload, the obtained patient data can be pre-processed by a computer program and a set of geometrical models can be computed automatically. With adequate visualization technology, an intuitive representation of the catheter location within its environment is provided. Fluoroscopy, which provides real-time visualization, has been the gold standard for years, although its application is restricted because radiation exposure has to be minimized.

3. Robotic devices to enhance the surgeons’ capabilities

Compared to open access surgery, MIS imposes a set of constraints that strongly limit the information provided by the natural human senses of sight and touch. In MIS, the organs are accessed by long instruments inserted through small incisions in the patient's skin (ports). As a result, small tremors are amplified and the forces exerted on the tissue can be only partially transmitted to the tool's handle (Picod et al., 2005). Ports, which act as fixed pivot points, limit the range of motion of instruments and introduce the fulcrum effect in the manipulation of tools. This makes the hand–eye coordination skill difficult to learn.

During the last decade, robots have been appearing in operating rooms to overcome some of the current problems in MIS, nowadays covering a wide range of surgical specialties (neurosurgery, orthopaedic surgery, cardiac surgery and urology, among others (Diodato et al., 2004)). Due to the disparate characteristics of surgical operations, robots have been used in different modes, ranging from teleoperation to true autonomous robots. Image-guided surgical robots are the type that presents the highest degree of autonomy, although this autonomy is restricted to specific tasks within specific procedures. This kind of robot has been mainly applied in neurosurgery (Finlay & Morgan, 2003; Karas & Chiocca, 2007) and orthopaedic surgery (Kazanzides et al., 1995; Kwon et al., 2001), since bones and the skull are relatively easy to image and their rigidity allows an easy registration between preoperative and intra-operative images (as already discussed in the previous section). In abdominal procedures, autonomous robots have been used in applications such as automatic positioning of the laparoscopic camera (Krupa et al., 2003) or percutaneous procedures using visual servoing techniques (Loser & Navab, 2000).

Since several decisions have to be taken during the course of the operation, a more conservative approach is the development of telesurgical systems, where the motions of the surgeon at a master console are reproduced by the slave robot. In this way, the surgeons can make use of the benefits that a robot offers while preserving the human capability to react to any unexpected event. These characteristics make telesurgical systems very attractive, and two commercial systems with FDA approval have been developed to date: Zeus (Computer Motion Inc., Goleta, CA, USA) and Da Vinci (Intuitive Surgical, Mountain View, CA, USA).

A trade-off between image-guided surgical and telesurgical robots is represented by synergetic robots, also called hands-on robots, where the robot is driven by the cooperation of an automatic controller and the surgeon. Synergetic robots can be found in a wide range of medical applications such as pericardial puncture (Schneider & Troccaz, 2001), knee arthroplasty (Ho et al., 1995), retinal surgery (Iordachita et al., 2006) or positioning of pedicle screws (Ortmaier et al., 2006).

This section presents the use of augmented reality and robotic devices to enhance the surgeon's capabilities in terms of dexterity and accuracy. Haptic feedback in tele-robotics increases the surgeon's capability of perceiving tissue properties and of performing more intricate surgical tasks; the use of virtual models cross-linked with image data allows a better visualization of the operating area and the development of high-accuracy image guided interventions; finally, haptic feedback and augmented reality can be merged, obtaining haptic guidance tools where virtual forces are generated in order to guide the movements of the surgeon according to a preoperative plan.

3.1. Haptic feedback in telesurgical systems

The word haptic, from Greek ἁπτικός (haptikos), means pertaining to the sense of touch and comes from the verb ἅπτεσθαι (haptesthai) meaning “to contact” or “to touch”. There are two different kinds of haptic feedback (Rosenbaum, 1990):

  1. Kinesthetic feedback: This is generated mainly by the proprioceptive receptors and describes the force exerted by a part of the body on the environment; in engineering terms it can be identified as the exerted force and torque.

  2. Tactile feedback: This is represented mainly by the exteroceptive information and describes the pressure distribution between the body and the environment; in this case, tactile feedback can be described with a pressure map.

There are many surgical tasks that require the surgeon to use the hands to acquire haptic (both kinesthetic and tactile) information from the patient's body. One of the most important tasks is palpation, in which the surgeon explores organs and anatomical surfaces looking for differences in tissue stiffness, since harder tissues are often diseased; as an example, it has been demonstrated that palpation is the best technique to identify, during a surgical procedure, the location of liver metastases in colorectal cancer. The palpation of blood vessels and nerve paths is also an important task in order to locate them and avoid accidental resections. Haptic feedback is also important when a resection is carried out, in order to sense the resistance between the blade and the tissue, or when the surgeon pulls the tissue with a forceps, closes a blood vessel with a clamp, or tightens a suture knot. In all these cases, without such feedback it is very likely that the applied force exceeds the breaking stress, leading to harmful consequences.

When a minimally invasive procedure is performed with a telesurgical system, the surgeon is physically decoupled from the patient and the haptic feedback can be restored only with artificial devices. In this case, it is necessary to acquire the information on the patient's side, elaborate the data stream with a processing unit and present the information to the surgeon through a master console. This represents a technological challenge that includes the development of appropriate master consoles and sensing devices, as well as the control schemes of the overall system. Although research has been very active in this field for a long time, a suitable solution has not yet been found.

There exist several commercial master and slave devices for kinesthetic feedback. Whereas in most applications the mounting of force sensors on the robots is easy, it represents a challenge in teleoperated MIS, since the reliable measurement of the interaction forces requires placing the sensor close to where they are produced, i.e. at the tip of the instrument. A robotic instrument for MIS typically has a diameter of less than 10 mm, and today there are no commercially available force sensors of comparable size. Also, sterilizability and disposability are difficult to achieve with force sensors, for technical and cost reasons respectively.

Hence, an interest in force estimation emerges as a potential substitute technique. The estimation of interaction forces between a robot and its environment must necessarily be based on information from other types of sensors. The most common approach is to estimate interaction forces from the knowledge of the dynamics of the robot and the measurement/estimation of position, velocity and acceleration (Hacksel & Salcudean, 1994; Smith et al., 2006; Naerum et al., 2008). The success of this approach depends on our ability to accurately identify and compute the dynamic parameters of the robot. A different approach is to consider the behaviour of the robot’s environment instead of the robot during interaction. In this case, the knowledge of the environment’s dynamics is required, together with a sensor system to measure displacement. For example, a vision system can measure the deformation of the environment as the robot applies a force to it, and the interaction force is computed with the help of a known force-displacement relationship (Kennedy et al., 2002; Gaponov et al., 2008). Again, force estimation performance relies on the accuracy of the dynamic model of the environment.
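
As an illustration of the second, environment-based approach, the sketch below estimates the interaction force from a vision-measured tissue deformation under an assumed linear (spring-like) force-displacement relationship; the stiffness value and names are purely illustrative, since real tissue behaviour is nonlinear and viscoelastic.

```python
def estimate_force_from_deformation(rest_position_mm, deformed_position_mm,
                                    stiffness_n_per_mm=0.2):
    """Estimate an interaction force from observed tissue deformation.

    A linear model F = k * d is assumed, where d is the displacement of a
    tracked surface point (e.g. measured by a vision system) and k is an
    identified tissue stiffness. This is only a first-order approximation.
    """
    displacement = deformed_position_mm - rest_position_mm   # mm
    return stiffness_n_per_mm * displacement                 # N

# Example: the vision system sees a surface point pushed 4 mm inwards,
# giving an estimated 0.8 N with the assumed stiffness.
force = estimate_force_from_deformation(0.0, 4.0)
```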

The technological bottlenecks for tactile feedback are different: research in this field is still at an early stage, trying to build the fundamental hardware, i.e. tactile sensors and displays with performance suitable for medical applications (Peeters et al., 2008). Particularly interesting are the solutions developed for tactile sensing; among others, there is the example of a sensor based on piezo-resistive rubber (Figure 4.a) (Goethals et al., 2008). The novel idea of a tactile sensor based on extrapolating tactile data from intraoperative ultrasound images, in an early stage of prototype development as shown in Figure 4.b, also has interesting potential (Sette et al., 2007).

Figure 4.

Tactile feedback technology. (a) Elastoresistive tactile sensor (Goethals et al., 2008). (b) Ultrasound probe for tactile feedback (Sette et al., 2007)

3.2. Image-guided surgical robots

The advances in medical imaging technology allow the visualization of anatomic structures with a high degree of accuracy. By means of computer image processing and modelling, the location of the pathologies and essential structures can be revealed and presented to the clinician in a suitable form. This preoperative information can be fully exploited by guiding the movements of a robot in order to augment the overall precision and accuracy of the surgical intervention (orthopaedic and neurosurgery being the main surgical areas of application).

As shown in the previous section, the use of AR can provide clinicians with interactive 3D visualizations in all phases of the treatment. Besides the visualization tools, AR can be used together with robotic systems in order to develop path-planning algorithms that automatically calculate the optimal surgical plan (Kazanzides et al., 1995).

Once the registration of the image data with the position of the patient and robot is properly done, AR can be useful to monitor the movements of the robot inside the body and to detect deviations between the real position and the preoperative plan.

Figure 5.

Augmented reality tools for preoperative planning and correlation with the intraoperative robot.

Figure 5 shows an example of using AR to increase the accuracy of a Zeus telesurgical robot when it executes tasks autonomously (Cornellà et al., 2008). A virtual liver model was used as preoperative information and then correlated with the intra-operative robot. In this case, the errors in the registration, the noise in the signals provided by the tracking system and the inaccuracies in the kinematic chain of the robot were corrected by means of an adaptive control algorithm based on a Kalman filter, obtaining a final accuracy much better than that of the robot alone. This demonstrates that AR can provide real benefits to image guided surgical robots.
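
The following is a minimal, one-dimensional sketch of the kind of Kalman filtering that can fuse noisy tracking measurements into a slowly varying estimate of the positioning error, which is then fed back as a correction to the robot command. It only illustrates the general principle, not the actual algorithm of Cornellà et al.; the noise values and names are assumptions.

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter for one coordinate of the tool-tip error."""

    def __init__(self, process_var=1e-4, meas_var=1.0):
        self.x = 0.0            # estimated positioning error (mm)
        self.p = 1.0            # variance of that estimate
        self.q = process_var    # how fast the true error may drift
        self.r = meas_var       # tracking-system measurement noise

    def update(self, measured_error):
        self.p += self.q                         # predict step (error assumed ~constant)
        k = self.p / (self.p + self.r)           # Kalman gain
        self.x += k * (measured_error - self.x)  # correct with the new measurement
        self.p *= (1.0 - k)
        return self.x

# Each control cycle: measured_error = tracked tool position - position expected from
# the plan; the filtered value is added to the robot command as a correction.
kf = ScalarKalman()
correction = kf.update(measured_error=1.3)   # mm, illustrative value
```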

3.3. Augmented reality for haptic guidance generation

Teleoperated systems with haptic feedback allow the user to feel the contact forces between the slave manipulator and the remote environment. Additionally, the user may feel virtual forces generated from a virtual model with the objective of guiding his or her movements and helping to complete the task successfully. This approach, which is known as haptic guidance or virtual fixtures, is more conservative than a true autonomous robot, since in this case the user retains control of the robot, but it is a step further than plain haptic feedback, since the user's movements are guided according to a preoperative plan. In this case, virtual environments and augmented reality tools can be used to detect the deviations between the real position and the preoperative plan and to generate the guiding forces for the user. Figure 6 shows a conceptual diagram of a teleoperated system with haptic guidance and the relations between its elements.

Figure 6.

A basic teleoperated system is made up of the master haptic device and the slave robot. In this case, the slave robot follows the movements of the surgeon, who feels the interaction forces from the remote site through the haptic device. A haptic guidance system introduces new elements: first, a preoperative planning based on medical images and AR in order to guide the movements of the surgeon; second, a tracking system to monitor the position of the robot; finally, force generation algorithms based on AR models that compute the appropriate force according to the preoperative plan and the real position of the robot. AR models can also be used to provide additional visual feedback to the surgeon on the movements of the robot inside the body.

Haptic guidance has proven to be a very effective technique for increasing the performance of teleoperated systems. Among other benefits, it increases speed and precision, and reduces operator workload and the effects of time delays (Rosenberg, 1995; Sayers & Paul, 1994). Haptic guidance has also been used in several telesurgical applications, ranging from limiting the movements of the manipulator to restricted regions (Payandeh & Stanisic, 2002) to increasing the accuracy in microsurgery (Bettini et al., 2004).

The constraints that restrict the movements of the user can be defined taking into account different aspects. The first one, and the most obvious, is to consider the Cartesian position of the slave manipulator and the task to be performed (Turro et al., 2001). In this case, the haptic guidance can be divided into three basic forms: if the movements of the slave are constrained in all three degrees of freedom, then the slave manipulator must remain at a fixed point; constraining the movements in two degrees of freedom, the operator can move the slave manipulator along a line; finally, if just one degree of freedom is constrained, then it can move over a surface. In addition to the number of degrees of freedom, the forces can be either attractive or repulsive. Attractive forces drive the operator towards the constraint and can be useful to increase the accuracy of the procedure. Repulsive forces keep the manipulator outside certain zones, which can increase the safety of the operation by defining forbidden regions where the robot is not allowed to move.
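
As an illustration, the sketch below computes an attractive virtual-fixture force that pulls the tool back onto a planned straight trajectory (the two-degree-of-freedom case above); the spring stiffness and all names are illustrative assumptions, and repulsive (forbidden-region) fixtures can be built analogously by pushing the tool away once it crosses a virtual boundary.

```python
import numpy as np

def line_attraction_force(tool_pos, line_point, line_dir, stiffness=50.0):
    """Attractive guidance force pulling the tool towards a planned line.

    tool_pos, line_point : (3,) positions in metres.
    line_dir             : (3,) direction of the planned trajectory.
    stiffness            : virtual spring constant (N/m), illustrative value.
    Returns the (3,) force to be rendered on the master haptic device.
    """
    d = np.asarray(line_dir, dtype=float)
    d /= np.linalg.norm(d)
    offset = np.asarray(tool_pos, dtype=float) - np.asarray(line_point, dtype=float)
    # Component of the offset perpendicular to the line = deviation from the plan.
    deviation = offset - np.dot(offset, d) * d
    return -stiffness * deviation   # spring-like pull back onto the line

# Example: a tool 10 mm off a trajectory along the z axis -> ~0.5 N pull towards it.
force = line_attraction_force(tool_pos=[0.01, 0.0, 0.05],
                              line_point=[0.0, 0.0, 0.0],
                              line_dir=[0.0, 0.0, 1.0])
```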

Instead of considering the Cartesian position, the constraints can be based on other parameters involved in a teleoperated system, such as the relative velocity between master and slave devices, which increases the coordination between master and slave movements (Nuño & Basañez, 2006), or the limits and singularities of the devices, which increase the safety on the slave side, especially when the kinematic configurations of the master and slave devices are very different (Turro et al., 2001).

4. Multidisciplinary user centred design process

The development and design processes of novel computing systems with integrated Augmented and Virtual Reality technology (AR/VR technology) in the medical domain are complex and highly iterative, generally involving multiple disciplines and partners. In this section we share experiences gathered during an exemplary development project over four years: the European research training network “Augmented Reality in Surgery” (ARIS*ER) aimed to develop next generation imaging guidance (augmented reality built from various modalities, such as ultrasound, MRI and video-endoscopy) and cross-linked robotic systems (automatic control loops guided by radiological data of the patient) to improve minimally invasive interventions and surgery. The technologies involved are: image processing, image fusion, interactive 3D visualization and navigation, robotics and haptics. “Through this research, a group of young researchers is being trained to work internationally and multidisciplinary. The team is working across the borders of medical interventions, information and communication technology development, and user interface design.” (www.ariser.info)

The ARIS*ER consortium consisted of over 40 researchers and eight partner institutions. Over 4 years 10 PhD candidates and 6 PostDoc researchers worked on “building blocks” addressing key technological problems as well as the integration of these different modules into different systems (demonstrators and showcases). The aim was to develop, in multidisciplinary teams for each medical case, systems of specific surgical navigation and information support with augmented reality technology.

In this section we will discuss three important aspects of user centred design of surgical support tools, as learned in our ARIS*ER project. (1) These systems are meant to enable better medical procedures and will therefore influence the procedures themselves. In order to get the most desired effect from the technological possibilities, the procedures themselves have to be redesigned as well, in parallel and in relation to the development of their supportive tools (section 4.1). (2) A multidisciplinary co-design approach is advised, involving multiple engineering disciplines, medical experts from different fields, as well as Ergonomics / Human Factors Engineering (section 4.2). (3) To increase the safety and efficiency of MIS approaches, it is important to conduct ergonomic studies during the development process, both to identify requirements for design (e.g. analyses of current tasks) and to evaluate proposed new information and actuation tools and the new related procedures (section 4.3).

4.1. Workflow centred design

One important aspect of the user centred approach within ARIS*ER was the systems approach in the investigations of future user needs and the context analysis. The intervention suite or operating theatre has been treated as a (work) system with technology components, procedures of usage, procedures for handling complications (problem solving), team roles and tasks. Investigations started by studying the actual system with the current users, tasks, context, current technological solutions, current problems and desires for change, as well as the current workflow.

A medical workflow can be described as a logical and effective sequence of tasks that follows surgical routines and rules. Medical procedures traditionally follow detailed protocols that are built on practical and scientific experience. These protocols try to maximize the benefits and minimize the risks. But protocols are more rules than reality and are not binding on details; e.g., in case of complications the surgeon might need to deviate from the protocol or the planned procedure, and surgical tasks might therefore change substantially. Clinical judgement is complex and based on extensive knowledge that is only partially documented. Considerable effort is currently undertaken to investigate surgical workflows, both to better understand these surgical routines and to build the basis for the development of intelligent surgical information systems (Neumuth et al., 2006).

Within ARIS*ER we implicitly used some of these well-known methodologies for investigating clinical workflows, such as surgical process descriptions (Neumuth et al., 2006) and task analysis according to ISO 13407 (Stüdeli et al., 2007). Multiple visualization styles such as storyboards, flow-charts and tables were also used, but our prime communication tools were matrices and storyboards. The Workflow Integration Matrix (WIM) was proposed in ARIS*ER as an effective framework to assist the analysis of user requirements and surgical problem solving processes, as well as the communication between surgeons, technology engineers and designers of a multidisciplinary team (Jalote-Parmar et al., 2007).

Within ARIS*ER the consideration of the workflow was part of the user centred approach. Recorded actual medical workflows have been used to derive specific user needs and offered valuable context information. Additionally visualizations of workflows on tables and storyboards were used to transfer medical know-how to technical engineers (easier than documented verbal medical protocols). Ergonomics/Human Factors (HF) experts were in charge of streamlining the parallel design and development of technology and work procedures. Workflow centred design can have its focus on a re-design of clinical workflow (optimized to the technology) or just on carefully considering existing clinical workflows in the design of the supportive tool.

To conclude: a continuous consideration of workflow aspects during the design process can support the process in many ways. This applies not only to entire technical work systems like the ‘operating room of the future’ (Cleary et al., 2005) but also to supportive tools on a smaller scale. Visualization of medical workflows can offer easy access to the medical field for the engineers and gives a natural insight into the dynamics of the work system. Knowledge about the actual workflow is also knowledge about clinical needs (user requirements). And last but not least, the optimal design solution might lie in the combination of a new technological solution and an adapted or re-designed workflow.

4.2. The ARIS*ER multidisciplinary example

The ARIS*ER development and design process was both user driven and technology driven. The collaboration between the researchers was organized according to the co-design approach, as depicted in Figure 7. Speed in getting a prototype out had priority, following an ‘action research’ approach. In this approach prototypes were compiled from the various technical domains and integrated, based on user guidance, which is known as ‘participatory design’.

Parallel to the development of these sub-systems, Ergonomics/HF specialists investigated the medical procedures of the study cases to specify requirements for the envisioned target systems.

Figure 7.

The co-design approach in ARIS*ER (Augmented Reality in Surgery - EU research training network). In this approach we intended to bridge the gap between user and technology developer (Jalote-Parmar et al., 2007). The measures are twofold: Human Factors specialists are involved to bridge the gap, and direct interaction between users and technology developers also takes place. In both situations the users are involved as co-designers. (Adapted figure from (Freudenthal et al., 2007))

They guided and designed user interfaces and performed ergonomic interventions, including workflow driven design. Clinical users were involved to guide the design work: clinical experts conducted clinical monitoring. Many clinicians were consulted, both from within the ARIS*ER consortium partners and from outside.

This mixture of technology driven (technology components) and user driven methods (application projects) sped up the investigation and development. This approach allowed us to reach animal and patient testing in a limited time. It solves one of the common problems in participatory design, which is that medical users have great difficulty envisioning how a new, not-yet-existing technology will look and behave in actual use. They need presentations to imagine the experience of actual interaction. Actual prototypes, even ones that are faulty in many respects, allow them to experience the interactions. This lets them envision adapted versions they would like and comment on problems with the current proposal. These comments can then be processed by the HF specialists and used in the next versions. The requirement list and the quality of the solutions develop more quickly this way: the more, and the earlier, prototype test rounds can be run, the better.

To conclude: the co-design approach in ARIS*ER included partners from all essential technology component domains, medical users and human factors/design. Both components and integrated demonstrators were developed in parallel. User input came from field studies ‘up front’, but was also based on evaluative studies of demonstrators and prototypes. Since users have problems envisioning the possibilities of novel technologies, prototypes are crucial to gather user feedback and speed up development.

4.3. Human factors studies

Human Factors is a synonym of Ergonomics (HF/Ergonomics). Having its roots traditionally in the “science of work”, today Human Factors (engineering) covers all areas of human life. The most commonly used definition is from the council of the International Ergonomics Association (IEA), the parent organization of the national associations (Definition of Ergonomics, August 2000: http://www.iea.cc/):

“Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and other methods to design in order to optimize human well-being and overall system performance. Ergonomists contribute to the design and evaluation of tasks, jobs, products, environments and systems in order to make them compatible with the needs, abilities and limitations of people.”

The discipline of HF/Ergonomics is divided broadly into three domains: Physical Ergonomics, Cognitive Ergonomics (engineering psychology) and Organizational Ergonomics (macroergonomics), each of which is relevant to product design in the medical domain. Methods are often manifold and not limited to one specific domain, but combine elements of different scientific disciplines. HF/Ergonomics interventions are conducted in order to increase MIS safety and efficiency as well as to reduce workload.

Most technology developers are well aware of the general aim of HF/Ergonomics, which is to meet user/clinical needs and human limitations. Most developers indeed do (occasionally) visit a doctor or have an early demonstrator / prototype evaluated. But the majority still has a limited view on HF/Ergonomics methods, scope and possibilities. The role of the ergonomist in large design and development projects is generally weakly defined. A common mistake in ICT related development and design processes is to limit HF/Ergonomics methods to usability evaluations that are widely and successfully used in practice. Usability research is indeed an important and powerful method, but it focuses only on the conduct of a proper test to evaluate use-related requirements of a product or prototype such as learnability (ease-of-use), efficiency, memorability, errors, and satisfaction in use (Nielsen, 1994; International Standards, 1999). In addition to this HF/Ergonomics methods also aim to generate a thorough understanding of the usage context (Stüdeli & Alexander, 2008) and through this also provide guidance in finding solutions.

In ARIS*ER, HF/Ergonomics was meant to guide the development work in the three applications. Secondly, methodological knowledge on the ergonomics of complex medical systems had to be developed. Two main ergonomic challenges were addressed: (1) how to analyze surgical procedures in a way that supports decisions for the design of supportive technology as well as workflow issues (see section 4.1 on clinical workflow design); (2) how to best introduce ergonomic evaluations and usability tests into the development and design process. The second question cannot be answered in a general or final way. However, our experiences within the ARIS*ER project show some main issues for the ergonomic evaluation of complex medical systems with AR/VR technology:

  • Selection of evaluation criteria. General work characteristics of MIS are demands on information processing and communication (team work), high workload (duration, intensity) as well as high demands in accuracy and prudence. Therefore evaluations focussing on accuracy, cognitive load, fatigue and safety aspects were chosen (Stüdeli et al., 2007).

  • Ergonomic evaluations should already be prepared from the first analysis of the work system (section 4.1). Analyses should not be restricted to the actual procedures but should also consider technological opportunities of the future and potential new workflows. This approach is in line with a common designers' approach of working with ‘dream scenarios’ (Stüdeli & Freudenthal, 2009).

  • Usability testing of prototypes or existing products and ergonomic evaluations can be combined successfully. The prototyping phase, with user studies and task simulations, can then be used, for example, to refine evaluation criteria. Working with (early) prototypes allows the further development of the usability metrics during the development process. The integration of prototyping activities in the evaluation process can concretize and refine know-how on the context-of-use (Stüdeli & Freudenthal, 2009).

  • In order to guide a complex design and development process efficiently, the scope of the ergonomic evaluation has to be handled flexibly. Analyses and interventions are needed both at the level of the work system and at the level of single (but important) interactions between the user and the system.

In ARIS*ER the prime focus was on decision support, but teaching support is equally important. Due to the high demands on fine-motor and cognitive skills, the training of surgeons (users) is also an issue of immense importance. New developments in MIS therefore have to consider aspects of surgical training. For example, the analysis of the perception of surgical interaction forces is of key importance for the design of training systems (i.e. Virtual Reality simulators) and for understanding the development of perceptual surgical skills (Lamata et al., 2008a).

To conclude: HF/Ergonomics bridges different disciplines with different methodological approaches, such as psychology, engineering, design and medicine. Usability research can and should be used in all stages of the design. Thereby knowledge of the work system will increase over time, and evaluation criteria will develop in parallel to the design and development process.

5. Some recent advances

5.1. Video-based tracking of surgical tools

Operating rooms are overloaded with systems and technology, while efficient workflows and use of space are mandatory. Current AR systems rely on additional hardware systems with different limitations, as described in section 2 of this chapter. Here, an alternative to existing systems for tracking surgical tools is described, which is based on an analysis of the surgical video signal.

The challenge of the approach is to extract the 3D position and orientation of a rigid cylindrical tool from the 2D information of the surgical scenario captured by the endoscope. Concretely, the proposed method exploits the model and properties of perspective image analysis applied to the cylindrical shape of the tools, allowing the assessment of the instruments' position and orientation. The proposed approach can be decomposed into two main problems: (1) extraction of relevant 2D information from the image through segmentation techniques, and (2) estimation of the 3D coordinates of the tool with this information. The first step is, more specifically, to segment the contours of the surgical tools and to localise the tip location in the image (see Figure 8.a). Different image processing techniques can be applied to obtain a fast and robust method for near real time applications (Voros et al., 2007). It is also possible to introduce colour markers fixed on the instrument to facilitate the image processing stage (Tonet et al., 2007).

The second step is to determine the 3D tool position through geometrical equations derived from the analysis of the projection of the instrument in the image plane (see Figure 8.b). Segmented tool edges and the camera field of view (FoV) define the tool's 3D orientation, and the tip's 3D position is determined by its 2D image coordinates and the tool's physical diameter (usually 5 or 10 mm for laparoscopic tools).
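
The sketch below illustrates, in a much-simplified form, the core geometric idea: under a pinhole camera model, the apparent width of a cylinder of known diameter constrains its depth, and the tip pixel can then be back-projected to a 3D position. The full method additionally uses the two projected edge planes (Ω1, Ω2) to recover the tool axis orientation; the camera parameters and values below are illustrative assumptions.

```python
import numpy as np

def tip_position_from_image(tip_px, apparent_width_px, tool_diameter_mm,
                            focal_px, principal_point_px):
    """Rough 3D tip position of a cylindrical tool from a single endoscopic image.

    Pinhole model: an object of physical width D imaged with width w pixels at
    focal length f (pixels) lies at depth Z ~= f * D / w. The tip's lateral
    position is obtained by back-projecting its pixel coordinates at that depth.
    (Only position is recovered here, not the tool axis orientation.)
    """
    z = focal_px * tool_diameter_mm / apparent_width_px          # depth (mm)
    u, v = tip_px
    cx, cy = principal_point_px
    x = (u - cx) * z / focal_px                                  # lateral offset (mm)
    y = (v - cy) * z / focal_px
    return np.array([x, y, z])

# Example: a 10 mm instrument imaged 80 px wide by an endoscope with f = 800 px
# lies roughly 100 mm from the optical centre.
tip_3d = tip_position_from_image(tip_px=(400, 300), apparent_width_px=80,
                                 tool_diameter_mm=10.0, focal_px=800.0,
                                 principal_point_px=(320, 240))
```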

Figure 8.

Video-based tracking of surgical tools (Cano et al., 2008). (a) Image processing for extracting 2D information. (b) Model of the laparoscopic tool used to calculate the 3D position and orientation of the tools; C: optical center; P: tip of the tool; N: point of the cylinder axis whose projective line is perpendicular to this axis; Ω1, Ω2: planes of sight of the tool edges; Π plane: image plane; E1, E2: projective tool edges.

The current tracking performance of the proposed method is sufficient for gesture analysis and an objective evaluation of surgical manoeuvres, with an accuracy of around 3 mm (Cano et al., 2008). The main benefit of this approach is the absence of extra elements that disturb the surgeon's performance in the clinical routine, reducing the complexity and cost of physical tracking instruments.

5.2. Advanced guidance of radio frequency ablation needles

Radio-frequency ablation (RFA) has become an important minimally invasive treatment for liver cancers. RFA can be performed on inoperable hepatic tumors, both primary and metastatic. The procedure is performed under conventional ultrasound image guidance which leaves the interventionist with the challenging task of correlating pre-operative findings with intra-operative imaging while navigating the RFA probe to the target tumor. This problem is tackled with a computer aided surgery system called ARIS*ER RFA (Ali et al., 2009), which supports a complete interventional workflow including pre-operative, intra-operative and post-operative visualization and fusion of different imaging modalities for guidance of RFA interventions.

The system can perform complex image data visualization and fusion, virtual navigation, image and RFA needle calibration, and registration for inserting the RFA probe as accurately as possible in a single workflow during percutaneous ablation of liver tumors (see Figure 9 for different example views supported by the ARIS*ER RFA system). The ARIS*ER RFA system was developed using the Studierstube Medical framework (Kalkofen & Schmalstieg, 2006), which is useful for developing both Virtual Reality (VR) and Augmented Reality (AR) applications for medical procedures due to its flexible and modular design.

Within the development of the ARIS*ER RFA system, two directions of interfaces were explored, and both were subject to user testing. The first was explicitly based on the WIM (section 4.1): three flat screens (US, CT and, in the middle, a fusion of the two) (Jalote-Parmar et al., 2009). The second was based on the WIM and a cognitive model of (needle) navigation strategies: a 2D screen with three needle-related slices and a 3D scene, as seen in Figure 9.b (Stüdeli et al., 2008), but without image fusion.

Figure 9.

Example views of the capabilities of the ARIS*ER RFA system. a) First demonstrator (Kalkofen et al., 2007), b) Visualization concept 2D/3D (Stüdeli et al., 2008), c) Visualization concept HMD (Kalkofen et al., 2009), d) Fusion of US and 3D model generated from CT (Jalote-Parmar et al., 2009).

The developed system also introduces a novel Collision Detection (CD) feature in order to avoid hitting major vessels and vital structures (e.g. ribs) with the RFA probe (Morvan et al., 2008). It also integrates an intra-operative image-to-tracker-space registration, as well as a real-time tracked ultrasound (US) video texture for data fusion and for ultrasound augmentation. Although the framework is specifically made to support percutaneous RFA interventions, it could also be used for other needle biopsy procedures. The developed system could also be used as a surgical simulator to train novice interventionists on abdominal phantoms. In addition, the ARIS*ER RFA system is also capable of supporting augmented ultrasound, so that the interventional radiologist can easily find a target tumor and the augmented tip of the RFA needle in an ultrasound plane.
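
The essence of such a collision detection feature can be illustrated by checking the planned straight needle path against the critical structures. The published CD feature works on the segmented patient models; the sketch below only conveys the idea, approximating vessels and ribs by spheres, with all values and names being illustrative assumptions.

```python
import numpy as np

def segment_point_distance(a, b, p):
    """Shortest distance from point p to the segment a-b (all (3,) arrays, mm)."""
    a, b, p = map(np.asarray, (a, b, p))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def needle_path_collides(entry, target, structures, safety_margin_mm=2.0):
    """Check a straight needle path against critical structures.

    structures: list of (centre_xyz, radius_mm) spheres approximating vessels
    or ribs. Returns True if the path comes closer than the safety margin.
    """
    for centre, radius in structures:
        if segment_point_distance(entry, target, centre) < radius + safety_margin_mm:
            return True
    return False

# Example: a vessel of radius 3 mm lying 4 mm from the planned path -> warning.
vessels = [((4.0, 0.0, 50.0), 3.0)]
warn = needle_path_collides(entry=(0, 0, 0), target=(0, 0, 100), structures=vessels)
```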

5.3. Endoclamp positioning system

In minimally invasive mitral valve surgery the heart has to be stopped and the aorta has to be sealed (clamped) to isolate the heart from the rest of the circulation. Unlike in the open chest procedure, the aorta cannot be clamped from the outside. This can be done with an endoclamp (Edwards Lifesciences, Irvine, USA), a catheter with an inflatable balloon at its tip. Once inflated in the aortic arch, the balloon provides the required sealing (Grossi et al., 2000).

This technique (the Port-Access™ –PA– technique) is used nowadays as a standard procedure in several hospitals worldwide, but it presents two main difficult steps: initial placement, and monitoring of balloon migrations. Initial placement is normally done using Trans-Esophageal Echography (TEE) as visual guidance with good results, but it is a hard task with a long learning curve, mainly due to difficulties in maneuvering (it is difficult to control the balloon while there is still blood flow) and visualizing the balloon (with TEE it is only possible to see the balloon on a very small section of the artery) (Gulielmos et al., 1998; Aybek et al., 2000). Monitoring the balloon position during the surgery is an even harder challenge, as TEE is unusable (there is air inside the heart and abdomen), so it is done indirectly through comparison of arterial pressures. Monitoring is extremely important, as migration can damage the aortic valve or the central nervous system (even resulting in death). Better monitoring and control of the balloon position is needed to provide a safe and uncomplicated sealing of the aorta in this type of surgery.

To cope with these difficulties, a combined information and positioning system was developed within ARIS*ER, which is based on augmented reality technology and robotics (Furtado et al., 2010). The system was designed specifically for minimally invasive cardiac surgery. It provides constant, real-time monitoring of balloon position during the entire procedure, automatic position control to a specified target and automatic balloon pressure control. We believe that such a system helps overcome some of the most important difficulties in the PA technique and has the potential to make it the technique of choice for minimally invasive cardiac surgery. The system was developed using user centred design techniques where surgeons, engineers and human factor specialists were involved in all the development phases: concept, design, implementation and testing (Stüdeli & Freudenthal, 2009), as described in section 4.

Figure 10.

Endoclamp positioning system developed in ARIS*ER (Furtado et al., 2010). a) User interface of the system with explanation of the key concepts and b) a close-up of the target region (right).

The balloon position is measured in real time with an electromagnetic (EM) tracking system where the EM sensor coil is placed inside the balloon. Using this measurement, a model of the balloon is superimposed on a 3D rendered model of the patient's thorax, showing its actual position inside the vessel at all times. Figure 10 shows the complete user interface, with two views of the anatomy and position of the balloon following the EM measurements. Using the buttons, the user can define a target (green lines and yellow plane) and the tolerance (red lines) for the balloon position. When control is on, the system places the balloon automatically in the target but the user also has the option to do everything manually. In this application, the deformation of the structures of interest is small, thus, a point-based rigid registration algorithm is enough to align tracking data with the rendered model.

A robotic inserter was custom designed to push and pull the catheter inside the vessel (see Figure 11.b). As explained before, the user defines a target position and the robotic catheter inserter places and maintains the balloon at this position, based on the EM sensor measurements. If a migration occurs, the inserter automatically repositions the balloon at the target location. The pressure inside the balloon is also automatically controlled towards a defined target pressure, using pressure sensor measurements and estimations that take into account the dynamics of the catheter-balloon-aorta system (Sette et al., 2009).
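The toy loop below illustrates, under strong simplifying assumptions, the kind of closed-loop repositioning just described: whenever the EM-measured position drifts outside the tolerance band, a proportional correction is applied through the inserter. The gain, rates and migration model are invented for illustration only; the actual controllers (Sette et al., 2009) account for the full dynamics of the catheter-balloon-aorta system and additionally regulate the balloon pressure.

```python
import random

# Minimal sketch of proportional position keeping for the endoclamp balloon.
# All numbers and names are illustrative assumptions, not the ARIS*ER controller.
KP = 0.5              # fraction of the measured error corrected per control cycle
TARGET_MM = 0.0       # desired axial position relative to the target plane
TOLERANCE_MM = 2.0    # tolerance band (the red lines on the user interface)

position_mm = 0.0     # simulated axial balloon position along the aorta
for step in range(200):                            # ~10 s at a 20 Hz control rate
    position_mm += random.gauss(0.0, 0.3)          # migration due to flow / heart motion
    error_mm = TARGET_MM - position_mm             # read from the EM sensor in the balloon
    if abs(error_mm) > TOLERANCE_MM:               # only intervene outside the tolerance
        position_mm += KP * error_mm               # inserter pushes or pulls the catheter

print(f"final deviation from target: {position_mm:+.1f} mm")
```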

Figure 11.

a) Endoclamp balloon and b) catheter driver (Sette et al., 2009).

So far, the system has been tested several times in different scenarios. The first group of tests was performed on a silicon phantom with correct aortic anatomy (Figure 12.a). The users performed several insertions with three types of visual support: looking directly inside the aorta, using the 3D view of the system, and using a simulation of the view the surgeons currently have (simulated TEE). The users placed the catheter faster and more accurately with the 3D view than with the simulated TEE view, and the results were comparable to those obtained by looking directly inside the transparent aorta of the phantom. This shows that the system gives back to the users the visual information they lose by performing the surgery with the chest closed. Experiments on two animals (pigs) were also performed, with the purpose of simulating a normal surgical workflow using the system in close-to-real conditions (Figure 12.c). The system could effectively be used to guide the insertion of the catheter even in such a rough setting.

The proposed system presents clear benefits over the current situation, in which balloon position management is done with poor visual support. It provides a clear and intuitive notion of the balloon's position and corrects positioning errors automatically. This eases strenuous monitoring tasks and catheter handling, and reduces interruptions of the surgeon's work rhythm during the actual surgery at the heart. More extensive tests will be performed, but for the time being we believe that this application has the potential to make the technique safer and simpler, reducing the learning curve for surgical teams and effectively increasing the safety of the procedure.

Figure 12.

User study subjects and datasets obtained: a, b) silicon phantom; c, d) pig.

5.4. The Resection Map for guidance in hepatectomies

Surgical demand exists for computer-aided surgical systems in both open and laparoscopic liver resections. Surgeons expect orientation and visualization support during operations that allows for a more accurate and secure execution of the planned operation, especially in non-anatomical resections. The Resection Map (see Figure 13) is a solution that addresses this clinical need for intraoperative navigation towards safer liver resections. It is an interactive 3D cartography of the patient's anatomy, a system for simplified and effective visualization of the critical structures and of the path that has been preoperatively planned for the resection.

The concept of the Resection Map is somewhat similar to the use of a navigation system while driving a car, but without positioning information, that is, without knowing the corresponding location of the tools on the map. Our strategy is to harness the rich preoperative planning information during the surgical procedure through an intuitive cartography, and without the need for any additional hardware or equipment. The system thus relies on the surgeon's capacity to perform a mental alignment between the Resection Map and the operating field. A detailed description of the design process and concept of this system can be found in (Lamata et al., 2008b).

Its integration in the Operating Room is seamless, and preliminary results show a perceived increase in the safety and confidence of the surgeon (Lamata et al., 2009a). We believe that the Resection Map could be very helpful for the education of inexperienced liver surgeons, for the adoption of a laparoscopic approach, for an easier implementation of a living donor programme, and for the complex cases of an experienced surgeon. The tool could even substitute for some of the uses of intraoperative ultrasound (US), such as the identification of the key vessels that are going to be cut in the resection. Nevertheless, this imaging modality will still be required for the verification of the position and size (possible growth) of known tumours and for the identification of new ones.

Research efforts are also being directed towards the registration (alignment) of the virtual reconstruction of the liver with the patient's anatomy under the laparoscopic view. The objective is to do so without any additional hardware, as done for the localization of tools (see section 5.1), by exploiting the video information. Promising preliminary results are presented in (Lamata et al., 2009b).
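As a rough illustration of the idea, the sketch below builds the kind of photometric cost such video-based registration could minimize: the preoperative liver surface is re-posed, projected into the laparoscopic frame, and its predicted Lambertian shading is compared with the image intensities found at the projected pixels. The camera and lighting models, the synthetic data and all names are assumptions for illustration, not the algorithm of (Lamata et al., 2009b).

```python
import numpy as np

# Toy photometric cost for shading-/video-based rigid registration.
FX = FY = 500.0
CX, CY = 320.0, 240.0                      # assumed pinhole intrinsics (pixels)
W, H = 640, 480                            # assumed laparoscopic frame size
VIEW_DIR = np.array([0.0, 0.0, 1.0])       # scope looks along +z; light co-located

def project_and_shade(R, t, pts, normals):
    """Rigidly pose the model, project to pixels, predict Lambertian shading."""
    p = pts @ R.T + t
    n = normals @ R.T
    u = FX * p[:, 0] / p[:, 2] + CX
    v = FY * p[:, 1] / p[:, 2] + CY
    shade = np.clip(-(n @ VIEW_DIR), 0.0, 1.0)   # brighter where the normal faces the scope
    return u, v, shade

def photometric_cost(R, t, pts, normals, frame):
    """Squared difference between predicted shading and the frame intensities."""
    u, v, shade = project_and_shade(R, t, pts, normals)
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    ok = (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
    return np.sum((frame[vi[ok], ui[ok]] - shade[ok]) ** 2)

# Synthetic surface patch and a synthetic frame rendered at the "true" pose.
rng = np.random.default_rng(1)
pts = rng.uniform([-30, -30, 80], [30, 30, 120], size=(500, 3))    # mm, in front of scope
normals = -pts / np.linalg.norm(pts, axis=1, keepdims=True)        # roughly camera-facing
R_true, t_true = np.eye(3), np.array([2.0, -1.0, 0.0])
u, v, s = project_and_shade(R_true, t_true, pts, normals)
frame = np.zeros((H, W))
frame[np.round(v).astype(int), np.round(u).astype(int)] = s        # toy rendering

print("cost at true pose   :", photometric_cost(R_true, t_true, pts, normals, frame))
print("cost at shifted pose:", photometric_cost(R_true, t_true + [5, 5, 0], pts, normals, frame))
```

Minimizing such a cost over the six rigid-pose parameters (for instance with a gradient-free optimizer) would align the virtual liver with the laparoscopic view; the true pose scores lower than a perturbed one in the toy example above.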

Figure 13.

Three liver resection interventions assisted by the Resection Map. (a) Close detailed view of the Map, (b) laparoscopic and (c) open procedures. (Lamata et al., 2009a).


6. Conclusions

Surgery is evolving towards a safer minimally invasive approach, driven by different technological advances. Augmented reality systems are gradually being adopted by surgeons to support their orientation and improve their accuracy, as exemplified by the Resection Map for the support of hepatectomies. There are also several AR prototypes, such as the Endoclamp Positioning System and the ARIS*ER RFA system, with great potential to reduce errors and increase safety in MIS heart clamping and in needle ablations/biopsies, respectively. The development and adoption of AR technologies is one of the main drivers in today's surgical revolution.

Research efforts in this field are moving in several directions. One crucial aspect is the reduction of the technological burden in the OR, with solutions such as the tracking of surgical tools based on video analysis. Another is to find the scientific and technological grounds to provide haptic and tactile feedback in robotic systems. One of the most difficult challenges is to solve the problem of organ deformation and shift during soft tissue surgery. Last but not least, it is necessary to highlight the importance in this field of research of a fluent and coordinated multidisciplinary dialogue and effort within the R&D team. User centred design techniques, in which surgeons, engineers and human factor specialists are involved in all the development phases (concept, design, implementation and testing), are strongly advised.

References

  1. Ali W., Morvan T., Risholm P. et al. (2009). A visualization and fusion system for image guided RFA procedures. Int J Image & Graphics, (in press).
  2. Aybek T., Dogan S., Wimmer-Greinecker G. et al. (2000). The micro-mitral operation comparing the port-access technique and the transthoracic clamp technique. Journal of Cardiac Surgery, 15(1), 76-81.
  3. Azuma R., Baillot Y., Behringer R. et al. (2001). Recent advances in augmented reality. IEEE Computer Graphics and Applications, 21(6), 34-47, ISSN 0272-1716.
  4. Baumhauer M., Feuerstein M., Meinzer H. P. et al. (2008). Navigation in endoscopic soft tissue surgery: Perspectives and limitations. Journal of Endourology, 22(4), 751-766, ISSN 0892-7790.
  5. Beller S., Hünerbein M., Lange T. et al. (2007). Image-guided surgery of liver metastases by three-dimensional ultrasound-based optoelectronic navigation. British Journal of Surgery, 94(7), 866-875, ISSN 0007-1323.
  6. Bettini A., Marayong P., Lang S. et al. (2004). Vision-Assisted Control for Manipulation Using Virtual Fixtures. IEEE Transactions on Robotics, 20(6), 953-966.
  7. Cakmakci O., Rolland J. (2006). Head-Worn Displays: A Review. J. Display Technol., 2(3), 199-216.
  8. Cano A., Gayá F., Lamata P. et al. (2008). Laparoscopic Tool Tracking Method for Augmented Reality Surgical Applications. Lecture Notes in Computer Science, 5104, 191-196.
  9. Cash D. M., Miga M. I., Glasgow S. C. et al. (2007). Concepts and preliminary data toward the realization of image-guided liver surgery. J Gastrointest. Surg., 11(7), 844-859.
  10. Cleary K., Kinsella A., Mun S. K. (2005). OR 2020 workshop report: Operating room of the future. International Congress Series, 1281, 832-838.
  11. Cornellà J., Elle O. J., Ali W. et al. (2008). Intraoperative Navigation of an Optically Tracked Surgical Robot. Lecture Notes in Computer Science, 5242, 587-594.
  12. Davison A. J., Reid I. D., Molton N. D. et al. (2007). MonoSLAM: Real-time single camera SLAM. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6), 1052-1067, ISSN 0162-8828.
  13. Diodato M. D., Prosad S. M., Klingensmith M. E. et al. (2004). Robotics in surgery. Current Problems in Surgery, 41, 752-810.
  14. Doussis E. (1993). An Ultrasonic Position Detecting System for Motion Tracking in Three Dimensions. Tulane University.
  15. El-Serag H. B., Rudolph K. L. (2007). Hepatocellular carcinoma: epidemiology and molecular carcinogenesis. Gastroenterology, 132(7), 2557-2576.
  16. Ewers R., Schicho K., Undt G. et al. (2005). Basic research and 12 years of clinical experience in computer-assisted navigation technology: a review. Int J Oral Maxillofac Surg, 34(1), 1-8, ISSN 0901-5027 (Print).
  17. Finlay P. A., Morgan P. (2003). Pathfinder Image Guided Robot for Neurosurgery. Industrial Robots: An International Journal, 30(1), 30-34.
  18. Freudenthal A., Pattynama P. M. T., Samset E. et al. (2007). Technology assessment as guidance in product development for real time radiology based information for the surgeon. In: Novel Technologies for Minimally Invasive Therapies, S. Casciaro and E. Samset (Eds.), 180-188, Lupiensis Biomedical Publications, ISBN 978-88-902880-0-5, Lecce, Italy.
  19. Furtado H., Stüdeli T., Sette M. (2010). Endoclamp balloon visualization and automatic placement system. Proc. of SPIE Medical Imaging, San Diego, CA, USA.
  20. Gaponov I., Ryu J. H., Choi S. J. et al. (2008). Telerobotic system for cell manipulation. Proc. of IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics (AIM), 165-169, Xi'an, China.
  21. Goethals P., Sette M. M., Reynaerts D. et al. (2008). Flexible elastoresistive tactile sensor for minimally invasive surgery. Haptics: Perception, Devices and Scenarios, Proceedings, 5024, 573-579, ISSN 0302-9743.
  22. Grossi E., Ribakove G., Galloway A. et al. (2000). Minimally invasive mitral valve surgery with endovascular balloon technique. Operative Techniques in Thoracic and Cardiovascular Surgery, 5, 176-189.
  23. Gulielmos V., Dangel M., Solowjowa N. et al. (1998). Clinical experiences with minimally invasive mitral valve surgery using a simplified port access (TM) technique. European Journal of Cardio-thoracic Surgery, 14(2), 141-147.
  24. Hacksel P. J., Salcudean S. E. (1994). Estimation of environment forces and rigid-body velocities using observers. Proc. IEEE Int. Conf. on Robotics and Automation, 931-936, San Diego, CA, USA.
  25. Ho S. C., Hibberd R. D., Davies B. L. (1995). Robot Assisted Knee Surgery. IEEE Engineering in Medicine and Biology, 14(3), 292-300.
  26. International Standards Organization (1999). ISO 13407: Human Centred Design Process for Interactive Systems.
  27. Iordachita I., Kapoor A., Mitchell B. et al. (2006). Steady-Hand Manipulator for Retinal Surgery. MICCAI Medical Robotics Workshop, 66-73, Copenhagen, Denmark.
  28. Jalote-Parmar A., Badke-Schaub P., Ali W. et al. (2009). Cognitive processes as integrative component for developing expert decision-making systems: A workflow centered framework. Journal of Biomedical Informatics, (in press).
  29. Jalote-Parmar A., Pattynama P. M. T., Goossens R. H. M. et al. (2007). Bridging the gap: A design approach to developing technological solutions to support minimally invasive surgeries. In: Novel Technologies for Minimally Invasive Therapies, S. Casciaro and E. Samset (Eds.), 93-101, Lupiensis Biomedical Publications, Lecce, Italy.
  30. Johansson R. S., Westling G., Backstrom A. et al. (2001). Eye-Hand Coordination in Object Manipulation. J. Neurosci., 21(17), 6917-6932, ISSN 1529-2401.
  31. Kalkofen D., Mendez E., Schmalstieg D. (2009). Comprehensible visualization for augmented reality. IEEE Transactions on Visualization and Computer Graphics, 15(2), 193-204.
  32. Kalkofen D., Milko S., Massoptier L. et al. (2007). "ARIS*ER Demo Radio Frequency Ablation." Available from: http://www.ariser.info/projects/rfa_demo.php
  33. Kalkofen D., Schmalstieg D. (2006). An Augmented Reality Framework for Medical Interventions. In: Minimally Invasive Therapies & Novel Embedded Technology Systems, ARIS*ER Summer School Textbook.
  34. Karas C., Chiocca E. A. (2007). Neurosurgical robotics: a review of brain and spine applications. Journal of Robotic Surgery, 1, 39-43.
  35. Kazanzides P., Mittelstadt B. D., Musits B. L. et al. (1995). An integrated system for cementless hip replacement. IEEE Transactions on Biomedical Engineering, 14(3), 307-313.
  36. Kennedy C. W., Hu T., Desai J. P. (2002). Combining haptic and visual servoing for cardiothoracic surgery. Proc. IEEE Int. Conf. on Robotics and Automation (ICRA'02), 2106-2111, ISBN 0-7803-7272-7, Washington, DC, USA.
  37. Krupa A., Gangloff J., Doignon C. et al. (2003). Autonomous 3-D Positioning of Surgical Instruments in Robotized Laparoscopic Surgery Using Visual Servoing. IEEE Transactions on Robotics and Automation, 19(5), 842-853.
  38. Kwon D. S., Yoon Y. S., Lee J. J. et al. (2001). ARTHROBOT: a new surgical robot system for total hip arthroplasty. Proc. of IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 1123-1128, ISBN 0-7803-6612-3, Maui, HI, USA.
  39. Lamata P., Gomez E. J., Lamata F. et al. (2008a). Understanding perceptual boundaries in laparoscopic surgery. IEEE Trans. Biomed. Eng., 55(3), 866-873.
  40. Lamata P., Jalote-Parmar A., Lamata F. et al. (2008b). The Resection Map, a proposal for intraoperative hepatectomy guidance. Int. J. of CARS, 3(3), 299-306.
  41. Lamata P., Lamata F., Sojar V. et al. (2009a). Use of the Resection Map as guidance during hepatectomy. Surg. Endosc., (in press).
  42. Lamata P., Morvan T., Reimers M. et al. (2009b). Addressing shading-based laparoscopic registration. Proc. Medical Physics and Biomedical Engineering World Congress 2009, (in press), Munich, Germany.
  43. Loser M. H., Navab N. (2000). A New Robotic System for Visually Controlled Percutaneous Interventions under CT Fluoroscopy. Proc. MICCAI, 887-896.
  44. Maier-Hein L., Tekbas A., Seitel A. et al. (2008). In vivo accuracy assessment of a needle-based navigation system for CT-guided radiofrequency ablation of the liver. Medical Physics, 35(12), 5385-5396, ISSN 0094-2405.
  45. Massoptier L., Casciaro S. (2008). A new fully automatic and robust algorithm for fast segmentation of liver tissue and tumors from CT scans. European Radiology, 18(8), 1658-1665.
  46. Milgram P., Kishino F. (1994). Taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, E77-D(12), 1321-1329, ISSN 0916-8532.
  47. Morvan T., Reimers M., Samset E. (2008). High Performance GPU-based Proximity Queries using Distance Fields. Computer Graphics Forum, 27(8), 2040-2052, ISSN 1467-8659.
  48. Naerum E., Cornellà J., Elle O. J. (2008). Contact force estimation for backdrivable robotic manipulators with coupled friction. Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2008), 3021-3027, ISBN 978-1-42442-057-5, Nice, France.
  49. Neumuth T., Durstewitz N., Fischer M. et al. (2006). Structured recording of intraoperative surgical workflows. Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 61450A.
  50. Nielsen J. (1994). Usability Engineering. Morgan Kaufmann, San Francisco, USA.
  51. Nuño E., Basañez L. (2006). Haptic guidance with force feedback to assist teleoperation systems via high speed networks. IFR/DGR International Symposium on Robotics, 213.
  52. Ortmaier T., Weiss H., Hagn U. et al. (2006). A Hands-On-Robot for Accurate Placement of Pedicle Screws. Proc. IEEE Int. Conf. on Robotics and Automation (ICRA 2006), 4179-4186, ISBN 0-7803-9505-0.
  53. Payandeh S., Stanisic Z. (2002). On Application of Virtual Fixtures as an Aid for Telemanipulation and Training. Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 18-23.
  54. Peeters K., Sette M., Goethals P. et al. (2008). Design considerations for lateral skin stretch and perpendicular indentation displays to be used in minimally invasive surgery. Haptics: Perception, Devices and Scenarios, Proceedings, 5024, 325-330, ISSN 0302-9743.
  55. Picod G., Jambon A. C., Vinatier D. et al. (2005). What can the operator actually feel when performing a laparoscopy? Surgical Endoscopy and Other Interventional Techniques, 19(1), 95-100, ISSN 0930-2794.
  56. Raskar R., Welch G., Low K. L. et al. (2001). Shader Lamps: Animating Real Objects With Image-Based Illumination. Eurographics Workshop on Rendering, 89-102.
  57. Rosenbaum D. A. (1990). Human Motor Control. Academic Press.
  58. Rosenberg L. B. (1995). The use of Virtual Fixtures to Enhance Operator Performance in Time Delayed Teleoperation. Armstrong Laboratory.
  59. Sayers C., Paul R. (1994). An Operator Interface for Teleprogramming Employing Synthetic Fixtures. Presence, 3(4), 309-320.
  60. Schneider O., Troccaz J. (2001). A six-degree-of-freedom passive arm with dynamic constraints (PADyC) for Cardiac Surgery application: preliminary experiments. Computer Aided Surgery, 6, 340-351.
  61. Sette M., D'hooge J., Langeland S. et al. (2007). Tactile feedback in minimally invasive procedures using an elastography-based method. Int. J. CARS, S504.
  62. Sette M., Furtado H., Stüdeli T. et al. (2009). Physiological Parameters Based Control Scheme For Automatic Intravascular Balloon Inflation. Proc. Medical Physics and Biomedical Engineering World Congress 2009, (in press), Munich, Germany.
  63. Smith A. C., Mobasser F., Hashtrudi-Zaad K. (2006). Neural-network-based contact force observers for haptic applications. IEEE Trans. Robot., 22(6), 1163-1175.
  64. Stüdeli T., Alexander T. (2008). Actual ergonomic research in applied virtual and mixed reality systems, with a special focus on navigation and control aids in 3D. In: Minimally Invasive Techs. & Nanosystems for Diagnosis & Therapies, S. Casciaro and E. Samset (Eds.), 1-11, Lupiensis Biomedical Publications, ISBN 978-88-902880-2-9, Lecce, Italy.
  65. Stüdeli T., Freudenthal A. (2009). Defining the role of the Ergonomist in the development of medical mixed reality systems: aspects to be considered during the development of enabling technology in the operation theater. 17th World Congress on Ergonomics (IEA 2009), Beijing, China.
  66. Stüdeli T., Freudenthal A., de Ridder H. (2007). Evaluation Framework of Ergonomic Requirements for Iterative Design Development of Computer Systems and their User Interfaces for Minimally Invasive Therapy. 8th International Conference on Work With Computing Systems (WWCS 2007), Stockholm, Sweden.
  67. Stüdeli T., Kalkofen D., Risholm P. et al. (2008). Visualization tool for improved accuracy in needle placement during percutaneous radio-frequency ablation of liver tumors, art. 69180B. Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling, Pts 1 and 2, Vol. 6918, ISSN 0277-786X.
  68. Teber D., Baumhauer M., Guven E. O. et al. (2009a). Robotic and imaging in urological surgery. Current Opinion in Urology, 19(1), 108-113, ISSN 0963-0643.
  69. Teber D., Guven S., Simpfendorfer T. et al. (2009b). Augmented Reality: A New Tool To Improve Surgical Accuracy during Laparoscopic Partial Nephrectomy? Preliminary In Vitro and In Vivo Results. European Urology, 56(2), 332-338, ISSN 0302-2838.
  70. Tonet O., Thoranaghatte R. U., Megali G. et al. (2007). Tracking endoscopic instruments without a localizer: A shape-analysis-based approach. Computer Aided Surgery, 12(1), 35-42, ISSN 1092-9088.
  71. Turro N., Khatib O., Coste-Maniere E. (2001). Haptically augmented teleoperation. Proc. IEEE Int. Conf. on Robotics and Automation, 386-392, ISSN 1050-4729.
  72. Voros S., Long J. A., Cinquin P. (2007). Automatic detection of instruments in laparoscopic images: A first step towards high-level command of robotic endoscopic holders. International Journal of Robotics Research, 26(11-12), 1173-1190, ISSN 0278-3649.

Notes

  • European Marie Curie Research Training Network for Augmented Reality in Minimally Invasive Surgery. www.ariser.info.
  • IEA is the parent organization of the national associations. Definition of Ergonomics (August 2000): http://www.iea.cc/
