Open access peer-reviewed chapter - ONLINE FIRST

Augmented Reality in Kidney Cancer

By Keshav Shree Mudgal and Neelanjan Das

Submitted: June 29th 2018. Reviewed: October 5th 2018. Published: December 31st 2018.

DOI: 10.5772/intechopen.81890


Abstract

Augmented reality (AR) is the concept of a digitally created perception that enhances components of the real world to allow better engagement with it. Within healthcare, there has been a recent expansion of AR solutions, especially in the field of surgery. Traditional renal cancer surgery has been largely replaced by minimally invasive laparoscopic (or robotic) partial nephrectomies. This has meant the loss of certain intra-operative experiences, such as haptic feedback, which AR can compensate for with enhanced visual and patient-specific feedback. The kidney is a dynamic organ, and current AR development has revolved around specific surgical stages such as safe arterial clamping and perfecting tumour margins. This chapter discusses the current state of AR technology in these areas, with key attention to the aspects of image registration, organ tracking, tissue deformation and live imaging. The chapter then discusses limitations of AR, such as inattentional blindness and depth perception, and provides potential future ideas and solutions. These include inventions such as AR headsets and 3D-printed renal models (with the possibility of remote surgical intervention). AR promises a very positive future for truly minimally invasive renal surgery. However, current AR needs validation, cost evaluation and thorough planning before it can be safely integrated into everyday surgical practice.

Keywords

  • augmented reality
  • AR
  • nephrectomies
  • partial nephrectomies
  • image registration
  • surface registration
  • organ tracking
  • tissue deformation
  • renal artery clamping
  • safe selective arterial clamping
  • precise tumour margin
  • live imaging
  • virtual reality
  • AR headset
  • 3D printing

1. What is augmented reality?

The term “augmented reality” was coined by the Boeing researcher Thomas Caudell in 1990 to describe a projection of digital graphics onto a physical working space for use by aircraft engineers [1]. Augmented reality (AR) has come a long way since; however, the fundamental idea remains the same: working in a real-world environment whose components are enhanced by a digitally created perception. This perception can span multiple sensory inputs, including the visual, auditory, haptic, somatosensory and olfactory senses.

In healthcare, AR progression has involved a wide range of medical areas—from aiding clinic appointments through easy access to electronic health records and appointment times, to wearable glasses that help teach life skills to children on the autism spectrum [2]. However, the biggest expansion of AR is seen in enhancing surgical procedures. Project DR is one such development: 3D reconstructions of the patient’s internal organs are projected onto the patient’s skin, allowing a constant view of the person’s anatomy that moves with the patient in real time. This is achieved by the amalgamation of CT/MRI imaging, motion-sensing infra-red sensors and projectors, all working as one unit [3]. Another example is the use of Microsoft’s HoloLens glasses for trauma and plastic surgeries (Imperial College London and St Mary’s Hospital). The “hologram” (made from pre-operative imaging), viewed through the lens of the glasses, projects onto the patient’s skin and creates a “mixed reality” that lets the surgeon track the pathways of the various blood vessels and bones to be operated upon [4]. This technology promises to let surgeons carefully plan and execute breast reconstruction surgeries in the future. Google Glass is another extensively used example of AR. The glasses provide an augmented field of view, and surgeons have used them for purposes ranging from navigation tools and ultrasound image display to remote videoconferencing for intra-operative communication [5].

2. Augmented reality vs. virtual reality

Whilst augmented reality is technology that overlays digital content on the reality that already exists around us, virtual reality (VR) is a complete replacement of the real world with a simulated one. This means that AR allows us to interact and work with the real world whilst receiving enhanced input from an informed digital world.

VR has been shown to be a great teaching tool. Moglia et al. [6] found that subjects trained on virtual simulators performed better than a control group using conventional methods. An example is the UroTrainer, a validated simulator for teaching transurethral resection of bladder tumours [7]. VR has shown promise not only in surgery, but also in other areas, such as the simulation of shock trauma centres (where surgeons can be trained in high-pressure environments) [8] and the Virtual Environment for Radiotherapy Training (VERT), a system that has also been used to reduce anxiety in breast cancer patients [9].

3. VR in kidney cancer

Within renal cancer, a VR system developed by Rai et al. enhances the novice’s ability to localise renal tumour margins [10]. Specific to nephrectomies, Makiyama et al. [11] have developed a VR “rehearsal” simulator for surgeons that plans for anatomical abnormalities and incorporates haptic feedback for pre-operative training. Ueno et al. have developed VR addressing another aspect of nephrectomies—reducing postoperative urine leakage by predicting open urinary tracts on preoperative 3D CT reconstructions [12]. Whereas VR may be a great teaching tool for simulated nephrectomies, AR is a platform for the practice of medicine and surgery on real people.

4. Augmented reality in renal cancer

In renal cancer surgery, open nephrectomies have, for the most part, been replaced by minimally invasive laparoscopic surgeries. This has led to many positive outcomes, including decreased intra-operative blood loss and shorter hospital stays [13]. Furthermore, partial nephrectomies have shown an overall improved survival over radical nephrectomies [14], and this has been made possible by crucial developments in laparoscopic and robotic-assisted surgery [15]. However, there have also been some drawbacks. One of these is the loss of the haptic feedback that would usually allow the surgeon to manoeuvre intra-operatively and make instinctive decisions. AR can compensate for the loss of this feedback by enhancing another sense: vision.

AR systems exist that allow surgeons to see detailed anatomical structures on the surface of the organ by projecting pre-operative CT/MRI images onto a live laparoscopic video. This allows a view of the patient-unique renal anatomy, its neighbouring structures and its relation to the rest of the intra-abdominal anatomy [16]. Having this added information can aid the surgeon in planning and executing an accurate and precise partial or total nephrectomy. Exact areas to be incised can be planned, and damage to nearby delicate structures such as the renal vasculature and ureters can be reduced. AR can also reduce excision margins—sparing as many well-functioning nephrons as possible and reducing the risk and progression of chronic renal insufficiency [17] (Figure 1).

Figure 1.

An illustration showing the basic components involved in AR. “The basic method is to superimpose a computer-generated image on real-world imagery captured by a camera and display the combination of these on a computer, tablet PC or a video projector. The main advantage of AR is that the surgeon is not forced to look away from the surgical site, as opposed to common visualisation techniques.” Adapted from: ‘Recent Development of Augmented Reality in Surgery: A Review’ [20].

For an AR system to be ideal, the full length of a surgical procedure needs to be “augmented.” This requires three essential features (as described in a recent review by Hughes-Hallett et al. [18]): image registration, organ tracking and adaptation to intra-operative tissue deformation. In the following sections, we describe these aspects of AR in detail, specific to nephrectomies.

5. Image registration

This is the process whereby medical imaging is aligned with the patient’s anatomy to form a visually projected overlay. This can be done through various methods, but all fundamentally rely upon multiple data points being processed to align the medical imaging with the best corresponding surface landmarks on the organ or the patient’s anatomy.
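To make the landmark-alignment step concrete, below is a minimal sketch of rigid point-based registration using the standard Kabsch/SVD method (numpy only; the landmark coordinates and the simulated organ pose are hypothetical, and real systems add refinements such as weighting and outlier rejection):

```python
import numpy as np

def rigid_register(image_pts, organ_pts):
    """Least-squares rigid alignment (Kabsch/SVD) of paired 3-D landmarks.
    image_pts, organ_pts: N x 3 arrays whose rows correspond, e.g.
    pre-operative CT landmarks and the same landmarks located on the
    exposed kidney. Returns R, t mapping image space into organ space."""
    mu_i, mu_o = image_pts.mean(axis=0), organ_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd((image_pts - mu_i).T @ (organ_pts - mu_o))
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_o - R @ mu_i

# Hypothetical landmarks in mm: hilum, both poles and one surface point
ct_pts = np.array([[0., 0., 0.], [55., 10., 0.], [30., 95., 12.], [10., 40., 35.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # simulated pose
organ_pts = ct_pts @ R_true.T + [12., -4., 80.]

R, t = rigid_register(ct_pts, organ_pts)
# Mean residual after mapping = target registration error (TRE), ~0 here
print(np.linalg.norm(ct_pts @ R.T + t - organ_pts, axis=1).mean())
```

The mean residual on held-out landmarks is the target registration error (TRE), the figure of merit quoted by the registration studies discussed below.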

Image registration can be used in the pre-operative planning stage and in the intra-operative stage. The planning stage involves using the combined imaging overlay (of CT scans and MRIs) on the kidney to identify key structures of importance—the hilar vasculature, the spatial attributes of the kidney and their relationship with the renal collecting system. This helps build a roadmap of what the surgery will involve, and although it does not require accuracy to the millimetre (as the intra-operative phase does), it allows a good estimation of the planned surgical steps.

The intra-operative stage requires higher precision image registration and more importantly, dynamic correspondence with the moving organ. This is to allow tumour resection margins that can be accurate to the millimetre.

Hughes-Hallett et al. [18] classify the image registration used by AR developers into three main subtypes: manual registration, surface-based registration and 3D registration (Figure 2).

Figure 2.

Different methods of image registration [18].

Manual registration is the simplest method, where the surgeon uses their anatomical knowledge to align projected imaging with the organ. Examples of this method have included fusing 3D reconstructions with the live operative view, projecting intra-abdominal anatomy onto the skin, and colour-coded projection to highlight “safe zones” of resection margins. These offer quite a hands-on approach with relatively little planning compared to other registration methods—allowing a low barrier to entry, good anatomical orientation and relatively good awareness of disease-free parenchyma. However, manual registration does not allow accurate estimates of tumour margins and is mostly limited to the planning phase.

Surface-based registration involves using a tracked instrument to build a topographical map of the internal organ. This has been most extensively used in robotic partial nephrectomy—an example being the da Vinci robot. The laparoscopic instrument tip of the robot can touch a point on the live kidney, and this information, along with the joint positions of the robotic arms, can be used to calculate a position in space and build up the surface anatomy of the kidney. The surface anatomy can then be correlated with 3D reconstruction imaging of the patient and projected onto the surgeon’s operating view.
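Aligning the sparse cloud of touched points with the dense pre-operative surface model is a correspondence-free problem, for which the classic tool is the iterative closest point (ICP) algorithm: pair each touched point with its current nearest model point, solve the rigid fit, and repeat. A minimal sketch under that assumption (numpy plus scipy for nearest-neighbour search; the source does not state exactly which algorithm these robotic systems use):

```python
import numpy as np
from scipy.spatial import cKDTree  # fast nearest-neighbour lookup

def best_fit(src, dst):
    """Least-squares rigid transform mapping paired points src -> dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def icp(model_surface, touched_pts, iters=50, tol=1e-6):
    """Align instrument-tip touch points to a pre-operative surface model
    when no point-to-point correspondences are known in advance."""
    tree = cKDTree(model_surface)
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        moved = touched_pts @ R.T + t
        dist, idx = tree.query(moved)        # current closest model points
        R, t = best_fit(touched_pts, model_surface[idx])
        if abs(prev_err - dist.mean()) < tol:
            break                            # residual stopped improving
        prev_err = dist.mean()
    return R, t, dist.mean()
```

Like all local methods, ICP needs a reasonable initial pose; in practice the surgeon’s rough manual alignment can provide it.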

Surface-based registration is a form of internal tracking, relying on a good tracking instrument and accurate computer algorithms to provide the correct position of the organ being operated on. Errors in the estimated spatial position of the instrument tip can create unsafe image registration that would be inaccurate for the tumour margins of partial nephrectomies.

This registration method is more accurate than manual registration, as it provides automation and reduces the surgical workload. However, there are still areas for improvement, such as better tracking methods. External tracking, where the organ is tracked from the outside, is another approach. Examples include optical, magnetic and laser tracking, but each has inherent drawbacks: optical tracking requires a direct line of sight, and magnetic tracking requires the physical placement of a tracer in the organ.

A recent advancement in surface-based registration was developed by Edgcumbe et al., where a miniature laser projector called the ‘Pico Lantern’ is dropped into the patient’s abdomen. It can then be picked up by laparoscopic instruments to perform surface recognition on the abdominal organ and, using this, project the pre-operative image onto the surface of the organ. The projector is visually tracked in the laparoscopic field of view and has been tested on porcine kidneys and in detecting pulsatile motion in carotid arteries [19].

3D-CT stereoscopic image registration uses two cameras that focus on the same object in space; combining the perspectives from the two separate viewpoints provides a spatial reference for the object. This system has been used for kidneys by isolating a point on the surface of the organ, which can then be aligned to the corresponding point on a 3D reconstruction image. Using this as the centre of rotation, the surgeon can manipulate the operative view for a better understanding of the field of operation. This system has shown an average target registration error of less than 1 mm and a mean registration duration of 48.1 s—that is, more accurate resection margins, faster registration and reduced theatre time.
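The geometric core of stereoscopic registration is triangulation: each camera constrains the point to a ray, and intersecting the two rays fixes it in space. A minimal direct linear transform (DLT) sketch with a hypothetical calibrated rig (numpy only):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one 3-D point from a calibrated stereo pair.
    P1, P2: 3 x 4 projection matrices; x1, x2: matching (u, v) pixels."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)        # solution is the null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                # homogeneous -> Euclidean

# Hypothetical rig: shared intrinsics K, 5 mm baseline along x
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-5.], [0.], [0.]])])

X_true = np.array([10., -4., 90.])     # point on the kidney surface (mm)
def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
# -> approximately [10. -4. 90.]
```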

There is, however, a lack of organ tracking in the stereoscopic system, meaning that each movement of the camera needs a new image registration. Furthermore, stereoscopic cameras are usually very expensive and require large computer systems to handle them, which is another limitation to the adoption of this method.

6. Organ tracking

An ideal AR system would allow the organ to be tracked in real time as it is affected by respiration, tissue deformation and other complications such as bleeding. The renal system is particularly vulnerable to these dynamic changes compared to other organ systems, such as the brain or bones, where rigid image registration systems are the norm and do not require as much tracking. The registered image projected into the operative view should be locked to the organ, moving dynamically with it and with the laparoscopic camera.

Many types of tracking have been studied in AR surgery. These have mainly included optical tracking, where optical markers on the organ allow the position of the laparoscopic camera to be measured relative to the organ so that the overlay moves with it [20]. Infra-red tracking is another method, involving the use of infrared-emitting diode markers. The main issues with optical tracking are instruments obscuring the direct line of sight (required for tracking) and limited depth perception. Infra-red tracking has the issue of selecting the correct anatomical landmarks as markers—mismatches can occur due to deformation, compression and intra-operative haemorrhage. Many studies have failed to achieve accurate registration of dynamic intra-abdominal organs with infra-red tracking [21].
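With markers in view, keeping the overlay “locked” amounts to re-fitting a small transform every video frame from the markers’ tracked positions. The deliberately simplified sketch below corrects only the translational drift of the marker centroid (marker coordinates hypothetical); a real system would re-fit the full rigid pose each frame, for example with the rigid_register function sketched in Section 5:

```python
import numpy as np

# Marker positions as registered in the pre-operative model (mm, hypothetical)
markers_ref = np.array([[0., 0., 0.], [40., 5., 0.], [15., 35., 8.]])

def lock_overlay(markers_now, overlay_vertices):
    """One tracking frame: shift the overlay by the measured drift of the
    marker centroid so the projection follows the organ."""
    drift = markers_now.mean(axis=0) - markers_ref.mean(axis=0)
    return overlay_vertices + drift

# Simulated cranio-caudal respiratory drift of 2 mm
moved_markers = markers_ref + np.array([0., 2., 0.])
print(lock_overlay(moved_markers, markers_ref))   # overlay shifted by 2 mm
```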

Electromagnetic tracking is another tracking approach; it has been explored through the use of wireless trackers in an ex-vivo bovine partial nephrectomy model. This involved surrounding the tumour within the kidney with magnetic transponders, which relayed positional information back to the surgeon; in conjunction with optical camera tracking, a partial nephrectomy was performed [22]. There are limitations to this tracking method, as magnetic fields can suffer interference from laparoscopic instruments and operating tables. The method also requires placement of the magnetic transponders into the target organ, and it is currently hard to achieve an alignment error of less than 5 mm—a tumour margin error too great for accurate partial nephrectomies [23].

An alternative has been explored by Yip et al., where 3D stereoscopic image registration has been combined with tracking algorithms—producing errors of only 1.3–3.3 mm [24]. More recently, Edgcumbe et al. have developed a tracking device called the Dynamic Augmented Reality Tracker (DART). This is a 3D-printed stainless-steel tracker that can be anchored to a fixed position on the kidney relative to the tumour. With the help of an ultrasound transducer, it can then be used to track the location of surgical instruments relative to the tumour in real time. This system has been named ARUNS (Augmented Reality Ultrasound Navigation System) and was used in the robotic-assisted excision of a phantom kidney tumour [25].

Vávra et al., in their recent review, comment that it may be possible to track organ movement without physical markers in the future. Some of the methods explored include algorithms to predict the real-time movement of organs, physics-based deformation models, natural points of reference as tracking points and the use of red-green-blue cameras to perform image registration without markers. They also note that whilst the average marker-based registration takes 8 min, a recent marker-less system took only about 5 min [20].
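One common marker-less building block is tracking natural surface features (vessel texture, specular highlights) between consecutive frames with optical flow, with the surviving correspondences then driving re-registration. A minimal sketch assuming OpenCV is available (the cited systems do not necessarily use this exact algorithm):

```python
import cv2  # OpenCV, assumed available
import numpy as np

def track_surface_features(prev_gray, next_gray, pts):
    """Track natural features between two greyscale laparoscopic frames
    with pyramidal Lucas-Kanade optical flow. pts: N x 1 x 2 float32
    pixel coordinates, e.g. from cv2.goodFeaturesToTrack on frame one."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1     # keep only successfully tracked points
    return pts[ok], new_pts[ok]  # paired correspondences for re-registration
```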

7. Tissue deformation

The kidney is a dynamic organ, and renal surgery involves deformation of the anatomy. Every step in the operation changes the initially projected image registration [26], and for an ideal AR system there needs to be real-time feedback to trace this. One answer would be the development of computer algorithms to predict changes in anatomy at crucial steps such as clamping of the renal arteries and surgical dissection at tumour margins. Algorithms can also be developed to predict the effect of ongoing influences on the organ position, such as respiratory patterns and peritoneal insufflation.

There have been some tissue deformation models considering renal clamping, incision and external pressure loads on the kidney, such as intra-operative insufflation. Some of these have shown an improvement of 29% in the registration error when compared to a non-deformation model. However, in these models the kidney is assumed to have a linear elastic behaviour, and the models have been based on ex-vivo kidneys. Another method has been developed taking the motion of the diaphragm and its influence on kidney motion into consideration—this used preoperative CT scans during inspiration and expiration, and prediction errors of kidney position have been shown to be less than 2 mm [27].
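To give a flavour of such models, below is a toy linear-spring relaxation predicting how a small patch of meshed tissue displaces under a point load. It is only a stand-in for the linear elastic models described above (node coordinates, stiffness and load are hypothetical), not a validated finite-element solver:

```python
import numpy as np

def predict_deformation(nodes, springs, fixed, load_node, load,
                        k=1.0, iters=2000, step=0.01):
    """Explicit relaxation of a linear-spring mesh under a point load.
    nodes: N x 3 rest positions; springs: (i, j) index pairs;
    fixed: indices of anchored nodes. Returns the displacement field."""
    rest = {s: np.linalg.norm(nodes[s[0]] - nodes[s[1]]) for s in springs}
    x = nodes.astype(float).copy()
    for _ in range(iters):
        f = np.zeros_like(x)
        f[load_node] += load                   # external load, e.g. instrument pressure
        for (i, j) in springs:
            d = x[j] - x[i]
            length = np.linalg.norm(d)
            fij = k * (length - rest[(i, j)]) * d / length   # Hooke's law
            f[i] += fij
            f[j] -= fij
        f[fixed] = 0.0                         # anchored nodes stay put
        x += step * f                          # relaxation step
    return x - nodes

# Hypothetical 4-node surface patch: three corners anchored, one loaded
nodes = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [10., 10., 0.]])
springs = [(0, 1), (0, 2), (1, 3), (2, 3), (0, 3)]
print(predict_deformation(nodes, springs, fixed=[0, 1, 2],
                          load_node=3, load=np.array([0., 0., -2.])))
```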

Although there is a scarcity of studies in this area, one study showed that mathematical models were able to predict up to 52% of the operative deformation in porcine kidneys when compared to pre- and post-operative CT imaging [28].

Baumhauer et al. [29] have proposed a system to address tissue deformation by inserting custom-designed navigation aids into the kidney and using “inside-out tracking,” whereby the CT scan provides real-time spatial awareness by identifying the navigation aids and projecting imaging onto the laparoscopic view. Tested in a virtual environment, this system showed a visualisation error of 1.36 mm (adequately accurate). The system was replicated in the clinical setting by Teber et al. [30], where 10 patients had retroperitoneal laparoscopic partial nephrectomies. The results showed zero cases with a positive surgical margin, a zero complication rate and zero conversions to open surgery. This system does, however, require the placement of aids (such as 1.5 cm long needles) into the kidney and is dependent on at least 4 aids being present. This brings risks of damage to healthy parenchyma and of aids being lost intra-operatively.

An answer to tissue deformation could lie closer to technology used within the commercial sector. The advanced facial-recognition software used to animate Apple’s real-time Animoji [31] could be adapted to intra-operative kidneys: the software could delineate an optical real-time map of the kidney that changes with the active deformation.

8. Live imaging

Live imaging is an answer to capturing tissue deformation, as it provides real-time dynamic information on the kidney and removes the need to ‘estimate’ structural changes in the tissue. Ultrasound is one live imaging modality that has shown high sensitivity and specificity in identifying tumour margins [32, 33]. There have been several studies using live ultrasound (USS) to aid AR. Kang et al. [34] merged live laparoscopic ultrasound images with stereoscopic video and showed an image-to-video correlation accuracy of up to 2.76 mm; they claim this aids depth perception and better visualisation of internal structures. Cheung et al. demonstrated that a fused video-USS model for phantom partial nephrectomy allowed a 1.1 mm tumour resection margin (with 2D fusion) for endophytic tumours [35].
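At its simplest, such fusion warps each ultrasound frame into the laparoscopic view and alpha-blends it there. A minimal sketch assuming OpenCV and a known ultrasound-to-video homography (in a real system the homography comes from tracking the transducer; here it is simply taken as given):

```python
import cv2  # OpenCV, assumed available
import numpy as np

def fuse_ultrasound(frame, us_image, H, alpha=0.4):
    """Blend a live ultrasound image into a laparoscopic video frame.
    frame, us_image: 3-channel uint8 images; H: 3 x 3 homography
    mapping ultrasound pixels into frame pixels."""
    h, w = frame.shape[:2]
    warped = cv2.warpPerspective(us_image, H, (w, h))
    blended = cv2.addWeighted(frame, 1.0 - alpha, warped, alpha, 0)
    out = frame.copy()
    mask = warped.sum(axis=2) > 0      # blend only where the US fan has data
    out[mask] = blended[mask]
    return out
```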

Singla et al. [36] showed in their study that the amount of simulated healthy renal tissue excised was reduced from an average of 30.6 cm3 to 17.5 cm3 using intra-operative USS-based AR. This technique would be especially beneficial for critical cases such as endophytic renal tumours, where most of the tumour lies below the organ surface (endophytic tumours currently have a complication rate of nearly 50%).

These are, however, all preliminary studies based on phantom models, which do not represent the true nature of the operation in vivo. The majority of studies have involved manual registration with labour-intensive methods that are currently unrealistic for in-vivo use. Cheung et al. found that although there was a 29% reduction in planning time with the USS-fused model, tumour excision required longer operative times (up to 39% slower than the conventional system) [35].

Some projects have combined all three aspects of AR named above. An example is PARIS (projector-based AR intracorporeal system)—a method by Edgcumbe et al. [37] combining a tracked projector, a tracked marker and a laparoscopic ultrasound transducer. This has been used in 16 simulated laparoscopic partial nephrectomies, where cancerous tumours were projected onto the kidney surface and the projection moved with the kidney, while ultrasound allowed live imaging of the intra-operative environment. The study showed better identification of the underlying anatomy and tumour boundaries, with a significant reduction in healthy tissue excised.

9. Specific aspects of renal cancer surgery focused upon in AR development

There are a few aspects of renal cancer surgery (highlighted by Detmer et al. [27]), that AR development has specifically focused on. These include precise tumour resection and safe selective arterial clamping.

We have covered the issue of precise tumour resection to preserve maximum healthy tissue throughout the chapter. However, Detmer et al. specifically mention some studies that tackle this very area. Ukimura and Gill [38] describe using different colours to signify increasing distance from the tumour, overlaid on the AR field of view. Another method uses contouring of the organ around the tumour margin to highlight the tumour itself. Uncertainty of the tumour margin has also been encoded by using different colours to signify certain and uncertain areas of the margin [27].
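As a toy illustration of such colour coding, the snippet below maps each overlay point’s distance from the tumour boundary to a traffic-light colour (the thresholds are hypothetical, not taken from the cited studies):

```python
import numpy as np

def margin_colours(dist_mm, safe=5.0, warn=2.0):
    """RGB colour per overlay point: green beyond the safe margin,
    amber in between, red inside the warning band."""
    colours = np.empty(dist_mm.shape + (3,))
    colours[dist_mm >= safe] = (0.0, 1.0, 0.0)                        # safe
    colours[(dist_mm >= warn) & (dist_mm < safe)] = (1.0, 0.75, 0.0)  # caution
    colours[dist_mm < warn] = (1.0, 0.0, 0.0)                         # too close
    return colours

print(margin_colours(np.array([7.0, 3.5, 1.0])))
```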

Renal artery clamping is a crucial procedural step, as ischaemia needs to be limited to tumour-specific parenchyma. This is done by identifying and clamping only the tumour-specific arterial branches (usually tertiary or higher-order)—a concept described as “zero-ischaemia” [39]. There have been some studies detecting renal vessels underneath the organ surface and several studies aiming to identify arterial branches for selective clamping, with variable success. These have been used for pre-operative planning and some intra-operative guidance. Development has mostly been based on manual registration techniques, with displays over the laparoscopic view or on a separate screen [27].

10. Current limitations

This chapter has described the different aspects of AR in renal cancer and the areas that have seen progress. However, there are current limitations holding back the use of AR technology in the clinical setting. Vávra et al. and Detmer et al. have highlighted some of these (below):

  1. Pre-operative medical imaging: currently, all reconstructed images need to be prepared in advance by powerful processing systems. Not only is this expensive, it is time-consuming. With technological advancement, real-time high-resolution medical imaging and 3D reconstructions could become the norm. These would be displayed in real time intra-operatively and would markedly reduce or eliminate pre-operative preparation times.

  2. Inattentional blindness: not seeing an unexpected object in the field of view. This has especially been an issue with 3D image registrations, where the surgeon does not register an object because they were not expecting it to be part of that procedural step. With the development of AR headsets, more information will be displayed to the operating surgeon. This can be distracting, and there needs to be a conscious effort towards displaying only vital information, or switching between different sets of information. Further work needs to be done on reducing human-factor issues and making the human-computer interface more ergonomic.

  3. Depth perception: whilst current image registration involves imaging modalities such as MRI and CT, 2D or 3D imaging does not allow a full understanding of the intra-abdominal environment. Minimally invasive surgery has particularly deprived surgeons of the depth-perception aspect of the surgical experience. Vávra et al. [20] suggest depth-sensing cameras could aid AR in the future.

  4. Hardware capacities: current AR is limited by the hardware capacities, and thus the processing power, of the computer systems it is run on. Vizua is a company developing “cloudification” and “application roaming,” whereby AR applications and data can be remotely managed to obtain the highest computing power and access to large datasets. A platform such as this could be incorporated into AR systems and address issues of latency in image registration and access to good-quality imaging (live or pre-operative).

  5. Other issues mentioned include tissue deformation studies focusing on only one source of deformation, 3D imaging requiring better image registration and the issue of simulation sickness (whilst using current, heavy AR headsets).

Regardless of which aspect of AR is being explored, there is little quantitative data on in-vivo procedures. A recent review [27] found only 20 studies where AR had been used in clinical practice, and only 9 of these included 10 or more patients. There is a crucial need for clinical validation showing improved patient outcomes and safety from using AR in renal interventions.

11. Future ideas/solutions

As Hughes-Hallett et al. mention, there is a “one-size-fits-all” approach in most AR developments: single imaging and image registration modalities have been used in isolation in many systems. Every surgical case is different, however, and a combination of different modalities can provide a more accurate answer. Below are some examples of adjuncts that could aid current AR:

  1. NIRF—near-infrared fluorescence is a type of imaging in which indocyanine green (ICG) dye is injected into the body and used to illuminate the intravascular renal parenchyma. This allows the surgeon to detect blood vessels under the organ surface and detect tissue abnormalities. Although NIRF has not shown much promise in predicting malignancy in partial nephrectomies, it has shown a reduction in global renal ischaemia. NIRF has been used in robot-assisted surgery to achieve super-selective arterial clamping—avoiding main arterial clamping in 65% of patients in a recent study. Infusion of ICG dye pre- and post-arterial clamping ensured that there was selective ischaemia only in the tumour region and that adequate renal perfusion was achieved after clamp removal. This imaging could be used in conjunction with AR to further aid live visual feedback and organ tracking, and to achieve better post-operative renal function [40].

  2. Imperial College London’s iKnife could be used in conjunction with AR [41]. This ‘intelligent knife’ is a surgical scalpel that chemically tests the tissue it contacts, using Rapid Evaporative Ionisation Mass Spectrometry (REIMS) for real-time analysis of the aerosols created by diathermy of tissues. The iKnife has been used in gynaecological tissue to distinguish between normal, borderline and malignant tissue [42], and it could be used in partial nephrectomies to give real-time feedback for a precise tumour margin.

  3. AR headset—this is a device being engineered simultaneously in many major US hospitals. Dr. Varshney and Dr. Murthi are developing one such headset with the engineering team at the “Augmentarium” (University of Maryland). They hope to develop a system where a headset such as the Microsoft HoloLens can be worn by the surgeon, with real-time USS of the patient, vital signs and patient data overlaid on the field of view. This would drastically reduce the number of displays a surgeon usually has to track during an operation. Used in conjunction with dynamic image projection, the AR headset would be a good answer to the three abovementioned aspects of AR in partial nephrectomies [43].

    The AR headset hopes to eliminate any obstructions in the surgeon’s view compared with conventional methods. Furthermore, developments in voice and gesture recognition would enable hands-free control of the device—allowing the surgeon to interact with the AR whilst maintaining a sterile environment [20] (Figure 3).

  4. 3D printing—model replications of the patient’s kidney can be printed using pre-operative CT/MRIs, and these can be used to perform simulated operations prior to placing a knife on the patient’s skin. SIMPeds 3D Print at the Boston Children’s Hospital offers exactly this—rapid printing and prototyping for nearly any organ in the human body [44]. This has been used to replicate and operate on difficult paediatric brain tumours [45], and in facial reconstructions and orthopaedic surgeries, amongst many others [46]. It has allowed surgeons to make a realistic assessment of the individual’s organ, which can be felt, touched and cut at precise margins. 3D-printed surgical planning of partial nephrectomies has been explored by Zhang et al. [46], where face and content validity was obtained from 4 experienced laparoscopic urologists. A pilot study by Silberstein et al. [47] envisioned that 3D models could enhance the surgeon’s (and patient’s) understanding of the individual’s renal malignancy anatomy—this would be especially beneficial in difficult situations such as anatomical anomalies and precise segmental artery clamping [48].

Figure 3.

AR of the future. AR involving additional input from live USS imaging, organ trackers and other vital observations, with patient data all being fed into the AR headset—providing a hands-free platform.

Extrapolating further from this idea, 3D-printed kidneys could also allow remote surgical procedures. A related idea was put forward by Dr. Murthi at a recent VR and AR applications gathering at the Newseum, Washington, D.C. [8]. She foresaw VR and AR working together to support patient care: an initial AR assessment of the patient, including imaging and medical data, would allow the local clinician to assess the patient’s condition (in this case, anatomy), and a remote VR system would allow a distant clinician to see what the initial AR has shown and consequently advise and provide insight. An augmented nephrectomy system could benefit surgeons where, instead of advising, they could operate remotely on replicated 3D models (representing in-vivo kidneys). AR data collected locally from the patient could be remotely projected as VR onto a 3D model, and a surgeon could perform a partial nephrectomy that is translated to the real kidney with the help of a local robotic da Vinci machine. This model would allow constant feedback between the AR and VR systems, and remote operations could be an answer to the lack of surgical resources in healthcare-deprived areas around the world.

12. Conclusion

Renal cancer is the 7th most common cancer in the UK [49], and its surgical management can greatly benefit from the emerging symbiotic relationship between surgeon and AR. Since the first uses of AR for partial nephrectomy, there have been reported improvements in familiarisation with the patient’s anatomy and in the practicality of AR [38]. However, as Bernhardt et al. discuss in a review of laparoscopic AR, there is much advancement to be made in image registration, live tracking and depth perception. Detmer et al., in their review of AR and VR technology in renal cancer, also highlight the need to resolve human-factor issues and for large-scale clinical studies, which are currently sparse. Recent AR development has worked towards systems aiming to be on par with conventional navigational techniques, and whilst some technology is achieving this in isolation, barriers of validation, cost evaluation and practical application remain. In summary, there is much to be achieved before AR systems are precise and safe enough to be integrated into regular clinical practice.

© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
