Open access peer-reviewed chapter

An Interactive VR System for Anatomy Training

Written By

Djamel Aouam, Nadia Zenati-Henda, Samir Benbelkacem and Chafiaa Hamitouche

Submitted: 28 October 2019 Reviewed: 25 January 2020 Published: 26 February 2020

DOI: 10.5772/intechopen.91358

From the Edited Volume

Mixed Reality and Three-Dimensional Computer Graphics

Edited by Branislav Sobota and Dragan Cvetković


Abstract

In recent decades, virtual reality (VR) has become a promising solution for enhancing clinical medicine (functional rehabilitation, training, etc.), especially given the rapid evolution of technologies for both visualization (e.g., HoloLens and VR headsets) and 3D gestural interaction (e.g., ray casting and free-hand manipulation). The 3D visualization of human anatomy can be a serious asset for medical students. This technology can provide a clear and realistic representation of the internal organs of the human body without resorting to surgery. Course material supported by 3D organ visualization can be a useful tool for students, especially in their early years of study, to improve their perception of the body's internal composition. The proposed system is composed of two modules: a 3D human anatomy visualization module and an interaction module for organ manipulation. Finally, the system is tested and evaluated with several subjects.

Keywords

  • anatomy training
  • virtual reality
  • 3D interaction
  • usability
  • 3D visualization

1. Introduction

Many training applications take place in virtual environments. In real settings, trainees can be exposed to critical situations that directly affect their safety and even their lives; this is especially common in combat strategy training and surgical operations. Experts in the medical field therefore strongly recommend the use of simulators and virtual reality technology to train future surgeons, as this solution makes training more efficient. Realistic virtual environments have been built to immerse users so that they can rehearse situations they will face in reality. Medicine today undergoes constant technical and technological progress, and this evolution makes training increasingly complex, both for the trainer to deliver and for the students to follow.

In recent decades, VR technologies have offered great opportunities in medical training for students and health practitioners. Moreover, the popularization of new technologies encourages medical learners to become familiar with VR.

In this paper, we address the issue of VR-based medical training. In particular, we focus on designing a human body anatomy teaching system using VR. The purpose is to facilitate the understanding of complex anatomy courses. The proposed system allows trainers as well as learners to interact with human organs in order to obtain further explanation about a selected organ in 3D form. Our system provides a VR educational tool with several functionalities: (1) displaying the skeleton, the organs, and the blood network; (2) explaining a body part selected by the learner using an interactive tool; and (3) decomposing a selected organ to reveal its internal details.

Through our VR teaching system, interactive medical training case studies can be implemented, as detailed in the remainder of this paper. In Section 2, we present the role of VR in medical teaching. Section 3 reviews related work in the medical learning domain. In Section 4, a conceptual model of our system is described. Section 5 presents the application usage scenario and the sequence of events. Finally, Section 6 describes the different stages of asset modeling and their integration into the virtual environment, along with some results of the designed environment.


2. Contributions of virtual reality for medical training

Classical training methods do not always meet current educational objectives, given the large amount of information to be transmitted to learners. Virtual reality (VR) can be an alternative to the data management problems encountered with these approaches.

VR offers new solutions to simulation, control, and communication problems, and can be seen as an improvement over classical simulation techniques. Using virtual reality for training has many advantages over training in real environments, such as:

  • Safety: working in immersion, we are in contact with virtual objects that cannot hurt us while we manipulate them (e.g., operating room instruments such as a scalpel) [1].

  • Tolerance of errors: mistakes can be made without compromising safety, because errors are formative. Making an error on a virtual body carries no risk, whereas the same error on a human body could endanger a life. In the same spirit, VR allows us to build scenarios with realistic sensations that put the learner in lifelike situations, including rare conditions that would be impossible to reproduce in reality.

  • Availability: VR training has no time or attendance constraints, since it does not require a specific schedule or physical presence in a room; a course can be taken anywhere, at any time.

  • Cost and space: a VR setup occupies minimal space compared to a physical model or a classic skeleton, and the same equipment can be reused for other modules or even other courses, which makes VR training less expensive than conventional teaching methods.

Immersion in a virtual world also enriches learning by integrating sensory aspects that matter in many contexts.


3. Related work

A lot of work has been done on VR-assisted teaching, since the field has attracted researchers aiming to ease the trainer's task and help future doctors assimilate their courses. Some authors have simply surveyed both teaching approaches and drawn conclusions on the advantages and disadvantages of each, while others have proposed applications in different medical specialties.

Khwanngern et al. [2] studied craniofacial disorders, a case rare enough that student doctors may never witness a real bone-cutting procedure and must learn it only in theory, which is not beneficial for them. To remedy this, the authors created an application that simulates a human skull in an operating room; using motion controllers, the user can cut, drill, and manipulate the skull. Their work focuses on an important action during jaw surgery, the cutting of the mandibular bone (lower jaw): the cut requires great precision, and a small mistake can damage the facial nerve, which can lead to paralysis of the facial muscles. The proposed idea is interesting, but it does not fully reflect reality: it simulates only the skeleton of a skull, whereas in practice the surgeon faces a whole human body, which would preferably be simulated before proceeding to the cut. Moreover, the controllers remain visible in the headset view, which may hinder the cut through a loss of precision.

Alfalah et al. [3] conducted a comparative study between traditional medical education methods and VR technology. VR addresses the shortcomings of the traditional method by offering additional means of teaching, improving the quality of acquired skills, meeting the requirements of modern medical training, and helping students and teachers overcome difficulties in conveying the message. In their experiment, students answered a questionnaire on both teaching methods. The VR system offered possibilities absent from the conventional method: displaying the organ (a human heart) in 3D and manipulating it, dissecting the 3D cardiac model in layers to clarify the anatomical relations of its different parts, exploring the information on each component of the model, and exploring the features provided in the system.

Huang et al. [4] conducted a study on student acceptance of new technologies such as VR. They set up a VR learning application and distributed a questionnaire to students who used it in order to collect their opinions; the experiment showed a high acceptability rate. Such applications allow students to take their courses without being tied to time or face-to-face constraints: the course can be followed from any place, at any time.

De Faria et al. [5] observed that the methods used in training future doctors, based on lectures or laboratory dissections, are genuinely complex. They formed three working groups with different tasks and compared the participants' perception across several attempts using the two learning methods. The statistics reveal satisfactory results for VR-based teaching: students assimilate the course better with VR assistance.

Izard et al. [6] produced high-quality 3D anatomical models of the human body and designed several VR applications based on DICOM images acquired with an Asteion CT scanner (Toshiba Medical Systems) at the Complejo Hospitalario Universitario de Salamanca, following the skull study protocol: one acquisition in anteroposterior projection and one in lateral position. The applications allow users to move within different parts of the human body using a stereoscopic system and to interact in order to decide which information to display; the authors also studied the potential teaching contribution of VR in education.

Mathur [7] studied a truly important aspect of any project: its cost. Most existing virtual reality systems are expensive, especially specialized ones. To remedy this, he proposed a low-cost system built around an Oculus headset and Razer Hydra controllers.


4. Conceptual model

In our case, we designed an application providing interactive educational tools that let the user decide which explanations to display in the virtual environment; the whole experience takes place in 3D immersion. The environment contains an avatar of a human body that the user can manipulate using the Touch controllers paired with the HMD. A configurable menu lets the user rotate the human body and adjust transparency levels in order to isolate the skeleton alone or just the organs. By targeting an organ or a bone, the user obtains its definition, which appears on a side panel; using the laser and the controller buttons, the text can be scrolled so it can be read in full.

Our work consists in the development of orgVR, a teaching platform dedicated to medical students. An advantage of the application is that it can be used not only by medical students but also by anyone curious to deepen their knowledge of human anatomy.

The desktop application orgVR is an educational application on human anatomy that allows a user equipped with an Oculus Rift headset and Touch controllers to interact with the human body. The possible interactions are numerous (X-ray, rotation, change of opacity, display of documentation, etc.) and are implemented through C# and XML scripts and through shaders that we developed for the organs we modeled.

The proposed environment is immersive and interactive: the user, equipped with an Oculus Rift headset and Touch controllers, is immersed in a virtual laboratory and can use the three interaction principles implemented (navigation, manipulation, and selection).

  1. Navigation: the ability to move and translate within the virtual environment while immersed. The proposed environment supports navigation through translations and through the motion sensors that map our position in the real world to the virtual world.

  2. Selection: the selection principle is applied when the user wants the definition of an organ or another element. Using the laser of the Oculus Touch, it suffices to target the organ or body part; its definition then appears in a pop-up window on the side, which can be scrolled to read the entire text. The same mechanism is used to select the different menu items.

  3. Manipulation: the application has a menu carrying several functions for manipulating the human body, in particular transparency and rotation controls (a minimal sketch of such a menu controller is given after this list).
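To make the manipulation principle concrete, here is a minimal sketch of a menu controller in Unity C#, assuming the rotation function is bound to a UI slider and the layer visibility to menu toggles; the class and field names are illustrative, not the project's actual identifiers.

```csharp
using UnityEngine;

// Hypothetical menu controller: rotates the body and isolates its layers.
public class BodyMenuController : MonoBehaviour
{
    [SerializeField] private Transform bodyRoot;   // root of the human body model
    [SerializeField] private GameObject skeleton;  // skeleton layer
    [SerializeField] private GameObject organs;    // organs layer

    // Called by the rotation slider on the side menu (value in degrees).
    public void OnRotationSlider(float degrees)
    {
        bodyRoot.localRotation = Quaternion.Euler(0f, degrees, 0f);
    }

    // Called by a menu toggle to isolate the skeleton (hides the organs layer).
    public void ShowSkeletonOnly(bool on)
    {
        organs.SetActive(!on);
    }
}
```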

4.1 Conceptual diagram

The conceptual diagram represents the main modules of our application, which is made up of three main modules (rendering module, interaction module, and tracking module), and describes its global functioning. Starting with the database, an essential part where the different 3D assets and scenes are stored: when the application is launched, the game engine loads the assets and environments from the database, and while the application runs, this data is used depending on the triggered events (Figure 1).

Figure 1.

Conceptual diagram.

As the diagram shows, a user equipped with an HMD (Oculus Rift) and Touch controllers is immersed in the virtual environment and performs movements. This is where the tracking module plays an important role: it retrieves the geometric position of each movement of the user in order to ensure navigation within the environment. The tracking module is composed of two parts, since there are two main objects to track, the HMD and the Touch controllers. The aim is to recover their positions in real time and send this information to the rendering module. The rendering part projects the 3D scenes according to the information collected by the tracking module and interprets each performed movement (or event) to ensure effective immersion and navigation, as well as interaction with the virtual environment.

4.1.1 Tracking module

The tracking module is responsible for retrieving the geometric positions of the hands (Touch controllers) and the head (HMD) and translating them from the real world to the virtual world.
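As a hedged illustration of this step, assuming the Oculus Integration package for Unity: the head pose can be read from the camera rig's eye anchor, and the controller poses from OVRInput.

```csharp
using UnityEngine;

// Illustrative tracking sketch; field names are assumptions, not project code.
public class PoseTracker : MonoBehaviour
{
    [SerializeField] private Transform centerEyeAnchor; // HMD transform from the OVR camera rig

    void Update()
    {
        // Head pose, driven by the HMD each frame.
        Vector3 headPos = centerEyeAnchor.position;
        Quaternion headRot = centerEyeAnchor.rotation;

        // Touch controller poses in tracking space.
        Vector3 leftHandPos = OVRInput.GetLocalControllerPosition(OVRInput.Controller.LTouch);
        Vector3 rightHandPos = OVRInput.GetLocalControllerPosition(OVRInput.Controller.RTouch);
        Quaternion rightHandRot = OVRInput.GetLocalControllerRotation(OVRInput.Controller.RTouch);

        // These poses would be forwarded to the rendering and interaction
        // modules (e.g., to drive the hand avatars and the pointing laser).
    }
}
```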

4.1.2 Interaction module

The interaction module retrieves the geometric data and interprets it as interactions following an event from a peripheral; in our case, targeting an organ with the Touch controller and clicking the trigger produces an interaction.
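A minimal sketch of that dispatch, again assuming Oculus Integration's OVRInput; the "OnSelected" message name is a hypothetical hook, not the application's actual handler.

```csharp
using UnityEngine;

// Illustrative interaction dispatch: trigger press + ray hit = interaction.
public class InteractionDispatcher : MonoBehaviour
{
    [SerializeField] private Transform rightController; // ray origin

    void Update()
    {
        // Index-trigger press on the right Touch controller.
        if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger, OVRInput.Controller.RTouch))
        {
            if (Physics.Raycast(rightController.position, rightController.forward,
                                out RaycastHit hit, 10f))
            {
                // Notify the targeted organ, if it listens for this message.
                hit.collider.SendMessage("OnSelected", SendMessageOptions.DontRequireReceiver);
            }
        }
    }
}
```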

4.1.3 Rendering module

The rendering module is essential in virtual reality, since the visual context is paramount in immersive environments. This module synchronizes movements and scenes. In addition, it loads the necessary 3D models and other data, such as the organs and their definitions shown in pop-ups, from the database.

The rendering module selects the appropriate 3D models and projects them through the lenses of the head-mounted display (HMD).
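Since the chapter states that organ definitions are stored as XML in the database, the following Unity C# sketch shows one plausible way to load them; the resource name "organs" and the tag layout are assumptions, not the actual database format.

```csharp
using System.Collections.Generic;
using System.Xml;
using UnityEngine;

// Hypothetical loader for the pop-up descriptions of organs.
public class OrganDatabase : MonoBehaviour
{
    private readonly Dictionary<string, string> descriptions = new Dictionary<string, string>();

    void Awake()
    {
        // Assumed file: Resources/organs.xml with <organ name="heart">text</organ> entries.
        TextAsset xml = Resources.Load<TextAsset>("organs");
        var doc = new XmlDocument();
        doc.LoadXml(xml.text);
        foreach (XmlNode node in doc.SelectNodes("//organ"))
        {
            descriptions[node.Attributes["name"].Value] = node.InnerText;
        }
    }

    // Returns the pop-up text for a selected organ, or null if unknown.
    public string GetDescription(string organName)
    {
        descriptions.TryGetValue(organName, out string text);
        return text;
    }
}
```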


5. The orgVR usage scenario

When launching the application, the first step is to put on the Oculus Rift headset, take the Oculus Touch controllers, and set up the two motion sensors that track our position. Once everything is in place, the user is immersed in the first scene, where he can choose the gender of the body to manipulate or study (male or female).

Once the choice is made, the user is teleported to another scene, where he is immersed with the chosen body. Through the side menu he can manipulate this body and interact with it, for example rotating the body on itself or adjusting the opacity level of its different layers. When the laser of the Touch controller points at an organ, the organ in question lights up; to obtain a description of its role, it suffices to press the controller button, and a pop-up window appears whose text can be scrolled with the laser so that the entire message can be read.

For a more detailed study of an organ, the user simply aims at it with the laser and presses the trigger; he is then teleported to another room (scene) where he is isolated with the selected organ. On the same principle as the previous scene, a side menu allows him to rotate the organ, cut it by moving the cutting plane, grab it with the virtual hands provided by the Touch controllers, and adjust its opacity (transparency). He can also reload the scene to start over, or return to the first scene containing the whole human body.


6. Methodology

To reach our objective, we need two main configurations: the hardware configuration, which consists of material adequate for developing and using our application, and the software part, which allows us to model our environment and the assets used.

To develop our environment we went through two main stages.

6.1 Preparation and integration models

This is the longest and most complex part of the work; it consists of developing the assets. We were not satisfied with the assets available online, so we created our own with a more realistic and detailed touch. To do so, we used several modeling tools.

6.1.1 Modeling (sculpting)

This is the most delicate step of the implementation, because the objective was to create realistic models that improve both the realism of the scene and the user's immersion. For this we used the Blender software. The models were sculpted in high-poly (a large number of polygons) (Figures 2 and 3).

Figure 2.

Heart modeling.

Figure 3.

Highpoly model.

6.1.2 Retopology

It is strongly advised not to use high-poly models in 3D scenes, because a very high polygon count negatively affects performance. This is why 3D modelers use a technique called retopology, which takes an existing 3D model and rebuilds a better topology for it. It consists of snapping new points, edges, and faces onto the existing surfaces, so that the extrusions and transformations of the new topology follow the faces of the source object exactly. After applying this technique, the model has far fewer polygons, which improves performance without reducing the detail of the model. We used Instant Meshes for this step (Figures 4 and 5).

Figure 4.

Retopology stage.

Figure 5.

Lowpoly model.

6.1.3 UV mapping and UV unwrapping

A UV map is the flat representation of the surface of a 3D model, used to wrap textures around it easily. The process of creating a UV map is called UV unwrapping.

Once the polygonal mesh has been created, the next step is to unwrap it into a UV map in order to bring the mesh to life and give it a more realistic aspect. There is no such thing as a 3D texture in this workflow, since textures are always based on a 2D image. This is where UV mapping comes in: it is the process of converting the 3D mesh into 2D information so that a 2D texture can be wrapped around it (Figure 6).

Figure 6.

UV mapping stage.

6.1.4 Texturing and painting

In this last step, we apply textures to our 3D models with Substance Painter. A texture is an image representing a surface, which makes it possible to simulate the appearance of that surface when pasted onto a 3D object. Textures are widely used in video games because they give an aspect close to reality. After this step, the models are ready to be exported to Unity (Figure 7).

Figure 7.

Texturing stage.

6.2 Results and test

After designing our assets, the next task was to integrate them into our virtual environment. The biggest challenge after asset integration was implementing the interaction methods that offer the user better visualization and perception. Within our virtual environment we implemented several methods of interaction with objects, among them shader-based techniques and raycasting, to enrich the functionality of the environment and enable richer, more detailed interaction; the shader functionality in particular allows us to perform cuts, grabbing, and so on.

6.2.1 Pointing laser and interaction

The biggest difficulty we encountered was implementing pointing at the organs while also managing interaction with the UI menus. This pushed us to take a close interest in Unity's raycasting system: in a 3D virtual reality scene the user cannot use a mouse, so we implemented a laser that plays the role of a pointer and enables interactions with the virtual environment. The following figure shows the result obtained by implementing the two raycasting scripts that make our laser operational (Figure 8).

Figure 8.

Interaction laser.
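A hedged sketch of such a pointer, assuming a LineRenderer component on the controller: the beam is drawn every frame and shortened to the first surface (organ collider or menu panel) it hits. The "OnPointed" message is a hypothetical hook for the highlight behavior; world-space UI would typically go through a separate graphics raycast.

```csharp
using UnityEngine;

// Illustrative laser pointer; attach to a Touch controller transform.
[RequireComponent(typeof(LineRenderer))]
public class LaserPointer : MonoBehaviour
{
    [SerializeField] private float maxLength = 10f;
    private LineRenderer line;

    void Awake()
    {
        line = GetComponent<LineRenderer>();
        line.positionCount = 2;
    }

    void Update()
    {
        Vector3 origin = transform.position;
        Vector3 end = origin + transform.forward * maxLength;

        // Stop the beam on the first collider it meets and notify the target.
        if (Physics.Raycast(origin, transform.forward, out RaycastHit hit, maxLength))
        {
            end = hit.point;
            hit.collider.SendMessage("OnPointed", SendMessageOptions.DontRequireReceiver);
        }

        line.SetPosition(0, origin);
        line.SetPosition(1, end);
    }
}
```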

6.2.2 Shader managing the object cut

The cut was the most important interaction we wanted to achieve, and also the most difficult. Here we explain how we implemented it.

First of all, we had to define how the user would carry out the cut from a methodical point of view (with what he would apply the cut). We judged it more realistic and interactive for the user to cut by passing a glass plane through the body; this plane can be manipulated either by grabbing it directly with the hand or by means of sliders on the UI menu. The cutting effect is implemented by a "OnePlaneBSP" shader applied to the organ: this shader compares the positions of the organ's vertices with that of the cutting plane in order to identify the vertices located above the plane and prevent them from being drawn on the screen (Figure 9).

Figure 9.

The heart cut by the plane.
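The shader itself is not listed in the chapter; the following Unity C# sketch shows how the cutting plane's pose could be fed to it each frame. The property names "_PlanePosition" and "_PlaneNormal" are assumptions about the "OnePlaneBSP" shader's interface, not confirmed identifiers.

```csharp
using UnityEngine;

// Illustrative hookup between the movable glass plane and the cut shader.
public class CutPlaneController : MonoBehaviour
{
    [SerializeField] private Transform cutPlane;     // the glass plane the user moves
    [SerializeField] private Renderer organRenderer; // organ using the cut shader

    void Update()
    {
        // Feed the plane's pose to the shader each frame; the shader would
        // then discard fragments on the positive side of the plane, e.g.
        // clip(-dot(worldPos - _PlanePosition, _PlaneNormal)) in HLSL.
        organRenderer.material.SetVector("_PlanePosition", cutPlane.position);
        organRenderer.material.SetVector("_PlaneNormal", cutPlane.up);
    }
}
```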

6.2.3 Shader managing opacity and X-ray effect

The change of opacity and the X-ray effect are two purely visual effects. We implemented the first with the "TransparentDiffuse ZWrite" shader, a simple transparent shader, together with the "Opacity Controller" script that applies the effect to the organ. The second effect is achieved with a simple "XrayEffect" shader, a transparency shader that applies a white color to all the vertices to give an X-ray look (Figure 10).

Figure 10.

Transparency effects.
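A minimal sketch of what the "Opacity Controller" script could look like, assuming the material uses a transparent shader with a standard color property; the body below is an illustration, not the project's actual code.

```csharp
using UnityEngine;

// Illustrative opacity controller; wire SetOpacity to a UI slider.
public class OpacityController : MonoBehaviour
{
    [SerializeField] private Renderer organRenderer;
    [SerializeField] private bool xrayMode = false; // white tint for the X-ray look

    // Called by the opacity slider on the menu (value in [0, 1]).
    public void SetOpacity(float alpha)
    {
        // Assumes the shader exposes a color with an alpha channel (_Color).
        Color c = xrayMode ? Color.white : organRenderer.material.color;
        c.a = alpha;
        organRenderer.material.color = c;
    }
}
```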

6.2.4 Object grabbing implementation

In this last part, we explain how we implemented object grabbing. This interaction was very easy to implement using the Oculus Integration kit in Unity: we added hands to the user avatar, and a simple collision sphere around the objects made it possible to grab them intuitively, as in the real environment (Figures 11 and 12).

Figure 11.

Heart grabbing.

Figure 12.

A student using the application.
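As a rough illustration of that setup, here is a sketch assuming the Oculus Integration kit's OVRGrabbable/OVRGrabber pair: the organ only needs a collider, a rigidbody, and the OVRGrabbable component for the hand avatars to pick it up. The component order and sizes below are illustrative.

```csharp
using UnityEngine;

// Illustrative runtime setup making an organ grabbable by OVRGrabber hands.
public class GrabbableOrganSetup : MonoBehaviour
{
    void Awake()
    {
        // Sphere collider approximating the organ, used by the hand's grabber.
        SphereCollider grabZone = gameObject.AddComponent<SphereCollider>();
        grabZone.radius = 0.15f; // illustrative size in meters

        // Kinematic rigidbody so the organ follows the hand without physics drift.
        Rigidbody body = gameObject.AddComponent<Rigidbody>();
        body.isKinematic = true;

        // OVRGrabbable (from the Oculus Integration package) lets OVRGrabber
        // hands pick the object up while the grip trigger is held.
        gameObject.AddComponent<OVRGrabbable>();
    }
}
```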


7. Conclusion

Through this work, we managed to create a medical educational training application in virtual reality that runs under Windows and allows users to visualize human anatomy and interact with organs in numerous ways (section, X-ray, grabbing, rotation, etc.).

To design an effective virtual environment that is usable and useful for training, we proposed a design methodology that takes into account both educational objectives and technological capabilities.

This methodology allowed us to specify a virtual environment (VE) well suited to the educational training of young medical students in anatomy. This VE does not try to reproduce reality as faithfully as possible: realism is not the most effective option in every training situation, and it is sometimes worth departing from it to show hidden realities or to explain abstract concepts. The many capabilities of virtual reality allow us to offer different levels of realism and abstraction.

In the proposed environment, the Oculus Rift virtual reality headset was used as hardware; it allows total immersion in a virtual environment and supports navigation and interaction within it. This choice of equipment was based on the availability of the headset in our laboratory, not because it is the ideal choice.

Indeed, the Oculus Rift is not the best solution: after using it, we noted shortcomings such as latency and a feeling of dizziness after a certain duration of use. This matters because the proposed application is educational, so the student or learner is expected to use it for long periods; the image quality is also limited.

To remedy this kind of problem, the HTC Vive Pro offers a better rendering (display) resolution, reduces the latency problems observed with the Oculus Rift, and largely resolves the vertigo that users of other virtual reality headsets suffer, especially on first use or during long sessions.

However, both types of headsets suffer from a common problem: limited freedom of movement and a risk of accidents, since they are wired; moreover, both require a gaming PC with substantial graphics power, which is really expensive, so many users cannot afford such hardware. To make these applications accessible to the general public, Oculus launched the Oculus Quest, an autonomous wireless headset that does not require a PC. It offers functionality similar to the previous headsets at a lower price and with greater freedom of movement than the Oculus Rift and HTC Vive. The Quest solves some of the shortcomings encountered with earlier headsets, but it has its own despite these advantages: its major problems are a rendering quality lower than that of the Rift and the HTC Vive, and a storage space that cannot be extended.

The last type of headset, the VR headset for smartphones, is the most popular among the general public. This model is ideal in terms of production cost and deployment, since it is within the reach of a large number of users, but it is really limited in terms of interaction: it only allows visualization and offers no means of manipulating 3D objects. In addition, the smartphone must be equipped with adequate sensors (gyroscope, accelerometer, etc.) to guarantee navigation in the 3D environment.

The developed environment currently runs on a local host. As perspectives, several improvements are envisaged for the system:

  • Extend the application to support collaborative work, giving several users simultaneous access to the same shared environment by implementing networking over a local or remote network.

  • Implement the application on several interoperable virtual platforms.

References

  1. Schild J, Misztal S, Roth B, et al. Applying multi-user virtual reality to collaborative medical training. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Reutlingen, Germany: IEEE; 2018. pp. 775-776. Available from: https://ieeexplore.ieee.org/abstract/document/8446160
  2. Khwanngern K, Tiangtae N, Natwichai J, et al. Jaw surgery simulation in virtual reality for medical training. In: International Conference on Network-Based Information Systems. Cham: Springer; 2019. pp. 475-483. Available from: https://link.springer.com/chapter/10.1007/978-3-030-29029-0_45
  3. Alfalah SFM et al. A comparative study between a virtual reality heart anatomy system and traditional medical teaching modalities. Virtual Reality. 2019;23(3):229-234. DOI: 10.1007/s10055-018-0359-y. Available from: https://link.springer.com/article/10.1007/s10055-018-0359-y
  4. Huang H-M, Liaw S-S, Lai C-M. Exploring learner acceptance of the use of virtual reality in medical education: A case study of desktop and projection-based display systems. Interactive Learning Environments. 2016;24(1):3-19. DOI: 10.1080/10494820.2013.817436. Available from: https://www.tandfonline.com/doi/abs/10.1080/10494820.2013.817436
  5. De Faria JWV, Teixeira MJ, de Moura Sousa Júnior L, et al. Virtual and stereoscopic anatomy: When virtual reality meets medical education. Journal of Neurosurgery. 2016;125(5):1105-1111. DOI: 10.3171/2015.8.JNS141563. Available from: https://thejns.org/view/journals/j-neurosurg/125/5/article-p1105.xml
  6. Izard SG, Méndez JAJ, Palomera PR. Virtual reality educational tool for human anatomy. Journal of Medical Systems. 2017;41(5):76. DOI: 10.1007/s10916-017-0723-6. Available from: https://link.springer.com/article/10.1007/s10916-017-0723-6
  7. Mathur AS. Low cost virtual reality for medical training. In: 2015 IEEE Virtual Reality (VR). Arles, France: IEEE; 2015. pp. 345-346. Available from: https://ieeexplore.ieee.org/abstract/document/7223437
