Open access peer-reviewed chapter

Telerobotic 3D Articulated Arm-Assisted Surgery Tools with Augmented Reality for Surgery Training

Written By

Ahmad Hoirul Basori and Hani Moaiteq Abdullah AlJahdali

Submitted: 10 September 2018 Reviewed: 11 September 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.81447

From the Edited Volume

Telehealth

Edited by Thomas F. Heston


Abstract

In this research, the human body is marked and tracked using a depth camera. The arm motion of the trainer is sent over the network and mapped onto a 3D robotic arm on the destination server, which moves according to the trainer. Meanwhile, the trainee follows the movement and learns how to perform particular tasks demonstrated by the trainer. The telerobotic-assisted surgery tools give guidance on how to make an incision or perform a simple surgery in several steps through 3D medical images displayed on the human body. The user performs the training, selects some body parts, and then analyzes them. The system provides specific tasks to be completed during training and measures how many tasks the user can accomplish within the surgical time. The telerobotic-assisted virtual surgery tool using augmented reality (AR) is expected to be widely used in medical education as an alternative, low-cost solution.

Keywords

  • telerobotics
  • augmented reality
  • 3D medical images
  • robotic arm
  • virtual surgery

1. Introduction

The control of robots to follow certain paths while avoiding collisions with surrounding objects has been studied extensively over the past years. The robot is usually manipulated through a particular device, and to adjust the robot's movement, the user must reprogram the microcontroller attached to the robot to perform a given task.

The robotic arm is one of the most widely used robotic components across several fields, including medical rehabilitation to assist disabled people. Implementing a real robotic arm is quite costly, so a 3D simulated articulated robotic arm offers a new way of simulation. In addition, gesture tracking offers a natural mode of interaction by providing real-time synchronization between the human arm and the 3D arm.

Kinect was originally invented as a gaming device; however, it can also be used for other purposes, such as supporting the recovery of stroke patients. The doctor can monitor the patient's improvement and trace their movement. By investigating a person's skeletal connections, therapists can identify the body areas that require intensive training. The feedback a person receives during or after a rehabilitation period can still be refined to target the precise problem areas of the patient's movement. The Kinect also has potential as a home rehabilitation tool: rehabilitation at home gives the patient the flexibility to perform the regular repetitions of therapy.

Furthermore, to stimulate the neural recovery in the brain that drives motor improvement, therapy should be repeated over a period of time. The idea behind the interface is that new position commands for the robot are derived from the depth map captured by the Kinect. The interface offers methods to start and stop the Kinect, translate points from the Kinect's coordinate frame to the robot's frame, and retrieve the latest position calculated by the Kinect. The control algorithm finds the next position of the robot so that the user gets exactly the desired functionality. The Kinect takes a few seconds to start and calibrate itself before it gives accurate depth measurements, so it is recommended that the Kinect interface be initialized at the same time as the robot system is turned on.
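A minimal Python sketch of such an interface is given below. The class and method names, and the fixed 4x4 homogeneous transform between the Kinect frame and the robot frame, are illustrative assumptions rather than the chapter's actual implementation.

```python
import numpy as np

class KinectRobotInterface:
    """Hypothetical sketch of the interface described above: it starts/stops
    the sensor, translates Kinect-frame points into robot-frame points, and
    returns the latest computed robot position."""

    def __init__(self, transform):
        # 4x4 homogeneous transform from the Kinect frame to the robot base frame
        self.transform = np.asarray(transform, dtype=float)
        self.latest_position = None
        self.running = False

    def start(self):
        # A real system would also wait a few seconds here while the Kinect
        # calibrates its depth measurements (see the text above).
        self.running = True

    def stop(self):
        self.running = False

    def translate_point(self, kinect_point):
        """Map a 3D point from the Kinect frame to the robot frame."""
        p = np.append(np.asarray(kinect_point, dtype=float), 1.0)
        return (self.transform @ p)[:3]

    def update_from_depth(self, hand_point_kinect):
        """Derive a new position command from a hand point tracked in depth."""
        if self.running:
            self.latest_position = self.translate_point(hand_point_kinect)
        return self.latest_position
```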

Most of the devices used in hospitals are very costly, and their procedures can be complicated: they require strict authorization for trained personnel to operate them, and the queues can be quite long. Therefore, some researchers focus on the communication between patient and therapist, providing remote access through the network for home-based therapy [1]. In addition, training systems for the elderly have also been studied to provide a more attractive design, since training can be conducted personally at home [2].

Augmented reality is also widely used to help pilgrims find paths/routes by combining AR technology with GPS trackers [3, 4, 5, 6]. In addition, telecommunication in the medical field, such as telemedicine, has a strong connection with brain-computer interfaces and haptic technology, which increase the realism of telemedicine. Virtual characters that assist the user imitate human behavior as well as the interactions among themselves, including collision response in the virtual environment [7, 8, 9, 10, 11, 12, 13, 14, 15]. Advanced collision handling also improves the realism of the interaction between the user and the virtual agent [15]. Researchers have also studied hand and finger tracking to help doctors view and manipulate 3D models of the human body [16, 17], while other work improves augmented or virtual reality technology by producing realistic facial expressions in conventional teaching systems [18].

2. Research method

2.1. Research methodology

The process is initiated by placing the Kinect camera in front of the user and adjusting the distance to acquire the best depth image. The camera then captures the human body, which is converted into a depth image stream. The second phase applies the random decision forest algorithm: a set of candidate thresholds for segmentation is generated, the threshold and attribute that carry the most information are selected, and the process is repeated (a simplified sketch of this split selection follows). The final phase maps the skeleton joints onto the robotic arm and starts the remotely controlled simulation.
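For illustration, a minimal Python sketch of this split selection is given below. It assumes a single depth-comparison feature value per pixel and picks the candidate threshold with the highest information gain; this is a didactic reduction, not the trained forest used in the project.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a set of body-part labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(features, labels, candidate_thresholds):
    """Pick the threshold with the highest information gain, i.e. the split
    that keeps the most label information, as described in the second phase."""
    base = entropy(labels)
    best_gain, best_threshold = -1.0, None
    for t in candidate_thresholds:
        left, right = labels[features < t], labels[features >= t]
        if len(left) == 0 or len(right) == 0:
            continue  # degenerate split, skip
        weighted = (len(left) * entropy(left)
                    + len(right) * entropy(right)) / len(labels)
        gain = base - weighted
        if gain > best_gain:
            best_gain, best_threshold = gain, t
    return best_threshold, best_gain
```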

The methodology of the project consists of three main processes as shown in Figure 1.

Figure 1.

Project methodology.

Figure 1 shows that the telerobotic arm is initiated by placing the depth camera in front of the user to capture the human body and produce a stream of depth images. The algorithm used here is a random decision forest, which determines the arm position of the human body in real time. The process is repeated until the desired accuracy is reached. Two vectors build the joint direction, and the three axes that define the joint frame are the local or joint axis, the normal axis, and the binormal axis, as shown in Figure 2.

Figure 2.

Joint frame diagram.
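As an illustration of Figure 2, the sketch below builds the three axes of a joint frame from two tracked skeleton points. The fixed world-up reference vector is an assumption; the actual construction in the Kinect SDK may differ.

```python
import numpy as np

def joint_frame(parent_pos, joint_pos, reference=(0.0, 1.0, 0.0)):
    """Build the local (joint) axis, normal axis, and binormal axis of a
    joint frame from two skeleton points, as in Figure 2.
    Assumes the bone direction is not parallel to the reference vector."""
    local = np.asarray(joint_pos, dtype=float) - np.asarray(parent_pos, dtype=float)
    local /= np.linalg.norm(local)       # joint axis along the bone
    normal = np.cross(local, reference)
    normal /= np.linalg.norm(normal)     # perpendicular to the bone
    binormal = np.cross(local, normal)   # completes the orthogonal frame
    return local, normal, binormal
```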

2.2. Result of software requirement

The proposed application provides features for the user to interact with the system, classified into functional and nonfunctional requirements as listed below:

2.2.1. Functional requirement

2.2.1.1. Gesture tracking

Gesture recognition can be divided into four main phases: detecting the hand movement, classifying the gesture from a collection of images, extracting the characteristic features, and distinguishing the gesture. The hand motion is detected through skin color and movement analysis, and the speed of the hand motion is computed to localize the gesture across repeated images. Thus, by collecting and analyzing the depth images, the system can recognize meaningful gestures that carry a particular command.
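A minimal sketch of the speed computation is shown below, assuming tracked 3D hand positions and timestamps from successive frames; thresholding the resulting speed is one simple way to localize a gesture.

```python
import numpy as np

def hand_speed(positions, timestamps):
    """Per-frame hand speed (m/s) from successive tracked 3D positions,
    used to localize a gesture across repeated images."""
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    displacements = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return displacements / np.diff(timestamps)

# Example: three frames at 30 FPS; a fast swipe shows up as a high speed.
speeds = hand_speed([[0.00, 0, 2.0], [0.05, 0, 2.0], [0.15, 0, 2.0]],
                    [0.0, 1 / 30, 2 / 30])
```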

2.2.1.2. Real-time feedback

Real-time feedback is provided via the depth image stream sent out by the Kinect and analyzed with the random forest algorithm. The algorithm produces skeleton joints that are mapped onto the 3D robotic arm remotely, as sketched below.
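A minimal sketch of this mapping is given below: the angle at a joint (for example, the elbow) is computed from three skeleton positions produced by the tracker and can then be applied to the matching joint of the 3D robotic arm. The function is illustrative, not the project's actual code.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by segments b->a and b->c, e.g.
    the elbow angle from shoulder, elbow, and wrist positions."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```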

2.2.2. Nonfunctional requirement

2.2.2.1. Finger tracking

Finger tracking is not covered in this research because of its complexity: it requires a very short tracking distance, whereas gesture tracking works at a longer distance.

2.2.2.2. Full body synchronization

The robotic arm does not need full-body tracking and synchronization because the system focuses on the human arm.

2.3. Use case diagram

Figure 3 shows the use case diagram of the system. There are three main use cases: Kinect sensor, 3D articulated arm, and 3D arm control.

Figure 3.

Use case diagram.

2.3.1. Actor description

The actor of the system represents the patient or any other user who interacts with the arm robot system.

2.3.2. Use case description

The Kinect sensor is a motion-sensing controller that can sense the user's arm in motion. It needs calibration before use and detects depth using an IR camera. The 3D articulated arm communicates with Microsoft Robotics Studio, while the 3D arm control is performed with gesture tracking.

2.4. Analysis phase

This section describes the sequence of project phases such as sequence diagram, activity diagram, and architecture of the system.

2.4.1. Sequence diagram

Figure 4 shows the sequence of the system: it starts with simulation and calibration of the Kinect sensor, continues by reading the depth data of the user's arm, and then Microsoft Robotics Studio renders the 3D arm, synchronizes it with the interpreted commands, and performs the 3D arm control.

Figure 4.

Sequence diagram.

2.4.2. Activity diagram

Figures 5 and 6 show two activity diagrams. Diagram (A) starts with the simulation, adjusts the distance, tracks and captures the human depth image, and then classifies the gesture.

Figure 5.

Activity diagram (A).

Figure 6.

Activity diagram (B).

The activity diagram in Figure 6 starts with gesture classification and joint classification, then synchronizes the human joints with the 3D arm joints, providing real-time interaction with the 3D articulated arm.

2.5. Architecture design phase

Figure 7 depicts the architecture design: one component is the Kinect sensor, and the other is the Microsoft Robotics Simulator, which connects to the Kinect driver and the Kinect software development kit (SDK). Microsoft Robotics Studio consists of a visual programming language and a 3D environment model.

Figure 7.

Architecture design phase diagram.

3. Result and discussion

The first test shows how to manage the tracker for hand control; it starts by initiating the Kinect camera to capture the human joints and rendering a 3D hand that follows the mouse movement.

3.1. Hand tracking control

Hand tracking control is used for motion detection of the human hand, as shown in Figure 8. The system tracks the hand and differentiates between the user's right and left hands. Figure 9 shows the result of rotating the robotic arm using Kinect gestures.

Figure 8.

Kinect depth camera tracking human hand.

Figure 9.

Robotic arm connected to the Kinect, in a steady position with domino blocks.

Figures 10 and 11 also demonstrate the movement of the robotic arm's joints using Kinect gestures. The robotic arm moves according to the joint data sent through the network, following the trainer's movement remotely (a minimal sender sketch follows Figure 11).

Figure 10.

User controlling the robotic arm to hit the blocks of domino.

Figure 11.

Robotic arm in wireframe mode.
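A minimal sketch of the joint-data sender mentioned above is given below, assuming a plain UDP link and a JSON message format; the host, port, and message layout are illustrative assumptions, not the chapter's actual protocol.

```python
import json
import socket

def send_joint_angles(angles, host="192.168.1.20", port=9000):
    """Send the trainer's joint angles to the machine driving the 3D arm."""
    message = json.dumps({"joint_angles_deg": list(angles)}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (host, port))

# Example: stream the five rotatable joint angles captured from the trainer.
send_joint_angles([30.0, 45.0, 10.0, 80.0, 15.0])
```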

3.2. Performance testing

The rendering performance is satisfactory, with a consistently high frame rate: the frames per second (FPS) peaks at 60, while the lowest value is 54, as shown in Figure 12.

Figure 12.

Statistical measurement of the frame rate (FPS).

Figure 13 and Table 1 also portray the statistical measurements of the rendering process of the 3D articulated robotic arm during testing.

Figure 13.

Statistical measurement of frames over time.

Frames   Time (ms)   Min   Max   Avg
982      16,411      55    61    59.838
1935     33,119      5     61    58.426
99       1638        59    61    60.44

Table 1.

Frames rendered, elapsed time, and minimum, maximum, and average FPS.
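For illustration, the sketch below derives the same kind of statistics as Table 1 from per-frame render times in milliseconds; it assumes instrumentation that records each frame's duration.

```python
import numpy as np

def fps_statistics(frame_times_ms):
    """Frames rendered, total time, and min/max/average FPS, as in Table 1."""
    times = np.asarray(frame_times_ms, dtype=float)
    fps = 1000.0 / times
    return {
        "frames": times.size,
        "time_ms": float(times.sum()),
        "min_fps": float(fps.min()),
        "max_fps": float(fps.max()),
        "avg_fps": float(fps.mean()),
    }

# Example: frames rendered at roughly 16.7 ms each correspond to ~60 FPS.
print(fps_statistics([16.7, 16.4, 18.5, 16.6]))
```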

4. Conclusions

Telerobotics is an essential research topic that is widely applied in medical rehabilitation and even in industrial manufacturing. This chapter presents a telerobotic arm with six joints that represents the human arm; it can be rotated and act like a human arm. The idea of the project is to provide training or exercise for poststroke patients, who move their hands by controlling the 3D articulated arm. The human hand is tracked by a depth camera, and the behavior of the real hand and the 3D arm is synchronized in real time. In this project, five joints of the 3D robotic arm can be rotated according to their angles, and the 3D arm is simulated with a domino effect when the blocks collide with each other. The user controls the arm by performing gestures in front of the Kinect camera, and the joints of the human arm and the 3D arm are synchronized in real time. The control process through the Kinect, imitating mouse cursor movement, also ran smoothly during testing. These findings are believed to bring potential benefits to rehabilitation for certain patients, such as poststroke rehabilitation. The result is very convincing: interaction is conducted naturally, and just by waving the hands or rotating the joints of the arm, the 3D arm rotates as well. As future work, the project can be improved by conducting clinical tests with real patients in medical rehabilitation, and the system can also serve as a simulator for industrial manufacturing processes.

Acknowledgments

This work was supported by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, Saudi Arabia. The authors therefore gratefully acknowledge the DSR's technical and financial support.

References

1. Anton D, Kurillo G, Goñi A, Illarramendi A, Bajcsy R. Real-time communication for Kinect-based telerehabilitation. Future Generation Computer Systems. 2017;75:72-81
2. Ofli F, Kurillo G, Obdržálek Š, Bajcsy R, Jimison HB, Pavel M. Design and evaluation of an interactive exercise coaching system for older adults: Lessons learned. IEEE Journal of Biomedical and Health Informatics. 2016;20(1):201-212
3. Afif FN, Basori AH, Saari N. Vision based tracking technology for augmented reality: A survey. International Journal of Interactive Digital Media. 2013;1(1):46-49
4. Afif FN, Basori AH, Almazyad AS, et al. Fast markerless tracking for augmented reality in planar environment. Procedia - Social and Behavioral Sciences. 2013;97(6):648-655
5. Basori AH, Afif FN. Orientation control for indoor virtual landmarks based on hybrid-based markerless augmented reality. 3D Research. 2015;6:41. DOI: 10.1007/s13319-015-0072-5
6. Albaqami NN, Allehaibi KH, Basori AH. Augmenting pilgrim experience and safety with geo-location way finding and mobile augmented reality. International Journal of Computer Science and Network Security. 2018;18(2):23-32
7. Basori AH et al. The feasibility of human haptic emotion as a feature to enhance interactivity and immersiveness on virtual reality game. In: Proceedings of the 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry. Singapore: ACM; 2008. pp. 1-2
8. Basori AH, Qasim AZ. Extreme expression of sweating in 3D virtual human. Computers in Human Behavior. 2014;35:307-314. DOI: 10.1016/j.chb.2014.03.013
9. Basori AH, Ahmed MA, Prabuwono AS, Yunianta A, Bramantoro A, Syamsuddin I, Alehaibi KH. Profound correlation of human and NAO-robot interaction through facial expression controlled by EEG sensor. International Journal of Advanced and Applied Science. 2018;5(8). DOI: 10.21833/ijaas.2018.08.013
10. Basori AH. Emotion walking for humanoid avatars using brain signals. International Journal of Advanced Robotic Systems. 2013;10:1-11. DOI: 10.5772/54764. http://journals.sagepub.com/doi/abs/10.5772/54764
11. Ahmed MAK, Basori AH. The influence of beta signal toward emotion classification for facial expression control through EEG sensors. Procedia - Social and Behavioral Sciences. 2013. DOI: 10.1016/j.sbspro.2013.10.294
12. Basori AH, Bade A, Sunar MS, Daman D, Saari N, Salam MDH. An integration framework of haptic feedback to improve facial expression. International Journal of Innovative Computing, Information and Control (IJICIC). 2012;8(11):7829-7851
13. Basori AH, Afif FN, Almazyad AS, Abujabal HA, Rehman A, Alkawaz MH. Fast markerless tracking for augmented reality in planar environment. 3D Research. 2015;6(4):1-11. DOI: 10.1007/s13319-015-0072-5. Article 72
14. Basori AH, Daman D, Bade A, et al. The feasibility of human haptic emotion as a feature to enhance interactivity and immersiveness on virtual reality game. In: Proceedings of the 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry (VRCAI '08). New York, NY, USA: ACM; 2008. Article 37, 2 pages. DOI: 10.1145/1477862.1477910
15. Abdullasim N, Basori AH, Salam MD, Bade A. Velocity perception: Collision handling technique for agent avoidance behavior. Telkomnika. 2013;11(4):2264-2270
16. Yusoff MA, Basori AH, Mohamed F. Interactive hand and arm gesture control for 2D medical image and 3D volumetric medical visualization. Procedia - Social and Behavioral Sciences. 2013;97:723-729
17. Suroso MR, Basori AH, Mohamed F. Finger-based gestural interaction for exploration of 3D heart visualization. Procedia - Social and Behavioral Sciences. 2013;97:684-690
18. Basori AH, Tenriawaru A, Mansur ABF. Intelligent avatar on E-learning using facial expression and haptic. TELKOMNIKA (Telecommunication Computing Electronics and Control). 2011;9(1):115-124
