
Auto-tracking camera for dry-box laparoscopic training

Written By

Masakazu Sato, Minako Koizumi, Kei Inaba, Yu Takahashi, Natsuki Nagashima, Hiroshi Ki, Nao Itaoka, Chiharu Ueshima, Maki Nakata and Yoko Hasumi

Submitted: 22 February 2018 Reviewed: 22 March 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.76716

From the Edited Volume

New Horizons in Laparoscopic Surgery

Edited by Murat Ferhat Ferhatoglu


Abstract

While laparoscopic surgery is less invasive than open surgery and is now common in various medical fields, it often requires more time for the operator to achieve mastery. Dry box training is one of the most important methods for developing laparoscopic skill. However, the camera is usually fixed at a particular point, unlike practical surgery, during which the operative field is constantly adjusted by an assistant. We therefore introduced a camera for dry box training that can be moved by surgeons as desired by using computer vision. By detecting an ArUco marker, the camera mounted on the servomotor successfully tracked the forceps automatically. This system could easily be modified to be operable by a foot switch or voice, and collaborations between surgeons and medical engineers are expected.

Keywords

  • computer simulation
  • endoscopic surgery
  • robotic surgical procedures
  • computer vision
  • augmented reality

1. Introduction

Laparoscopic surgery is a surgical approach whose share of procedures is increasing relative to conventional open surgery [1]. In laparoscopic surgery, surgeons make a few small incisions (generally 5–10 mm) to access the abdomen. Through these incisions, surgeons insert specialized instruments and a camera and perform the operation inside the abdomen. While laparoscopic surgery is less invasive than conventional open surgery and is now common in various medical fields, it often requires more time for the operator to achieve mastery [2]. This is because the operative field is viewed in two dimensions through a camera, and surgeons who have mostly performed conventional open surgery need considerable time to become accustomed to the camera view [2]. Thus, surgeons must practice extensively before performing laparoscopic surgery on patients.

Dry box training is one of the most important methods for developing laparoscopic skill [3, 4]. It consists of real instruments (such as forceps, scissors, and needles) inserted into a box fitted with a camera [5]. In general, organ models made from rubber or silicone are placed inside the box, and surgeons practice handling forceps, suturing, and other basic laparoscopic skills. The advantage of dry box training is its low cost. On the other hand, the camera is usually fixed at a particular point, unlike practical surgery, during which the operative field is constantly adjusted by an assistant [5, 6]. Surgeons therefore need to manually change the camera angle to focus on where they are working during training. This requirement leads to inefficiency, because surgeons must pause the procedure every time they move the camera, and the interruptions may discourage some surgeons from practicing with the dry box at all. Some researchers have elegantly investigated camera control by computer vision, but such systems are expensive to introduce and are not well suited to dry box training [7].

Therefore, we introduced an affordable camera for dry box training that can be moved by surgeons as desired by using computer vision. In recent years, deep learning has been widely used for visual tracking in the field of computer vision [8]. At the same time, some operations are now performed entirely by a single surgeon, an approach called solo surgery [9]. Systems that free surgeons from manipulating auxiliary instruments and allow them to concentrate on surgical maneuvers will therefore be increasingly in demand, and artificial intelligence and deep learning have great potential to realize them.

This chapter is organized as follows. In Section 2, we introduce the methodology of our autotracking camera. Section 3 presents the results obtained with the instrument. Section 4 covers the possibilities of computer vision in the field of surgery.


2. Methods and materials

2.1. ArUco marker

We used the OpenCV library and its ArUco module for marker detection (https://opencv.org) [10, 11]. Briefly, OpenCV (Open Source Computer Vision Library) was designed for computational efficiency with a strong focus on real-time applications. The documentation states that “An ArUco marker is a synthetic square marker composed by a wide black border and an inner binary matrix which determines its identifier (id). The black border facilitates its fast detection in the image and the binary codification allows its identification and the application of error detection and correction techniques. The marker size determines the size of the internal matrix. For instance, a marker size of 4 × 4 is composed by 16 bits” (https://docs.opencv.org/3.1.0/d5/dae/tutorial_aruco_detection.html). We referred to the code in the official tutorial. We printed an ArUco marker and attached it to the needle holder. We chose the ArUco marker for its robustness: as we recently demonstrated, the marker can be detected at a size as small as the forceps [12]. This suggests that applications developed for dry box training may be extended to clinical practice.
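As an illustration, a minimal sketch of marker generation and detection with the cv2.aruco module follows. It uses the pre-4.7 OpenCV Python API that the cited tutorial documents (newer OpenCV versions renamed these functions); the dictionary choice, marker id, and file names are our assumptions for illustration, not taken from the chapter.

    import cv2
    import cv2.aruco as aruco

    # Dictionary of 4 x 4-bit markers (50 ids), matching the 4 x 4 example above.
    dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)

    # Generate and save a printable marker (id 0, 200 x 200 pixels).
    marker = aruco.drawMarker(dictionary, 0, 200)
    cv2.imwrite("marker_id0.png", marker)

    # Detect markers in a captured frame and overlay the detections.
    frame = cv2.imread("frame.png")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, rejected = aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        aruco.drawDetectedMarkers(frame, corners, ids)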

2.2. Servomotor

A servomotor is required to move the camera. In this study, we used the Pan-Tilt HAT (Pimoroni, Sheffield, United Kingdom) as the servomotor, as shown in Figure 1 (https://shop.pimoroni.com/products/pan-tilt-hat). We rotated the servomotor toward the position at which the ArUco marker was detected. The rationale is briefly as follows: a Raspberry Pi (a small computer) obtained consecutive images through the camera and, whenever the ArUco marker was recognized, calculated the displacement from the center of the current frame to the center of the marker. The displacement was expressed as a vector with an x component (horizontal pixels) and a y component (vertical pixels). The corresponding angles were then calculated, and the servomotor was rotated accordingly (the x component driving a pan rotation and the y component a tilt rotation) [13].
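One simple way to realize this mapping is a proportional conversion from pixel displacement to angle, scaled by the camera’s field of view. The sketch below illustrates the idea; the field-of-view and resolution values are assumptions for illustration, not measurements from the chapter.

    # Sketch: converting the pixel displacement to pan/tilt angle increments.
    FOV_X, FOV_Y = 54.0, 41.0      # assumed horizontal/vertical field of view (degrees)
    FRAME_W, FRAME_H = 320, 240    # assumed capture resolution (pixels)

    def offset_to_angles(dx, dy):
        """Map a pixel offset from the frame center to (pan, tilt) increments."""
        d_pan = (dx / FRAME_W) * FOV_X    # horizontal offset drives the pan rotation
        d_tilt = (dy / FRAME_H) * FOV_Y   # vertical offset drives the tilt rotation
        return d_pan, d_tilt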

Figure 1.

Raspberry Pi and Pan-Tilt HAT. A small computer (Raspberry Pi) and a servomotor (Pan-Tilt HAT) were connected, and an ordinary web camera was attached on top.

We referred to the FaceTracker code in the official repository (https://github.com/pimoroni/PanTiltFacetracker). Our modified code is shown in SI Text 1.
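SI Text 1 itself is not reproduced in this excerpt; as a rough sketch of what such a tracking loop looks like, assuming the pantilthat Python library and the pre-4.7 cv2.aruco API, one might write the following. The gain, capture resolution, and sign conventions are our assumptions and would need tuning for a particular mounting; here a single proportional gain stands in for the field-of-view conversion sketched earlier.

    import cv2
    import cv2.aruco as aruco
    import pantilthat

    FRAME_W, FRAME_H = 320, 240   # assumed capture resolution (pixels)
    GAIN = 0.05                   # assumed proportional gain (degrees per pixel)

    dictionary = aruco.Dictionary_get(aruco.DICT_4X4_50)
    cam = cv2.VideoCapture(0)
    cam.set(cv2.CAP_PROP_FRAME_WIDTH, FRAME_W)
    cam.set(cv2.CAP_PROP_FRAME_HEIGHT, FRAME_H)

    def clamp(angle):
        # The Pan-Tilt HAT servos accept angles from -90 to +90 degrees.
        return max(-90, min(90, angle))

    pan, tilt = 0, 0

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = aruco.detectMarkers(gray, dictionary)
        if ids is not None:
            corner = corners[0][0]                         # 4 corners of the first marker
            dx = float(corner[:, 0].mean()) - FRAME_W / 2  # horizontal offset (pixels)
            dy = float(corner[:, 1].mean()) - FRAME_H / 2  # vertical offset (pixels)
            pan = clamp(pan - dx * GAIN)                   # signs depend on mounting
            tilt = clamp(tilt + dy * GAIN)
            pantilthat.pan(pan)
            pantilthat.tilt(tilt)
        cv2.imshow("tracker", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):              # quit with 'q'
            break

    cam.release()
    cv2.destroyAllWindows()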

2.3. Development environment

The development platform used in this study was a Raspberry Pi 3 Model B (https://www.raspberrypi.org) [12]. The Raspberry Pi and Pan-Tilt HAT are shown in Figure 1.


3. Results

3.1. ArUco marker detection and autotracking

This investigation consisted of two parts, as described in Section 2: the first objective was to detect the ArUco marker, and the second was to track the marker by using the servomotor. As shown in Figure 2 and Video 1, we succeeded in both detecting and tracking the marker.

Figure 2.

Autotracking model. The detected ArUco marker was automatically tracked by using the servomotor.

Detection of the ArUco marker occasionally failed, but only when the marker was too far from the camera. We considered this unproblematic as long as surgeons used the forceps for training, since the marker would then typically remain within the detectable range.

As shown in Video 1, the speed and smoothness of the rotation of the Pan-Tilt HAT were satisfactory. Interestingly, the Pan-Tilt HAT seemed to perform pan rotations better than tilt rotations. This may be a limitation of an inexpensive device; nevertheless, we consider our system suitable for dry box training because of its low cost and the simplicity of building it.


4. Discussion

Herein, we introduced a camera that can be moved by surgeons as they wish. The idea of cooperative human-machine coexistence has attracted considerable research attention [14], and in that context we are now interested in investigating cooperative systems for surgery.

While dry box training is important for laparoscopy training, it differs from practical surgery in that the camera is usually fixed and does not move [3, 4]. Surgeons therefore need to manually change the camera angle every time they wish to focus on where they are suturing, which produces unwanted inefficiency in training. We therefore introduced a camera that automatically tracks the forceps to which an ArUco marker has been attached.

We propose two future directions. The first concerns the method of marker detection. We detected an ArUco marker in this study; however, a marker may not always be necessary if the energy device or forceps can be detected directly by computer vision [6]. Indeed, recent studies have investigated autotracking cameras that track the device itself [15, 16, 17, 18]. We consider these studies very important and possibly essential for future robotic surgery. The advantage of the ArUco marker is its robustness: as we previously described, we succeeded in detecting the marker at a size as small as the forceps [12]. We therefore imagine that the marker could be embedded in, or depicted directly on, the forceps. Another advantage of the ArUco marker is that many markers can be used at once; as many as 1024 distinct markers can be recognized simultaneously. This means that a marker could be assigned to every instrument during laparoscopic surgery, which might in the future support the handling and coordination of instruments by robots or artificial intelligence.

The second direction concerns where and how to use the camera system. Although we demonstrated an autotracking model in the present study, we have also investigated a tracking model that reacts only when the surgeon presses a certain key (e.g., the “s” key; SI Text 2), as sketched below. This means that the system could be modified to be operable by a foot switch or voice [19, 20]. Indeed, robotic arms that can be operated and moved by a foot switch or voice are now commercially available [19, 21, 22, 23]. We believe that more sophisticated products can be developed by combining these ideas and systems.
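SI Text 2 is likewise not reproduced here; a minimal sketch of such key-gated tracking, building on the loop in Section 2.2, might look as follows. Whether the actual SI Text 2 toggles tracking on and off, as below, or moves once per key press is our assumption.

    # Key-gated variant (sketch): the servos move only while tracking has been
    # toggled on with the 's' key; 'q' quits. Reuses cam, dictionary, clamp,
    # pan, tilt, GAIN, FRAME_W, and FRAME_H from the earlier loop.
    tracking = False
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = aruco.detectMarkers(gray, dictionary)
        cv2.imshow("tracker", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("s"):
            tracking = not tracking       # toggle auto-tracking on/off
        elif key == ord("q"):
            break
        if tracking and ids is not None:
            corner = corners[0][0]        # four corners of the first marker
            dx = float(corner[:, 0].mean()) - FRAME_W / 2
            dy = float(corner[:, 1].mean()) - FRAME_H / 2
            pan = clamp(pan - dx * GAIN)
            tilt = clamp(tilt + dy * GAIN)
            pantilthat.pan(pan)
            pantilthat.tilt(tilt)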


5. Conclusions

We introduced a camera that can be moved by surgeons as they wish. We conclude that this approach could readily be applied to a practical camera and robotic arm. In addition, collaborations between surgeons and medical engineers are expected.


Acknowledgments

This chapter has not been published elsewhere and is not being considered for publication in other journals.


Conflicts of interest

The authors declare that there are no conflicts of interest.


Supporting information

SI Text 1. ArUco marker tracker.

The code consists of two parts: the first detects the ArUco marker, and the second tracks the marker by using the servomotor.

SI Text 2. Modified ArUco marker tracker.

The model shown in SI Text 1 was modified so that the camera reacted only when the surgeon pressed a certain key (e.g., the “s” key).

Video 1. ArUco marker detection and autotracking (https://www.youtube.com/watch?v=PIZod4SALsY).

The detected ArUco marker was automatically tracked by using the servomotor.

References

  1. Moawad G, Liu E, Song C, Fu AZ. Movement to outpatient hysterectomy for benign indications in the United States, 2008-2014. PLoS One. 2017;12(11):e0188812
  2. Jaffe TA, Hasday SJ, Knol M, Pradarelli J, Quamme SRP, Greenberg CC, et al. Safety considerations in learning new procedures: A survey of surgeons. The Journal of Surgical Research. 2017;218:361-366
  3. Chandrasekera SK, Donohue JF, Orley D, Barber NJ, Shah N, Bishai PM, et al. Basic laparoscopic surgical training: Examination of a low-cost alternative. European Urology. 2006;50(6):1285-1290; discussion 1290-1291
  4. Hinata N, Iwamoto H, Morizane S, Hikita K, Yao A, Muraoka K, et al. Dry box training with three-dimensional vision for the assistant surgeon in robot-assisted urological surgery. International Journal of Urology. 2013;20(10):1037-1041
  5. Torricelli FC, Barbosa JA, Marchini GS. Impact of laparoscopic surgery training laboratory on surgeon's performance. World Journal of Gastrointestinal Surgery. 2016;8(11):735-743
  6. Huri E, Ezer M, Chan E. The novel laparoscopic training 3D model in urology with surgical anatomic remarks: Fresh-frozen cadaveric tissue. Turkish Journal of Urology. 2016;42(4):224-229
  7. Wijsman PJM, Broeders I, Brenkman HJ, Szold A, Forgione A, Schreuder HWR, et al. First experience with the AutoLap system: An image-based robotic camera steering device. Surgical Endoscopy. 2017
  8. Feng X, Mei W, Hu D. A review of visual tracking with deep learning. In: 2nd International Conference on Artificial Intelligence and Industrial Engineering (AIIE 2016); 2016
  9. Yang YS, Kim SH, Jin CH, Oh KY, Hur MH, Kim SY, Yim HS. Solo surgeon single-port laparoscopic surgery with a homemade laparoscope-anchored instrument system in benign gynecologic diseases. Journal of Minimally Invasive Gynecology. 2014;21(4):695-701
  10. Shao P, Ding H, Wang J, Liu P, Ling Q, Chen J, et al. Designing a wearable navigation system for image-guided cancer resection surgery. Annals of Biomedical Engineering. 2014;42(11):2228-2237
  11. Zemirline A, Agnus V, Soler L, Mathoulin CL, Obdeijn M, Liverneaux PA. Augmented reality-based navigation system for wrist arthroscopy: Feasibility. Journal of Wrist Surgery. 2013;2(4):294-298
  12. Sato M, Koizumi M, Hino T, Takahashi Y, Nagashima N, Itaoka N, et al. Exploration of assistive technology for uniform laparoscopic surgery. Asian Journal of Endoscopic Surgery. 2018
  13. Wei CC, Song YC, Chang CC, Lin CB. Design of a solar tracking system using the brightest region in the sky image sensor. Sensors. 2016;16(12)
  14. Hamid OH, Smith NL, Barzanji A. Automation, per se, is not job elimination: How artificial intelligence forwards cooperative human-machine coexistence. In: 2017 15th International Conference on Industrial Informatics (INDIN); Emden/Leer, Germany; 2017. pp. 899-904
  15. Navarro AA, Hernansanz A, Villarraga EA, Giralt X, Aranda J. Enhancing perception in minimally invasive robotic surgery through self-calibration of surgical instruments. In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 2007. pp. 457-460
  16. Toti G, Garbey M, Sherman V, Bass BL, Dunkin BJ. A smart trocar for automatic tool recognition in laparoscopic surgery. Surgical Innovation. 2015;22(1):77-82
  17. Voros S, Long JA, Cinquin P. Automatic localization of laparoscopic instruments for the visual servoing of an endoscopic camera holder. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI); 2006;9(Pt 1):535-542
  18. Zhao Z. Real-time 3D visual tracking of laparoscopic instruments for robotized endoscope holder. Bio-Medical Materials and Engineering. 2014;24(6):2665-2672
  19. Mewes A, Hensen B, Wacker F, Hansen C. Touchless interaction with software in interventional radiology and surgery: A systematic literature review. International Journal of Computer Assisted Radiology and Surgery. 2017;12(2):291-305
  20. Petroni G, Niccolini M, Caccavaro S, Quaglia C, Menciassi A, Schostek S, et al. A novel robotic system for single-port laparoscopic surgery: Preliminary experience. Surgical Endoscopy. 2013;27(6):1932-1937
  21. Dikici S, Aldemir Dikici B, Eser H, Gezgin E, Baser O, Sahin S, et al. Development of a 2-DOF uterine manipulator with LED illumination system as a new transvaginal uterus amputation device for gynecological surgeries. Minimally Invasive Therapy & Allied Technologies. 2017:1-9
  22. Gutierrez MM, Pedroso JD, Volker KW, Howard DL, McCarus SD. The McCarus-Volker ForniSee®: A novel trans-illuminating colpotomy device and uterine manipulator for use in conventional and robotic-assisted laparoscopic hysterectomy. Surgical Technology International. 2017;30:191-196
  23. Maheshwari M, Ind T. Concurrent use of a robotic uterine manipulator and a robotic laparoscope holder to achieve assistant-less solo laparoscopy: The double ViKY. Journal of Robotic Surgery. 2015;9(3):211-213
