Open access peer-reviewed chapter

A Proposal of Haptic Technology to be Used in Medical Simulation

Written By

Pablo Sánchez-Sánchez, José Daniel Castro-Díaz, Alejandro Gutiérrez-Giles and Javier Pliego-Jiménez

Reviewed: 06 January 2022 Published: 06 March 2022

DOI: 10.5772/intechopen.102508

From the Edited Volume

Haptic Technology - Intelligent Approach to Future Man-Machine Interaction

Edited by Ahmad Hoirul Basori, Sharaf J. Malebary and Omar M. Barukab


Abstract

Tele-operation systems have inspired virtual reality systems for medical training. In tele-operation, force sensors mounted on the robotic arms provide interaction-force information that is transmitted to the human operator; this force produces a tactile sensation that allows some properties of the remote or virtual environment to be felt. In the last two decades, however, researchers have focused on visually simulating the virtual environments present in a surgical setting. This implies that the force response is generated by methods that cannot reproduce some characteristics of virtual surfaces, such as penetrable objects. To solve this problem, we study a virtual reality system with haptic feedback using a tele-operation approach. By defining the operator-manipulated interface as the master robot and the virtual environment as the slave robot, and by addressing the virtual environment as a restricted-motion problem, we obtain the force response. We then implement a control algorithm, based on a tele-operation system, to feed the corresponding force back to the operator. We achieve this by designing a virtual environment using the dynamic model of a robot in contact with holonomic and non-holonomic constraints. In addition, in agreement with medical training simulators, there is always a free-motion stage before contact.

Keywords

  • Haptics
  • virtual reality system
  • bilateral teleoperation
  • holonomic constraint
  • non-holonomic constraint
  • position-force control

1. Introduction

Teleoperation and virtual reality systems are intrinsically related since both make a human operator interact with environments without being in physical contact with them. In the first case, these environments are real, while in the second, they are generated in a computer simulation. However, both types of systems must make the operator perceive, as realistically as possible, the characteristics of the remote or virtual environment. Some variables used to reproduce these characteristics are position and force, which provide visual and haptic feedback, respectively. The biggest difference between the two is how the information received by the operator is generated. In the teleoperation case, both signals exist physically and are transmitted via a control algorithm that receives them from sensors. In virtual reality, neither signal exists physically and both must be generated computationally.

Stimulating the senses of sight and touch, as precisely as possible, is essential during the interaction process, since they are the principal channels through which the operator perceives the world around them. For teleoperation, in the visual case, the information comes directly from the environment or, if the operator is in a remote place, through a camera and a computer screen. A virtual reality system generates the environment through a digital simulation, and the operator receives the visual information through a screen. The tactile channel is more complex and requires additional devices capable of transmitting the forces generated in the environment. This implies including haptic robots in the systems because of their capability to generate forces and torques that the human operator can perceive in a tactile way. Teleoperation systems need two physical robots, while a virtual reality system needs only one robot, with the virtual environment playing the role of the other.

The medical field has actively adopted both teleoperation and virtual reality systems. In the first case, a specialist can perform surgical procedures over long distances, eliminating the need for the physical presence of the physician and the patient in the same location [1]. In practice, the specialist can examine or operate on a patient at a different geographic location without having to travel.

In Figure 1, we show the emblematic Da Vinci Surgical Robotic System, which operates based on a master-slave control concept. The system provides the medical expert with a realistic operating environment that includes high-quality stereo visualization and a human-machine interface that directly transfers the doctor’s hand gestures to the instrument inside the patient [2].

Figure 1.

Da Vinci surgical robotic system.

In the second case, virtual reality systems have been widely used for medical training simulation. With the development of computer graphics, nowadays practically any surgical procedure can be visually simulated. Minimally invasive surgery has benefited the most; virtual environments have been implemented for laparoscopy, neurosurgery, and urology, to name a few [3]. As an example, Figure 2 shows the Transurethral Resection of the Prostate simulator (TURP Mentor). By using this system, clinicians can improve their skills without endangering patients and without live practice on humans. The inclusion of tactile feedback is an important factor in the improvement of such skills, and it must be synchronized with both the virtual reality simulation and the operator’s movements.

Figure 2.

TURP Mentor (simulator).

The challenge for researchers in computer graphics and control systems is to design mathematical tools that fit the physical characteristics of the objects to be simulated within the virtual environment. Regarding visual feedback, the position where such objects are located is essential for the operator to perceive his movements in the Cartesian virtual space. Regarding force, reproducing rigidity and softness becomes especially important when the virtual environment includes penetrable and nonpenetrable objects. Here, the complexity of the mathematical tools increases because their physical laws are not always easy to simulate on a computer. For this reason, a trade-off between visual and haptic realism must be established because of the finite capacity of digital processing.

1.1 State-of-the-art

Teleoperation and virtual reality applications have been developed in areas as different as the automotive industry and video games. However, an important application is medical surgery [4], where the operator on the master side needs to be sure of the force he feels. Such a force, generated in opposition to his movements, must ideally be the same as that applied by the robot on the patient at the slave side. Virtual reality systems have been widely used in minimally invasive surgical simulation, where the operator should feel the same forces that he would feel in a real procedure [5].

Goertz presented the first master-slave teleoperation system [6]. It was used to handle toxic waste using two coupled manipulators. Subsequently, using electrical signals, he included a rudimentary force generation device. Since then, such systems have been used in areas such as micro- and nano-manipulation [7], underwater exploration [8], and tele-surgery [9]. Haptic feedback takes special relevance in this last area since it is crucial for the surgeon to receive an accurate force response. The most effective way to do so is to have force sensors at both the master and slave sides, and a control algorithm that guarantees accurate tracking between the contact forces present in the remote environment and those sent to the operator [10].

The idea of force feedback in virtual reality systems began with the fundamental work of Sutherland [11]. He established that the interaction between the human operator and the virtual environment should be not only visual but also tactile. It was not until the 1990s that the Goertz device was adapted to provide force feedback during virtual molecular coupling [12]. Since then, the use of manipulators in virtual reality applications has spread to CAD/CAM assembly [13], aerospace maintenance [14], and especially medical training through simulators [15] where, unlike master-slave teleoperation systems, neither the environments nor the contact forces exist. The rendered forces must be transmitted to the operator with precision, and the quality of this transmission depends on the characteristics of the haptic interface and the corresponding force control algorithm [3].

Articulated robots play a major role in medical training simulation systems with force feedback since this kind of electromechanical device can measure spatial position and generate torques. There has been a large effort to design robotic haptic interfaces such as the widely used Phantom serial robot [16], the Delta Haptic parallel robot [17], and combinations of passive elements such as brakes and springs with motors [18]. Such robots are examples of impedance-type devices, i.e., they read position and control force in response. Other robots read forces and control motion; these are called admittance-type devices. The choice between one type or the other depends on the characteristics of the virtual environment (e.g., stiffness, inertia, damping, friction).

Along with haptic interfaces, there has been intense development of graphical simulation tools capable of reproducing a wide range of virtual environments. The principal aim is for the operator to perceive, as realistically as possible, objects with a high level of detail. The applications developed include microscopic exploration [19], aviation [20], and clinical neuropsychology [21], among many others. In medical training simulation, a correct synergy between visual and tactile feedback is essential to improve the skills of medical students. However, to increase immersion, and consequently the realism of the virtual reality displays, it is necessary to model environments that combine haptics and graphics at the same level of complexity [22]. This is not always possible, since computational processing is limited and the applications cannot always be executed in real time.

Salisbury et al. [23] presented the basic architecture for a virtual reality application with visual and haptic feedback. They established that the force rendering algorithms must be geometry-dependent. This is a disadvantage in medical training simulation since the virtual objects to be reproduced include bones and organs with irregularities or indentations. There are cases in which the interaction occurs not only on the surfaces of the object; penetration forces must also be calculated, as in surgery simulators. The alternative is to design algorithms based on physical laws that involve the dynamics and motion of the objects when the operator interacts with them [24]. The perfect scenario would be to render forces by combining physical approaches with the most sophisticated haptic interfaces. However, as mentioned before, doing so is computationally more expensive, not to mention the high costs it would entail.

The more realistic the force transmitted to the operator, the higher the quality of the method used. The factors mentioned above lead to a series of trade-offs between haptic and visual realism, real-time execution, and system costs. Such trade-offs allowed two principal methods for rendering forces from virtual environments to be established [25]. The first one is the penalty method, widely used for its simplicity, which requires a penetration measure starting at the contact point with the virtual object. The second approach is the imposed motion method, where the contact is considered a bilateral constraint and the contact force response is calculated using Lagrange multipliers. Both methods are used in computer graphics and haptic applications. Ruspini et al. established the differences between the two methods [22].

The penalty method assumes that virtual objects are formed, geometrically or algebraically, by defined primitives such as lines, planes, spheres, and cylinders [26]. Therefore, the force rendering depends on an implicit equation and a contact point given by a collision detection algorithm. In 1991, Sclaroff and Pentland proposed a generalization using the implicit function representation method to allow collision detection for common 3D shapes [27], replacing polygon and spline representations. By using this technique, it is possible to use local gradients in the normal direction of the virtual surface [28]. In this sense, the concept of impedance takes special relevance since it addresses the force rendering problem as an energy exchange phenomenon, allowing the stability of the haptic system to be studied [29].
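To make the penalty idea concrete, the following Python sketch (an illustrative example only, not the implementation used in this chapter) renders a force from an implicit sphere: when the probe penetrates the surface, a force proportional to the penetration depth is returned along the local gradient (normal) direction.

```python
import numpy as np

def implicit_sphere(x, center, radius):
    """Implicit function f(x) = ||x - c|| - r: negative inside, zero on the surface."""
    return np.linalg.norm(x - center) - radius

def penalty_force(x, center, radius, stiffness=500.0):
    """Penalty-method force: push the probe out along the surface normal,
    with magnitude proportional to the penetration depth."""
    f = implicit_sphere(x, center, radius)
    if f >= 0.0:                      # outside or exactly on the surface: free motion
        return np.zeros(3)
    normal = (x - center) / np.linalg.norm(x - center)   # local gradient direction
    return -stiffness * f * normal    # f < 0, so the force points outward

# Example: probe 5 mm inside a 50 mm sphere centered at the origin
print(penalty_force(np.array([0.0, 0.0, 0.045]), np.zeros(3), 0.05))
```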

Using the imposed motion method implies considering the contacts with the virtual object as bilateral constraints [25]. Therefore, haptic systems become a master-slave scheme in which the rigid object, the virtual representation of the haptic interface, is the master. This entails employing Lagrange multipliers to compute the magnitude of the contact force. In 1994, Bayo and Avello designed an algorithm by considering the dynamics of a multi-body system as a constraint [30]. The advantage of this approach is that the force response can be given not only in terms of a single contact point but also in terms of the dynamic characteristics of the surfaces [31]. However, the hardware requirement for haptic rendering in 3D is that the haptic interface have at least 3 degrees of freedom.

The methods mentioned above have been the cornerstone of virtual force generation in both computer graphics and haptic systems. One of the principal requirements in such areas is that the systems be capable of reproducing the forces that would be present during contact with rigid and soft objects. This is especially important in the development of simulators used in medical training, where the tactile sensations caused by contact between a virtual tool and bones or organs must be reproduced [32]. Depending on the goals of the design, a compromise must be made between haptic and visual realism.


2. Preliminaries

In order to implement a virtual reality application, it is important to combine and match both visual and haptic feedback in real time. Next, we introduce the fundamentals of virtual surface representation, which allow us to show how the force rendering method proposed in [33] works. The mathematical nature of holonomic and non-holonomic constraints is presented by introducing some basic concepts of differential topology and the mathematical model of the haptic system. For this, a teleoperation scheme is adapted in which the slave robot is virtual and its dynamic model is simulated within the virtual environment. This robot is in contact with holonomic or non-holonomic virtual constraints, whose mathematical representation is also presented. The process for rendering the forces to be transmitted to the operator is detailed below, as well as some observations on the validity of the approach, especially regarding the differences when using the two types of constraints.

2.1 Virtual surfaces representation

It is important to distinguish between the different representations of virtual surfaces, both from the point of view of computer graphics and from the point of view of haptic systems, because the physics of light (associated with visual representation) differs from the physics of mechanical interactions. Although the graphical and haptic simulations can share the coding of certain properties, such as shape, they may differ in other aspects, such as models, mathematical techniques, and implementation [32]. It is also important to note that haptic rendering in practice avoids many of the complex representations developed by the graphics computing community over the years. However, in this section, the basics of such representations are given to introduce the fundamentals of the proposed haptic representation method, whose central idea is to relate holonomic and non-holonomic constraints, which mathematically have a kinematic basis, with the rigid and soft tissues of the body. The dynamic complexity of the surface is avoided and, instead, from a purely haptic approach, the surface is classified as nonpenetrable or penetrable. By using the approach of a manipulator in constrained motion, the dynamics of the virtual robot are included to show that the virtual tool can be modeled in a more complex way than the single-point probe commonly used in computer graphics [34].

2.1.1 Nonpenetrable virtual surfaces

In order to represent forces coming from rigid surfaces, it suffices to define algebraically an implicit equation in task-space coordinates, or at least two of its geometric characteristics, such as normal and tangential vectors, or distance and angle relationships between points, lines, and planes [35]. This simplifies the graphical and haptic implementation since it is possible to define the surface as the zero set of a real-valued function $f$, $S_f = \{x \in \mathbb{R}^3 \mid f(x) = 0\}$ [36]. Based on this approach, rigid virtual environments are formed by lines, points, and mainly zero-width polygons assembled by vertices [31]. For example, in Figure 3, a human skull is modeled using a triangle mesh with the PhysX® graphics engine by NVIDIA.

Figure 3.

Convex decomposition by Phys X®. (a) Triangle mesh collision. (b) Convex decomposition.

In this case, the PxGeometry class by NVIDIA, the common base class of the geometry classes, is used. Each geometry class defines a volume or surface with a fixed position and orientation. Many geometry types can be implemented, from simple ones such as spheres, capsules, boxes, and planes to more complex ones such as convex meshes, triangle meshes, and height fields [37]. Besides, the methods to build a complex object such as a human skull include triangle mesh collision and convex decomposition, Figure 3. The haptic interaction with virtual objects formed by such geometries and methods comes from a normal vector over the geometry, as we can see in Figure 4, where the haptic interaction with a sphere occurs using a single-point collision detection algorithm.

Figure 4.

Haptic interaction with a sphere. (a) Applied external forces. (b) Collision detection. (c) Collision response.

2.1.2 Penetrable virtual surfaces

The case of deformable surfaces is more complex since, from a biomedical approach, a physically realistic simulation must consider all the nonlinearities of material deformation (e.g., stress, strain, elasticity, and viscoelasticity). One strategy is to combine a finite element discretization of the geometry with a finite difference discretization of time and an updated Lagrangian iterative scheme [38]. Another widely used representation of deformable surfaces in computer graphics is the particle-based model. Particles are described by their location, speed, acceleration, mass, and any other parameter needed for the application, and they evolve according to Newtonian mechanics [36]. For example, in Figure 5, we model a deformable tissue using the FLEX® graphics engine by NVIDIA where, from a polygon-based mesh, a particle system is obtained through a Delaunay triangulation.

Figure 5.

Particle model by FLEX (soft tissue). (a) Original mesh. (b) Delaunay triangulation. (c) Mass-spring like system.

Using particle-based models makes it possible to visually reproduce more complex processes such as cutting and slitting, which are very common in medical training. However, no matter how complicated the underlying model is, the force response due to deformation is a function of deflection only; that is, deformation is defined as the displacement of the initial contact point between an instrument and a deformable body, even if the deformation is very large [32]. This characteristic allows us to study, from a kinematic perspective, the manner in which the tool moves once it is inside the soft object.

More sophisticated graphic tools such as SOFA (Simulation Open Framework Architecture) address the description of the object typically by using three models: an internal model with independent degrees of freedom, the mass, and the constitutive laws; a collision model with contact geometry; and a visual model with detailed geometry and rendering parameters [39]. During runtime, the models are synchronized using a generic mechanism called mapping, which is used to propagate forces and displacements and to enforce coherence between them. Normally, the internal model acts as the master system, imposing its displacements on the slave systems (the visual and collision models).

Let f be the function used to map the positions xm of a master model to the position xs of a slave model

$x_s = f(x_m).$ (1)

We map the velocities similarly to

$\dot{x}_s = J(x_m)\,\dot{x}_m$ (2)

where the Jacobian matrix $J(x_m)$ encodes the linear relationship between the velocities of the master and slave systems. For linear mappings, the operators $f$ and $J(x_m)$ coincide; otherwise, $f$ is nonlinear in $x_m$ and cannot be written as a matrix.

Given forces $\lambda_s$ applied to a slave model, the mapping computes the equivalent forces $\lambda_m$ applied to its master. Since equivalent forces must develop the same power [39], the following relation holds

$\dot{x}_m^T\lambda_m = \dot{x}_s^T\lambda_s.$ (3)

The kinematic relation (2) allows Eq. (3) to be rewritten as

$\dot{x}_m^T\lambda_m = \dot{x}_m^T J^T(x_m)\,\lambda_s.$ (4)

Since Eq. (4) holds for all velocities ẋm, the principle of virtual work allows us to simplify it to get

$\lambda_m = J^T(x_m)\,\lambda_s.$ (5)

The kinematic mappings (1), (2), and (5) allow displacements to be computed and forces to be applied. They are also used to connect generalized coordinates, such as joint angles, to task-space geometries.
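As a minimal numerical illustration of the mappings (1), (2), and (5), the sketch below assumes a simple, hypothetical mapping f (not taken from the chapter): positions are propagated with f, velocities with the Jacobian J, and forces back to the master with the transpose of J.

```python
import numpy as np

# Hypothetical nonlinear mapping f from master coordinates x_m in R^2
# to slave coordinates x_s in R^3 (Eq. (1)); chosen only for illustration.
def f(x_m):
    return np.array([x_m[0] + 0.5 * np.sin(x_m[1]),
                     x_m[1],
                     x_m[0] * x_m[1]])

def jacobian(x_m):
    # Analytic Jacobian J(x_m) = df/dx_m, here a 3x2 matrix (Eq. (2)).
    return np.array([[1.0, 0.5 * np.cos(x_m[1])],
                     [0.0, 1.0],
                     [x_m[1], x_m[0]]])

x_m      = np.array([0.2, 0.3])       # master position
xdot_m   = np.array([0.1, -0.05])     # master velocity
lambda_s = np.array([0.0, 1.5, 0.5])  # force applied to the slave model

x_s      = f(x_m)                      # Eq. (1): position mapping
xdot_s   = jacobian(x_m) @ xdot_m      # Eq. (2): velocity mapping
lambda_m = jacobian(x_m).T @ lambda_s  # Eq. (5): equivalent force on the master

print(x_s, xdot_s, lambda_m)
```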

2.2 Geometry of a constrained sub-manifold

From a robot control approach, we can consider a tool in contact with an object as a robot in constrained motion. The constraints of this system are well defined if we associate them with physically realizable forces. This occurs, for example, with an industrial robot in contact with a real surface, such as a car bonnet in a painting or welding task. But in virtual environments, where surfaces do not physically exist, there are no physical constraint forces associated with them. Thus, the constraints are not well defined, and they are called virtual constraints [40]. Nonpenetrable and penetrable virtual surfaces are mathematically addressed as virtual constraints. In this sense, it is important to introduce the geometric properties of such constraints in order to define them as either holonomic or non-holonomic.

Let Q be the n-dimensional smooth configuration manifold of an unconstrained manipulator and $q \in \mathbb{R}^n$ its local generalized coordinates. The tangent space to Q at q, denoted $\tau_qQ$, comprises all generalized velocity vectors $\dot{q} \in \mathbb{R}^n$ of the system.

Definition 2.1. A geometric constraint on Q is a relation of the form

$h_i(q) = 0, \qquad i = 1, \ldots, k < n,$ (6)

where $h_i: Q \to \mathbb{R}$ limits the admissible motions of the system to an $(n-k)$-dimensional smooth sub-manifold of Q.■

Constraints that involve not only the generalized coordinates but also their first derivatives in the form

$a_i(q, \dot{q}) = 0, \qquad i = 1, \ldots, k < n,$ (7)

with $a_i(q, \dot{q}) \in \tau_qQ$, are called kinematic constraints. These limit the allowable movements of the manipulator to an $(n-k)$-dimensional smooth sub-manifold of Q by restricting the set of generalized velocities that can be achieved in a configuration.

Definition 2.2. A Pfaffian constraint on Q is a set of k kinematic constraints, which are linear in velocity in the following form

$a_i^T(q)\,\dot{q} = 0, \qquad i = 1, \ldots, k < n,$ (8)

where $a_i(q): Q \to \mathbb{R}^n$ is assumed to be smooth and linearly independent.■

A kinematic constraint may be integrable, i.e., there may exist k real-valued functions $h_i(q)$ such that

$\dfrac{\partial h_i(q)}{\partial q} = a_i^T(q), \qquad i = 1, \ldots, k < n.$ (9)

In this case, the kinematic constraints are, in fact, geometric constraints. A set of Pfaffian constraints $a_i(q)$ is known as holonomic if it is integrable; then the system has a geometric limitation. Consider, for example, a set of holonomic constraints characterized by

$\varphi_i(q) = 0, \qquad i = 1, \ldots, k < n.$ (10)

From Eq. (9), we get

$\nabla_q \varphi_i(q) = J_\varphi^T(q)$ (11)

where $J_\varphi(q) \in \mathbb{R}^{k\times n}$ is the Jacobian of the holonomic constraint. Therefore, holonomic constraints are characterized by equivalent equations in terms of position variables; if the constraints are initially described by velocity equations, the position equations can be obtained by integrating them [41].

Property 2.1. Given n generalized coordinates q in a sub-manifold Q and k holonomic constraints, the space tangent to Q in a configuration can be described by adequately defining $(n-k)$ new generalized coordinates of the restricted sub-manifold that characterize the actual degrees of freedom of the system [42].■

The set of Pfaffian constraints $a_i(q)$ is called non-holonomic if it is non-integrable; then the system has a kinematic limitation. Assuming again that the vectors $a_i: Q \to \mathbb{R}^n$ are smooth and linearly independent, the non-holonomic constraints can be expressed as

$A(q)\,\dot{q} = 0$ (12)

where $A(q) \in \mathbb{R}^{k\times n}$ is the Pfaffian matrix of non-holonomic constraints, whose image space produces forces that ensure the system does not move in those directions. The presence of these constraints limits the system mobility in a completely different way compared to holonomic ones: even though the generalized velocities at each point are constrained to an $(n-k)$-dimensional sub-manifold, it is still possible to reach any configuration in Q.

Property 2.2. Given n generalized coordinates q in a sub-manifold Q and k non-holonomic constraints, the space tangent to Q in a configuration has $(n-k)$ degrees of freedom, but the number of generalized coordinates cannot be reduced [43].■

Remark 2.1. We assume that non-holonomic constraints are given at the velocity level, Eq. (12), while holonomic constraints are described at the position level, Eq. (10). In practical problems, both types of constraints may be described by velocity-level equations.■

2.2.1 Integrability of the constraints

A vector field $g: \mathbb{R}^n \to \tau_q\mathbb{R}^n$ is a smooth mapping assigning to each point $q \in \mathbb{R}^n$ a tangent vector $g(q) \in \tau_q\mathbb{R}^n$. In local coordinates, we can represent g as a column vector whose elements depend on q,

$g(q) = \begin{bmatrix} g_1(q) \\ \vdots \\ g_n(q) \end{bmatrix},$ (13)

where g is smooth if each $g_i(q)$ is smooth.

Given $g_1$ and $g_2$, we define the Lie bracket of these vector fields as

$[g_1, g_2] = \dfrac{\partial g_2}{\partial q}\,g_1 - \dfrac{\partial g_1}{\partial q}\,g_2$ (14)

where $[g_1, g_2]$ is a new vector field.

A distribution smoothly assigns a subspace of the tangent space to each point in $\mathbb{R}^n$. A special case is a distribution defined by a set of smooth vector fields $g_1, \ldots, g_m$. In this case, the distribution can be defined as

$\Delta = \mathrm{span}\{g_1, \ldots, g_m\},$ (15)

where the span is taken over the set of smooth real-valued functions on $\mathbb{R}^n$. Evaluated at any point $q \in \mathbb{R}^n$, the distribution defines a linear subspace of the tangent space,

$\Delta(q) = \mathrm{span}\{g_1(q), \ldots, g_m(q)\} \subset \tau_q\mathbb{R}^n.$ (16)

A distribution is involutive if it is closed under the Lie bracket, i.e.,

$[g_i, g_j] \in \Delta, \qquad \forall\, g_i, g_j \in \Delta.$ (17)

A distribution Δ of dimension k is said to be integrable if, for every point $q \in \mathbb{R}^n$, there exists a set of smooth functions $h_i: \mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, n-k$, such that the row vectors $\partial h_i/\partial q$ are linearly independent at q and, for every $g \in \Delta$,

$\dfrac{\partial h_i}{\partial q}\,g(q) = 0, \qquad i = 1, \ldots, n-k.$ (18)

The hypersurfaces defined by the level sets $h_1(q) = c_1, \ldots, h_{n-k}(q) = c_{n-k}$ are called integral manifolds of the distribution Δ. Eq. (18) shows that Δ coincides with the tangent space to its integral manifold at q. Integral manifolds are related to involutive distributions by the following Frobenius theorem [43].

Theorem 2.1. A distribution is integrable if and only if it is involutive.■

This theorem gives a necessary and sufficient condition for the complete integrability of a distribution. Thus, if Δ is a k-dimensional involutive distribution, then locally there exist $(n-k)$ functions $h_i: \mathbb{R}^n \to \mathbb{R}$ such that the level surfaces of $h = (h_1, \ldots, h_{n-k})$ are integral manifolds of Δ. The result mentioned above gives conditions for the integrability of a set of kinematic constraints in the following proposition.

Proposition 2.1. [44] The set of k Pfaffian constraints described in (8) is holonomic if and only if Δ is an involutive distribution.■

So it is possible to establish when a Pfaffian constraint is non-holonomic by checking if its distribution is not involutive.

2.3 Haptic system overview

A common practice in the computer graphics community is to associate the position and orientation of virtual tools directly with those of the haptic interface. However, this assumption holds only because some real tools, such as a scalpel, have negligible dynamics. From a teleoperation approach, we assume that even the simplest tool has dynamic properties that should be considered in the virtual environment. This section presents a description of this proposal, both mathematical and intuitive.

In order to describe the operation of the haptic system, two independent sets of task space coordinates are considered as shown in Figure 6.

Figure 6.

Haptic system.

The operator manipulates the haptic interface, i.e., the master robot, in the real environment; its Cartesian coordinates are denoted $x_m \in SE(3)$, where $p_m \in \mathbb{R}^3$ is the end-effector position and $R_m \in SO(3)$ its orientation. The virtual tool must respond to the movements of this interface in the virtual environment with Cartesian coordinates $x_v \in SE(3)$, where $p_v \in \mathbb{R}^3$ is the virtual tool position and $R_v \in SO(3)$ its orientation. In a teleoperation context, the position of the master robot acts as a reference for the virtual tool and is visually projected on the screen through the virtual avatar of the system. The operator moves the virtual tool freely until the collision detection algorithm indicates that contact with the virtual surface is taking place. From that moment on, the master robot exerts a force, measured by a force sensor, that serves as a reference for the force the virtual robot must apply to that surface. By closing the feedback loop, the control algorithm produces a tactile sensation for the operator. Ideally, both visual and haptic feedback must coincide, allowing the operator to have a visual reference of the virtual tool and to feel the dynamic changes of its contact with the virtual surface.

In a similar approach, Faurlin et al. clarify in [44] that the virtual environment can be represented by a set of generalized coordinates qvR3, which are related to the task-space coordinates of the master robot by a nonlinear kinematic equation

$x_m = f(q_v)$ (19)

which is a mapping between the real and virtual environment, similar to that of Eq. (1) but where the former acts as a master model.

The set of coordinates $q_v$ allows the dynamic model of the virtual tool to be described in terms of the Euler-Lagrange equations of motion. A set of holonomic or non-holonomic constraints representing the virtual surface can be embedded into the kinematic mapping (19), relating independent master robot task-space coordinates to dependent virtual robot task-space coordinates. The virtual tool moves according to the physical simulation propagated in the virtual environment coordinates and always satisfies such constraints.

2.3.1 Dynamic model and properties

Consider a real master (m) and a virtual slave (v) robot system composed of two manipulators, each with n degrees of freedom but not necessarily with the same kinematic configuration. Each robot spans a k-dimensional task space and, based on the master/virtual devices, can be scaled to meet the desired virtual application. The master dynamics are given by

$H_m(q_m)\ddot{q}_m + C_m(q_m, \dot{q}_m)\dot{q}_m + D_m\dot{q}_m + g_m(q_m) = \tau_m - \tau_h$ (20)

while the virtual slave dynamics is modeled by

$H_v(q_v)\ddot{q}_v + C_v(q_v, \dot{q}_v)\dot{q}_v + D_v\dot{q}_v + g_v(q_v) = \tau_v + \tau_s$ (21)

where the subscripts m and v denote the real master and the virtual slave manipulators, respectively. For $i = m, v$: $q_i \in \mathbb{R}^n$ is the vector of generalized coordinates, $H_i(q_i) \in \mathbb{R}^{n\times n}$ is the inertia matrix, $C_i(q_i, \dot{q}_i)\dot{q}_i \in \mathbb{R}^n$ is the vector of Coriolis and centripetal forces, $D_i \in \mathbb{R}^{n\times n}$ is a diagonal matrix of viscous friction coefficients, $g_i(q_i) \in \mathbb{R}^n$ is the vector of gravitational torques, $\tau_i \in \mathbb{R}^n$ is the vector of generalized inputs, $\tau_h \in \mathbb{R}^n$ is the real torque applied by the human operator on the master side, and $\tau_s \in \mathbb{R}^n$ is the virtual torque generated by the contact with the virtual constraint [45].

Property 2.3. With a proper definition of the robot parameters, it is possible to express the robot dynamics as

$H_i(q_i)\ddot{q}_i + C_i(q_i, \dot{q}_i)\dot{q}_i + D_i\dot{q}_i + g_i(q_i) = Y_i(q_i, \dot{q}_i, \ddot{q}_i)\,\theta_i$ (22)

where $Y_i(q_i, \dot{q}_i, \ddot{q}_i) \in \mathbb{R}^{n\times l}$ is the regressor and $\theta_i \in \mathbb{R}^{l}$ is a constant vector of parameters.■
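Property 2.3 can be verified on a toy example. The following sketch uses a single-link pendulum (an illustrative choice, not part of the chapter) to write the left-hand side of Eq. (22) as a regressor multiplied by a constant parameter vector.

```python
import numpy as np

# Single-link pendulum: H = m*l^2, C = 0, D = d, g(q) = m*g0*l*sin(q).
m, l, d, g0 = 1.2, 0.35, 0.05, 9.81
theta = np.array([m * l**2, d, m * g0 * l])   # constant parameter vector

def regressor(q, qd, qdd):
    # Y(q, qd, qdd) such that H*qdd + D*qd + g(q) = Y @ theta (Eq. (22))
    return np.array([qdd, qd, np.sin(q)])

q, qd, qdd = 0.4, -0.2, 1.0
tau_model     = m * l**2 * qdd + d * qd + m * g0 * l * np.sin(q)
tau_regressor = regressor(q, qd, qdd) @ theta
print(np.isclose(tau_model, tau_regressor))   # True: linear-in-parameters property
```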

Assumption 2.1. The master and the virtual slave robots share the same geometric structure, but they do not necessarily have the same parameters of the dynamic model, that is, the matrices and vectors of the models described in (20) and (21) do not have to be the same.■

External torques act on both robots: the real torque $\tau_h$ applied by the human on the master side and the virtual torque $\tau_s$ generated by the contact between the virtual robot and the virtual surface. The torque applied by the human operator can be defined as

$\tau_h = J_m^T(q_m)\,F_h$ (23)

where $F_h \in \mathbb{R}^3$ is the force applied by the operator in task-space coordinates and $J_m(q_m) \in \mathbb{R}^{3\times n}$ is the geometric Jacobian of the master manipulator. In the same way, the torque applied on the virtual surface can be expressed as

$\tau_s = J_v^T(q_v)\,F_v$ (24)

where $F_v \in \mathbb{R}^3$ is the force applied on such surface in task-space coordinates.

2.3.2 Virtual holonomic constraints

In the case of holonomic constraints, we assume that, in virtual task-space coordinates, the virtual robot is subject to k virtual holonomic constraints characterized by

$\varphi_v(x_v) = 0$ (25)

where a suitable normalization is done for the gradient of this constraint,

$J_{\varphi x_v}(x_v) = \dfrac{\partial \varphi_v(x_v)}{\partial x_v} \in \mathbb{R}^{k\times n},$ (26)

to be unitary.

The representation of constraint (25) in generalized virtual coordinates is

$\varphi_v(q_v) = 0$ (27)

where $q_v \in \mathbb{R}^n$ is the vector of virtual robot joint coordinates. The gradient of the constraint (27) is

$J_{\varphi v}(q_v) = \dfrac{\partial \varphi_v(q_v)}{\partial q_v} \in \mathbb{R}^{k\times n}.$ (28)

These two gradients are related by

$J_{\varphi v}(q_v) = J_{\varphi x_v}(x_v)\,J_v(q_v)$ (29)

where $J_v(q_v)$ is the geometric Jacobian of the virtual manipulator. Hence, the torque due to the contact with the virtual surface in (21) can be defined as

$\tau_s = J_{\varphi v}^T(q_v)\,\lambda_v$ (30)

where $\lambda_v \in \mathbb{R}^k$ is a vector of Lagrange multipliers that represents the virtual force applied over the surface. Then, it is possible to rewrite Eq. (21) as

$H_v(q_v)\ddot{q}_v + C_v(q_v, \dot{q}_v)\dot{q}_v + D_v\dot{q}_v + g_v(q_v) = \tau_v + J_{\varphi v}^T(q_v)\,\lambda_v.$ (31)

According to Property 2.1, the virtual holonomic constraints (27) reduce the number of degrees of freedom of the virtual robot and the dimension of its configuration space to an $(n-k)$-dimensional sub-manifold [43].

2.3.3 Virtual non-holonomic constraints

In the case of non-holonomic constraints, it is well known that they cannot be expressed as a function of only the generalized coordinates as in (25) or (27). Instead, they are commonly expressed as Pfaffian constraints. In the present case, these constraints are written more intuitively in terms of the virtual end-effector velocities $v_v = [\dot{p}_v^T \;\; \omega_v^T]^T$ as

$A_v(x_v)\,v_v = 0,$ (32)

where $\dot{p}_v \in \mathbb{R}^3$ and $\omega_v \in \mathbb{R}^3$ are the linear and angular velocities of the virtual end-effector, respectively, and $A_v(x_v) \in \mathbb{R}^{k\times n}$ is the corresponding Pfaffian constraint matrix. If the dynamic equations are defined in the virtual joint-space coordinates $q_v$, these constraints are projected, following Faurling et al. [44], as

$A_v(q_v) = A_v(x_v)\,J_v(q_v).$ (33)

Assuming that the virtual robot is subject to k velocity-level equations of non-holonomic constraints characterized by

$A_v(q_v)\,\dot{q}_v = 0,$ (34)

the torque because of the contact with the virtual environment in (21) can be expressed as

$\tau_s = A_v^T(q_v)\,\lambda_v$ (35)

where $\lambda_v \in \mathbb{R}^k$ is the vector of Lagrange multipliers which determines the magnitude of the constraint forces over the virtual surface. Then, it is possible to rewrite Eq. (21) as

$H_v(q_v)\ddot{q}_v + C_v(q_v, \dot{q}_v)\dot{q}_v + D_v\dot{q}_v + g_v(q_v) = \tau_v + A_v^T(q_v)\,\lambda_v.$ (36)

The non-holonomic constraints reduce the number of available degrees of freedom of the virtual robot to $(n-k)$, but they do not reduce the dimension of its configuration space [43, 46].


3. System implementation

In this section, the theoretical and practical aspects of implementing a virtual reality system with virtual constraints are presented. The principal aspect of concern is the design of a controller capable of providing accurate haptic feedback that makes the operator feel in contact with either a penetrable or a nonpenetrable virtual surface. The method for visually reproducing the virtual tool in contact with the virtual objects is also presented. It is important to note that this method avoids the complexity of the virtual environments currently implemented in the simulators used in medical training. However, the basic aspects are addressed to show that the virtual constraints approach can be used in practice and eventually adapted to sophisticated graphic computing tools.

3.1 Virtual environment design

As mentioned in Section 2, the key to obtaining realistic haptic feedback from a surface embedded in a virtual environment is defining its geometry. In Figure 7, we show an idealized representation of a virtual point probe in contact with either a nonpenetrable or a penetrable virtual surface.

Figure 7.

Penetrable (dashed line) and nonpenetrable (solid line) surfaces.

In the first case, we assume that the contact arises at a single point on the surface, from which the virtual probe cannot move forward, i.e., its velocity is equal to zero. Therefore, motion is prevented by a normal force vector whose magnitude increases depending on the force applied by the operator. In robot control terms, if the probe is assumed to be attached to the end-effector of a manipulator then, according to Property 2.1, the number of degrees of freedom of the system in contact with the surface is reduced. Intuitively, this means that the virtual probe cannot move forward from where the contact arises, which can be at any point on the surface [47]. In fact, if the virtual object is built using a polygonal method, there will be a set of surfaces $\varphi_0, \varphi_1, \ldots, \varphi_n$ joined by vertices, as shown in Figure 7.

The best way to find the location of the contact point (which belongs to the set of points defining each surface) is to establish an implicit equation $\varphi_v(x_v)$ containing such a point. The set of points defining the virtual surface is

$\varphi_v(x_v) = 0,$ (37)

which coincides with the holonomic constraint of Eq. (25) in Cartesian coordinates, or Eq. (27) in generalized coordinates. From those expressions, a collision detection algorithm can be established by defining the following conditions (a minimal sketch is given after the list):

  • If $\varphi_v(x_v) > 0$, the virtual probe is in free motion, i.e., it is not in contact with the virtual surface.

  • If $\varphi_v(x_v) = 0$, the virtual probe is in contact with the virtual surface and stays on it, remaining in constrained motion.

  • If $\varphi_v(x_v) < 0$, the virtual probe is in constrained motion but the constraint is violated, i.e., the probe is inside the virtual surface.
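A minimal sketch of this sign-based detection, assuming for illustration a spherical constraint $\varphi_v$ (not the chapter's actual environment), is:

```python
import numpy as np

def phi_v(x_v, center=np.zeros(3), radius=0.05):
    """Holonomic constraint of Eq. (37): zero on the surface of a sphere."""
    return np.linalg.norm(x_v - center) - radius

def contact_state(x_v):
    f = phi_v(x_v)
    if f > 0.0:
        return "free motion"            # probe outside the surface
    elif np.isclose(f, 0.0):
        return "constrained motion"     # probe exactly on the surface
    else:
        return "constraint violated"    # probe inside the surface

print(contact_state(np.array([0.0, 0.0, 0.08])))   # free motion
print(contact_state(np.array([0.0, 0.0, 0.05])))   # constrained motion
print(contact_state(np.array([0.0, 0.0, 0.03])))   # constraint violated
```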

For the virtual reality system, it is important to remember that the vector $x_v$ represents the virtual robot’s end-effector position, and the virtual probe shares this position. By extending the proposed approach, the virtual robot in fact acts as the virtual tool, and its position is projected to the operator through the avatar of the system. In the second case, that of a penetrable constraint, even though the contact starts at a single point, the properties of the surface allow the virtual probe to stay in motion, as we can see in Figure 7. Intuitively, the process is more complex since, at a certain moment, the virtual probe must stop. In a medical context, this means that the deformable tissue has a limited resistance that depends on its elastic properties. Since the aim is to obtain a reaction force that depends on the motion of the virtual probe once inside the deformable object, it is necessary to describe such motion properly. Unlike the non-deformable case, the force does not arise normal to the surface at a single point; instead, lateral forces occur when the operator tries to move the virtual probe in such directions. Ideally, this would be true for any method used to represent soft tissues, including finite element meshes and particle-based models. Figure 8 shows the virtual tool motion inside a virtual object.

Figure 8.

Motion in 2D of the virtual tool inside a virtual object.

For ease of visualization, the motion is shown in 2D, but during the simulation it must be reproduced in 3D with the aim of increasing the realism of the application and improving the operator’s dexterity. The contact begins in stage A, where the virtual tool penetrates the object by following a straight trajectory, represented by a blue arrow, to reach the position of stage B. It can follow other trajectories, represented by dashed red lines, to reach the position of stage D or stage C. However, the tool cannot move laterally because the reaction forces of the surrounding tissue (represented by the red arrows) prevent it along those trajectories. In contrast to what happens in the holonomic case, the virtual tool may stay in motion, i.e., its velocity differs from zero until the operator stops voluntarily. The process described above is like the motion of a wheeled car in 2D, which is perfectly described by non-holonomic constraints. In fact, if we add a third dimension, Property 2.2 still holds and the virtual tool can reach any point in the virtual object.

The trajectories of the virtual tool shown in Figure 8 are common in noninvasive surgical procedures. For example, in the Transurethral Resection of the Prostate simulator (TURP Mentor) of Figure 2, the medical trainee first performs a straight trajectory that simulates the insertion of the resectoscope into the patient’s penis. Once within the virtual prostate, the resectoscope needs to be moved to remove, through an incandescent resection loop, the benign tissue that obstructs the flow of urine through the urethra. These movements follow routes similar to the one represented with red dotted lines in Figure 8 and use a pivot where the tool changes its direction.

The contribution of this approach, in contrast to common single-point haptic feedback methods, is that forces preventing lateral movement of the virtual tool are produced. However, the major disadvantage is that the elastic properties of the environment are not considered. As a result, the reaction force that would limit the movement of the operator as a function of such properties is not reproduced, and the virtual tool can be moved freely within the virtual object, which does not happen in real life. For example, in human organs, elastic properties and parameters such as Young’s modulus or Poisson’s ratio establish limits of movement for the tool that, when exceeded, damage the tissue.

3.1.1 Virtual environment design

In Figure 9, we show a scheme in 2D of the contact between a virtual tool and a virtual surface to illustrate the use of the model given by Eq. (21).

Figure 9.

Virtual robot in interaction with a penetrable surface.

We attach the tool to the virtual robot’s last degree of freedom, acting as its end-effector. It is important to note that the manipulator dynamic model is used to reproduce a classic bilateral teleoperation system and that, since it is simulated digitally, it can be exchanged for a simpler or more complex model, including those of medical instruments such as forceps, endoscopes, grippers, and retractors. This assumption leads to the proposition that, if the model of a surgical tool is used during the simulation, the realism of the contact with the surface would increase. The principal difference between defining a holonomic and a non-holonomic constraint is the need for an expression of $\varphi_v(x_v)$. Based on Section 2.1.1, from an implicit representation approach, rigid virtual objects are built from basic 3D geometric primitives such as cones, pyramids, planes, cubes, and spheres [22]. Ultimately, the basis of a highly complex virtual environment composed of rigid objects is a set of basic geometric shapes that can be represented through mathematical expressions. Therefore, it is enough to define a zero set of functions as in (25), individually expressed as $\varphi_v(x_v)$, and a collision detection algorithm based on the inequalities stated above. For non-holonomic constraints, and considering again the virtual robot of Figure 9, let $^0p_v \in \mathbb{R}^3$ be the Cartesian position of the virtual robot end-effector and $^0R_v \in SO(3)$ a rotation matrix that describes its orientation. This rotation matrix can be divided into three column vectors as

$^0R_v = \begin{bmatrix} ^0x_{nv} & ^0y_{nv} & ^0z_{nv} \end{bmatrix},$ (38)

where each column represents an axis of the end-effector coordinate frame described in the base frame. This allows Pfaffian constraints like (32) to be defined in an intuitive form, i.e.,

$A_v\!\left({}^0x_{nv}, {}^0y_{nv}, {}^0z_{nv}\right)\,v_v = 0.$ (39)

We claim that a set of non-holonomic constraints can be defined if the manipulator degrees of freedom are greater than those necessary to control the end-effector position, i.e., n>2 for planar robots and n>3 for robots in a three-dimensional workspace. The end-effector velocities of the virtual robot can be described by

$v_v = \begin{bmatrix} ^0\dot{p}_v \\ ^0\omega_n \end{bmatrix}$ (40)

where $\omega_n$ is the angular velocity about an axis normal to the robot plane, and $^0\dot{p}_v$ is the linear velocity defined as

$^0\dot{p}_v = \begin{bmatrix} ^0\dot{p}_{vx} \\ ^0\dot{p}_{vy} \end{bmatrix}.$ (41)

If the robot is not allowed to move in the $^0y_{nv}$ direction, the corresponding Pfaffian constraint is given by

$^0y_{nv}^T\, v_v = \begin{bmatrix} -\sin(q_{v1}+q_{v2}+q_{v3}) & \cos(q_{v1}+q_{v2}+q_{v3}) & 0 \end{bmatrix} \begin{bmatrix} ^0\dot{p}_{vx} \\ ^0\dot{p}_{vy} \\ ^0\omega_n \end{bmatrix}.$ (42)

By choosing the distribution

$\Delta = \mathrm{span}\{g_1, g_2\}$ (43)

where

$g_1 = \begin{bmatrix} \cos(q_{v1}+q_{v2}+q_{v3}) \\ \sin(q_{v1}+q_{v2}+q_{v3}) \\ 0 \end{bmatrix}$ (44)
$g_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$ (45)

as a basis for the null-space of the Pfaffian matrix, the equivalent control system

$\dot{q}_v = g_1 u_1 + g_2 u_2$ (46)

can be constructed, representing the directions of allowed motion, [42].

It is easy to verify that the Lie bracket is

$[g_1, g_2] = \begin{bmatrix} \sin(q_{v1}+q_{v2}+q_{v3}) \\ -\cos(q_{v1}+q_{v2}+q_{v3}) \\ 0 \end{bmatrix},$ (47)

which shows the non-involutivity of the distribution and thus establishes the non-holonomic nature of the constraints according to Proposition 2.1.

Notice also that if the degrees of freedom were 2, the null-space would be of dimension 1, which is necessarily involutive, and the constraints would be holonomic.
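The involutivity test of Proposition 2.1 can also be checked symbolically. The sketch below reproduces the Lie bracket of Eq. (47) for the planar example above and verifies that the bracket does not lie in the span of $g_1$ and $g_2$, so the constraint is non-holonomic.

```python
import sympy as sp

q1, q2, q3 = sp.symbols('q1 q2 q3')
q = sp.Matrix([q1, q2, q3])
s = q1 + q2 + q3

g1 = sp.Matrix([sp.cos(s), sp.sin(s), 0])   # Eq. (44): allowed translation
g2 = sp.Matrix([0, 0, 1])                   # Eq. (45): allowed rotation

def lie_bracket(ga, gb, q):
    # [ga, gb] = (d gb/dq) ga - (d ga/dq) gb   (Eq. (14))
    return gb.jacobian(q) * ga - ga.jacobian(q) * gb

bracket = sp.simplify(lie_bracket(g1, g2, q))
print(bracket.T)   # -> [sin(q1+q2+q3), -cos(q1+q2+q3), 0], Eq. (47)

# Involutivity check: the rank of [g1 g2 [g1,g2]] is 3 > 2 = dim(span{g1, g2}),
# so the bracket is not in the distribution and the constraint is non-holonomic.
M = sp.Matrix.hstack(g1, g2, bracket)
print(sp.simplify(M.det()))   # nonzero determinant
```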

3.2 Position-force controllers design

A correct haptic rendering largely depends on the force control algorithm. In classic haptic systems, the common solution is to define indirect impedance or compliance control schemes. In contrast, in this section, we present two hybrid control algorithms for haptic interaction with virtually constrained systems. As mentioned in Section 2, the usual practice is to associate the position of the haptic interface directly with that of the virtual avatar. In that case, a position control scheme is unnecessary, as the operator’s movements are reflected accurately in the graphical application. However, in the proposed approach, the task-space coordinates of the virtual environment depend on the correct tracking between the position of the haptic robot and that of the virtual one, i.e., the control algorithm generates the virtual environment itself. This is due to the inclusion of the virtual robot dynamics and to the fact that the operator should feel the virtual tool thanks to the masking effect. To address this, we explore a control scheme used in teleoperation to achieve both position and force tracking. A block diagram of this scheme is shown in Figure 10.

Figure 10.

Block diagram of the proposed scheme.

Consider once again $i, j = m, v$ with $i \neq j$. We define

$q_{di}(t) \triangleq q_j(t)$ (48)

as the desired position trajectories, and

$\dot{q}_{di}(t) \triangleq \dot{q}_j(t)$ (49)

as the desired velocity trajectories, i.e., if i=m, then j=v and vice versa.

We define the corresponding tracking error as

$\Delta q_i \triangleq q_i - q_{di}.$ (50)

Based on [48], we propose

$s_i = \dot{q}_i - \dot{q}_{di} + \Lambda_{xi}\,\Delta q_i$ (51)

and

$\dot{\sigma}_i = K_{\beta i}\, s_i + \mathrm{sign}(s_i),$ (52)

where $K_{\beta i} \in \mathbb{R}^{n\times n}$ is a positive definite diagonal matrix and

$\mathrm{sign}(s_i) = \begin{bmatrix} \mathrm{sign}(s_{i1}) & \cdots & \mathrm{sign}(s_{in}) \end{bmatrix}^T$ (53)

with $s_{ij}$ the $j$-th element of $s_i$, for $j = 1, \ldots, n$.

Now, by considering the velocity reference as

$\dot{q}_{ri} = \dot{q}_{di} - \Lambda_{xi}\,\Delta q_i - K_{\gamma i}\,\sigma_i,$ (54)

where $K_{\gamma i} \in \mathbb{R}^{n\times n}$ is a positive definite diagonal matrix.

We define also the auxiliary variable

$s_{ai} = \dot{q}_i - \dot{q}_{ri}.$ (55)

Supposing that both robots are in free motion, the control laws for the master and the virtual robot are proposed as

$\tau_m = -K_{am}\,\dot{q}_m - K_{pm}\,s_{am}$ (56)
$\tau_v = -K_{av}\,\dot{q}_v + K_{pv}\,s_{av},$ (57)

respectively, where $K_{am}$, $K_{pm}$, $K_{av}$, and $K_{pv}$ are positive definite diagonal matrices.
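A minimal numeric sketch of one step of the free-motion laws (48)-(57) is given below. The gains and states are illustrative values, not those used in the experiments, and $\sigma_i$ is updated by a simple Euler step.

```python
import numpy as np

n, dt = 3, 0.002                                       # joints and sample time (2 ms)
Lam, Kbeta, Kgamma = 5.0 * np.eye(n), 2.0 * np.eye(n), 0.5 * np.eye(n)
Ka, Kp = 0.1 * np.eye(n), 20.0 * np.eye(n)

def free_motion_torque(q_i, qd_i, q_j, qd_j, sigma_i, kp_sign):
    """One step of the free-motion law: robot j provides the desired
    trajectory (Eqs. (48)-(49)); kp_sign = -1 for the master (Eq. (56))
    and +1 for the virtual robot (Eq. (57))."""
    dq = q_i - q_j                                          # Eq. (50)
    s_i = (qd_i - qd_j) + Lam @ dq                          # Eq. (51)
    sigma_i = sigma_i + dt * (Kbeta @ s_i + np.sign(s_i))   # Eq. (52), Euler step
    qd_ri = qd_j - Lam @ dq - Kgamma @ sigma_i              # Eq. (54)
    s_ai = qd_i - qd_ri                                     # Eq. (55)
    tau = -Ka @ qd_i + kp_sign * (Kp @ s_ai)                # Eqs. (56)-(57)
    return tau, sigma_i

q_m, qd_m = np.array([0.10, 0.20, -0.10]), np.zeros(n)
q_v, qd_v = np.array([0.12, 0.18, -0.08]), np.zeros(n)
tau_m, sig_m = free_motion_torque(q_m, qd_m, q_v, qd_v, np.zeros(n), kp_sign=-1.0)
tau_v, sig_v = free_motion_torque(q_v, qd_v, q_m, qd_m, np.zeros(n), kp_sign=+1.0)
print(tau_m, tau_v)
```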

3.2.1 Virtual holonomic constraints

To approximate what happens during the tactile interaction of a point probe with a rigid surface, we consider the one-dimensional case ($\varphi_v: \mathbb{R}^n \to \mathbb{R}$). As mentioned in Section 3.1, this is ideally the normal force generated at a single contact point, where other reactions, such as friction or tangential forces, can be omitted [32]. In order to reproduce this effect, we use the implicit surface method, in which the scalar $\lambda_v$ represents the normal force of the virtual manipulator over the virtual surface. To reflect such contact force, a Lagrange multiplier is computed with the Generic Penalty Method as used in [49], i.e.,

$\lambda_v = \alpha_v\left(\ddot{\varphi}_v(q_v) + 2\xi\omega_n\dot{\varphi}_v(q_v) + \omega_n^2\varphi_v(q_v)\right)$ (58)

where $\xi, \omega_n > 0$. Considering that force measurements are available at the master side in Cartesian coordinates, and mapping the virtual force to this space as

$F_v = J_{\varphi x_v}^T(x_v)\,\lambda_v,$ (59)

we use a force controller for the virtual reality system. Consider that $F_h \in \mathbb{R}^3$ is the normal force component measured with a force sensor mounted at the master robot end-effector. Based on (23), (24), and (58), a PI-like force control action can be used for the virtual reality system.

By defining

$F_{di}(t) \triangleq F_j(t)$ (60)

as the desired force trajectory where if i=h then j=v and vice versa, as stated before.

The force tracking errors are

$\Delta F_i = F_i - F_{di}$ (61)

and the corresponding integral, the momenta tracking error, is

$\Delta \rho_i = \int_0^t \Delta F_i \, dt.$ (62)

Note that we use the standard notation ρ for momenta, although the same notation is also used for position; there is no confusion because the momenta always appear as Δρ. Instead of Eqs. (56) and (57), the control laws for the master and the virtual robot are now given by

$\tau_m = Y_m(q_m, \dot{q}_m, \ddot{q}_m)\,\theta_m - K_{am}\dot{q}_m - K_{pm}s_{am} + J_m^T(q_m)F_v - K_{fm}\Delta\rho_h$ (63)
$\tau_v = -K_{av}\dot{q}_v + K_{pv}s_{av} - J_v^T(q_v)F_h - K_{fv}\Delta\rho_v,$ (64)

respectively, where $K_{fi} \in \mathbb{R}^{n\times n}$ are diagonal matrices. In order for the operator to feel the virtual tool in contact with the virtual environment, a dynamic cancelation of the master manipulator dynamics can be carried out, as shown in (63).
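The holonomic force-rendering step of Eqs. (58) and (59) can be sketched as follows, with illustrative values; in practice $\varphi_v$, its time derivatives, and the constraint Jacobian come from the collision detection algorithm and the virtual robot state.

```python
import numpy as np

def lagrange_multiplier(phi, phid, phidd, alpha_v=1.0, xi=1.0, wn=60.0):
    # Eq. (58): penalty-like multiplier driving the constraint phi_v -> 0
    return alpha_v * (phidd + 2.0 * xi * wn * phid + wn**2 * phi)

def cartesian_force(J_phi_x, lam):
    # Eq. (59): map the scalar multiplier to a Cartesian force
    return J_phi_x.T * lam

phi, phid, phidd = -0.002, -0.01, 0.0      # small penetration of the constraint
J_phi_x = np.array([0.0, 0.0, 1.0])        # unit normal of the constraint (k = 1)
lam = lagrange_multiplier(phi, phid, phidd)
F_v = cartesian_force(J_phi_x, lam)
print(lam, F_v)
```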

3.2.2 Virtual non-holonomic constraints

In contrast with the holonomic case, when the constraints are non-holonomic they cannot be defined as a function of a set of generalized coordinates, as stated by the Frobenius theorem. As a result, the Lagrange multipliers cannot be computed as in (58). These constraints are defined in the form (32) or, equivalently, (34). One problem arising from these constraints is how to compute the Lagrange multipliers that satisfy (36). These multipliers represent the forces required to maintain such constraints. Unfortunately, most of the methods used to calculate the Lagrange multipliers are designed for systems with holonomic constraints [30, 49, 50] and, therefore, require a position-level definition of the constraints as in (25) or (27). As stated in (45), the calculation presented in [42] can be used for this case. However, it is well known that this solution is unstable since its underlying mechanism is a second-order integrator with zero input. In this work, a modification of the approach used in [50] is proposed as follows. For simplicity, we define

$H_v = H_v(q_v)$ (65)
$C_v = C_v(q_v, \dot{q}_v)$ (66)
$g_v = g_v(q_v)$ (67)
$A_v = A_v(q_v)$ (68)
$\psi_v = \psi_v(q_v, \dot{q}_v) = A_v(q_v)\,\dot{q}_v$ (69)

Then, the Lagrange multipliers can be computed as

$\lambda_v = \left(A_v H_v^{-1} A_v^T\right)^{-1}\left[\dot{\psi} - \dot{A}_v \dot{q}_v - A_v H_v^{-1}\left(\tau_v - C_v\dot{q}_v - D_v\dot{q}_v - g_v\right)\right],$ (70)

where the constraints are forced to satisfy

$\dot{\psi} + 2\alpha_v\,\psi_v + \beta_v\int_{t_0}^{t}\psi_v\,d\vartheta = 0$ (71)

with αv,βv>0 chosen to ensure rapid convergence to the origin.

Note that the constraint function ψ can be defined in terms of the velocities of the end-effector,

$\psi = \psi(x_v, v_v) = A_v(x_v)\,v_v.$ (72)

Therefore, the initial condition of the integral term on the left-hand side of (71) can be set to zero. Each element of $\lambda_v$ is a function of $q_v$, $\dot{q}_v$, and $\tau_v$, since the constraints change with the configuration, the velocity, and the virtual applied force.
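A sketch of the multiplier computation of Eqs. (70) and (71) with small illustrative matrices follows; the actual $H_v$, $C_v$, $D_v$, $g_v$, and $A_v$ come from the virtual robot model.

```python
import numpy as np

def nonholonomic_multipliers(H, C, D, g, A, Adot, qd, tau, psi_int,
                             alpha=20.0, beta=100.0):
    """Eqs. (70)-(71): Lagrange multipliers that force psi = A qd to decay to zero."""
    psi = A @ qd
    psid_des = -2.0 * alpha * psi - beta * psi_int          # desired psi dynamics, Eq. (71)
    Hinv = np.linalg.inv(H)
    rhs = psid_des - Adot @ qd - A @ Hinv @ (tau - C @ qd - D @ qd - g)
    return np.linalg.solve(A @ Hinv @ A.T, rhs)

# Illustrative 3-DOF values with a single (k = 1) constraint
H  = np.diag([1.0, 0.8, 0.3]); C = np.zeros((3, 3)); D = 0.05 * np.eye(3)
g  = np.array([0.0, 2.0, 0.5])
A  = np.array([[0.0, 1.0, 0.0]]); Adot = np.zeros((1, 3))
qd = np.array([0.1, 0.02, -0.05]); tau = np.array([0.5, 1.0, 0.2])
lam = nonholonomic_multipliers(H, C, D, g, A, Adot, qd, tau, psi_int=np.zeros(1))
print(lam)
```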

By substituting (70) into the equation of motion (36), a complete description of the dynamics of the system is obtained. Regarding force, the sensor measurements $F_h$ on the master side can be used to calculate the real Lagrange multiplier as

$\lambda_m = \left(A_v A_v^T\right)^{-1} A_v\, F_h.$ (73)

By defining

$\lambda_{di}(t) \triangleq \lambda_j(t)$ (74)

as the desired force trajectory in joint space. The corresponding integral is

$\Delta\lambda_i = \int_0^t \left(\lambda_i - \lambda_{di}\right) dt.$ (75)

Finally, instead of (56) and (57), the proposed position-force controllers for the master and the virtual robot, with the virtual dynamic system subject to non-holonomic constraints, are

$\tau_m = Y_m(q_m, \dot{q}_m, \ddot{q}_m)\,\theta_m - K_{am}\dot{q}_m - K_{pm}s_{am} + A_v^T(q_v)\lambda_v - K_{fm}\Delta\lambda_m$ (76)
$\tau_v = -K_{av}\dot{q}_v + K_{pv}s_{av} - A_v^T(q_v)\lambda_m - K_{fv}\Delta\lambda_v.$ (77)

Note that the novelty of the approach is not the control scheme, since well-known techniques are employed; rather, it lies in the effective use of non-holonomic constraints to describe penetrable virtual surfaces. Therefore, a technical stability proof is not provided, but a set of reliable experiments is presented in the next section to validate the proposed approach.

3.3 Visual components of the virtual environment

A fundamental part of the developed virtual reality system is visual feedback. In dynamic systems and control research there is usually little interest in including such elements, but in real-world applications, such as surgery simulators, it is essential. Nowadays, such developments compose virtual environments by merging several numerical techniques that, combined with the speed of today’s processors, give virtual objects and surfaces a realism that before would have seemed impossible to reach. Since the goal of this work is to show how a teleoperation control scheme can be used in a virtual reality system, we design the environment using the fundamentals of computer graphics. The tool used to design the virtual environment was the graphics standard OpenGL 2.0, an API (software library) for accessing features in graphics hardware. It contains different commands used to specify the objects, images, and operations needed to produce interactive three-dimensional graphic applications [51]. Among those operations is the possibility of giving texture and lighting to the virtual objects, besides applying position and orientation changes to the scene’s camera, i.e., the way the operator sees the images on the computer screen regarding height, depth, and viewing angles such as pitch, roll, and yaw. As we can see in Figure 11, the environment of the developed application comprises a motionless floating sphere and the virtual avatar of the system.

Figure 11.

Virtual environment developed in OpenGL 2.0.

For simplicity’s sake, there are no changes in the camera’s position and orientation, but lighting and texture were given to the scene. A notable difference can be appreciated in Figure 12, where the avatar has no lighting, the quality of its texture is lower, and the background color changes.

Figure 12.

Virtual avatar of the system.

Here, the avatar is formed by a set of aligned cylinders whose position and orientation are directly related to those of the end-effector of the haptic interface. Since OpenGL has instructions to create elements from primitives, the generation of such cylinders and the floating sphere was straightforward.

3.3.1 Rigid sphere

It is important to remember that, from a haptic rendering approach, we established the classification of nonpenetrable and penetrable objects for rigid and deformable objects, respectively. For the rigid object, the GLUT instruction glutSolidSphere() automatically builds a solid sphere with a specific radius, given the number of subdivisions around the z axis (sectors) and along it (stacks), as we can see in Figure 13.

Figure 13.

Sectors and stacks of a solid sphere.

The effect of rigidity is given because the vertex positions are not changed when contact with the virtual avatar occurs. However, producing the haptic effect of highly rigid objects for the operator is difficult since an impedance device is used. For this reason, it is important to establish the control scheme (63) and (64), which compensates, as far as possible, for the hardware limitations and makes the operator feel that contact with a rigid object is produced.

3.3.2 Deformable sphere

The real challenge comes when the virtual object is deformable. Saying that an object is deformable has many implications, mainly related to mechanics. The visual effect of deformation is more complex than that produced by stiffness because a real deformable object has an infinite number of degrees of freedom. For this reason, virtual objects need a high resolution, which gives a better rendering quality to visual and haptic feedback [52] and more realism to the application. However, this is always limited by the available computational resources. We must run both the graphics and the control algorithm in a single program, with the smallest sample time that the system allows. If the graphics part occupies more processing resources, this sampling time will increase and there will be unwanted effects, delays, and, finally, an application crash. We drew the sphere by defining four vertices a, b, c, and d, which form a plane that is replicated iteratively according to a number of parallels p and meridians m defined by the operator. The number of iterations is intrinsically linked to the resolution of the sphere. Figure 14 shows the deformable sphere with different resolutions. In the sphere on the left, the values used were p = m = 50, while in the central sphere they were p = m = 100. For the sphere on the right, the values were p = m = 150.

Figure 14.

Surface mesh generated using different values for p and m. (a) p = m = 50. (b) p = m = 100. (c) p = m = 150.

As mentioned before, the higher the resolution of the sphere, the more realistic the application. However, the computational load of the highest resolution did not allow correct performance of the graphic or the haptic part. For this reason, the resolution p = m = 100 was chosen. For simplicity's sake, the sphere was built by placing two hemispheres, one above the other, about a common axis. The contact with the virtual avatar arises at a single point (x_v, y_v, z_v) computed parametrically as

$$x_v = r\cos\alpha\cos\beta \tag{78}$$
$$y_v = r\cos\alpha\sin\beta \tag{79}$$
$$z_v = r\sin\alpha \tag{80}$$

where r is the radius of the sphere, and α and β are the angles over whose ranges the parallels and meridians are drawn. For code optimization, meridians span the range [0°, 360°] and parallels the range [0°, 180°]. These ranges correspond to the upper hemisphere, while the lower one is drawn by taking the negative part of the z_v axis.

Every vertex a, b, c, and d must take the values given by Eqs. (78)-(80) in order to visualize its initial position in the virtual environment. To improve the interaction with the virtual avatar, we include an offset r_off in the equations that define the vertices as

$$v_{nx} = (r - r_{off})\cos\alpha\cos\beta \tag{81}$$
$$v_{ny} = (r - r_{off})\cos\alpha\sin\beta \tag{82}$$
$$v_{nz} = (r - r_{off})\sin\alpha \tag{83}$$

where n = a, b, c, d and r_off takes an arbitrary value defined experimentally. The contact with the virtual avatar occurs on some plane defined iteratively by (81)-(83), using a collision detection algorithm that consists of validating each vertex of every plane and comparing each component of (x_v, y_v, z_v) with the position of the master robot p_m; a sketch is given below. If those values lie within the range of the plane, the avatar is in contact with the sphere. The next step is to produce the effect of motion of the contact plane. Algorithm 1 shows the pseudocode of this process, which uses an auxiliary normal force to define the direction in which the plane moves. This force is not the one calculated using the non-holonomic constraint, but the gradient of Eq. (29), and we use it only for the visual effect. The process described above moves only one plane and, if that happens, the deformation effect is not realistic. For this reason, it is necessary to move the adjacent planes: the more planes that move, the more realistic the deformation effect. However, in the application developed we changed only the position of the planes surrounding the contact plane, since the method is purely geometric and not based on continuum mechanics; using such an approach would imply considerations that are beyond this research.
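The following sketch illustrates Eqs. (81)-(83) and the bounding-range style of collision test described above; the data structures and names are illustrative assumptions rather than the exact implementation.

```cpp
// Sketch of Eqs. (81)-(83) and a plane-wise collision test against the
// master position p_m. Structure and names are illustrative only.
#include <cmath>

struct Vec3 { double x, y, z; };

const double PI = 3.14159265358979323846;

// Vertex of the upper-hemisphere mesh at angles alpha, beta [deg],
// offset inwards by r_off, as in Eqs. (81)-(83).
Vec3 meshVertex(double r, double rOff, double alphaDeg, double betaDeg)
{
    double a = alphaDeg * PI / 180.0;
    double b = betaDeg  * PI / 180.0;
    Vec3 v;
    v.x = (r - rOff) * std::cos(a) * std::cos(b);
    v.y = (r - rOff) * std::cos(a) * std::sin(b);
    v.z = (r - rOff) * std::sin(a);
    return v;
}

// Collision test: the avatar touches the plane spanned by vertices a, b, c, d
// if the master position p_m lies inside the plane's bounding ranges.
bool planeTouched(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d,
                  const Vec3& pm)
{
    auto minOf = [](double p, double q, double s, double t)
                 { return std::fmin(std::fmin(p, q), std::fmin(s, t)); };
    auto maxOf = [](double p, double q, double s, double t)
                 { return std::fmax(std::fmax(p, q), std::fmax(s, t)); };

    return pm.x >= minOf(a.x, b.x, c.x, d.x) && pm.x <= maxOf(a.x, b.x, c.x, d.x)
        && pm.y >= minOf(a.y, b.y, c.y, d.y) && pm.y <= maxOf(a.y, b.y, c.y, d.y)
        && pm.z >= minOf(a.z, b.z, c.z, d.z) && pm.z <= maxOf(a.z, b.z, c.z, d.z);
}
```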

Algorithm 1 also shows the part of the pseudocode where the adjacent planes are modified. This does not occur at the same time as the modification of the contact plane, since at that point the code has not yet created them. Figure 15 shows the visual effect produced by the motion of both the contact plane and the adjacent planes. It is important to note that the implemented code still needs to be optimized and, above all, adapted to the force rendering algorithm based on non-holonomic constraints.

Figure 15.

Deformation effect of the sphere. (a) Contact. (b) Deformation effect.


4. Experimental platform

The experimental platform comprises a Geomagic Touch haptic robot with six revolute joints, of which only the first three are actuated. An ATI Nano-17 six-axis force sensor is mounted on the last link, as shown in Figure 16. A PC executes the control loop with a sample time of T = 2 ms.

Figure 16.

Experimental platform.

As mentioned in Section 3, the virtual environment comprises a sphere developed using the graphic standard OpenGL 2.0. We should note that both the control algorithm and the graphic simulation run in the same application, developed in Visual Studio/C++; a possible loop structure is sketched below.
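As a rough illustration of how the two parts can coexist in one application, the following sketch runs a control step every 2 ms and redraws the scene at a lower rate; the function names and the redraw ratio are placeholders, not the chapter's actual code.

```cpp
// Sketch of one way to interleave a fixed 2 ms control step with the
// graphic rendering in a single application. Function names are placeholders.
#include <chrono>
#include <thread>

void controlStep();   // read haptic pose, compute torques/multipliers, write outputs
void renderScene();   // OpenGL drawing of the sphere and the avatar

void mainLoop()
{
    using clock = std::chrono::steady_clock;
    const auto T = std::chrono::milliseconds(2);   // control sample time
    auto next = clock::now();
    int steps = 0;

    for (;;)
    {
        controlStep();                             // runs every T = 2 ms

        // Redraw at a lower rate (e.g., every 15th control step, about 30 Hz)
        // so the graphics never starve the control loop.
        if (++steps % 15 == 0)
            renderScene();

        next += T;
        std::this_thread::sleep_until(next);
    }
}
```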

A practical limitation of the Geomagic Touch robot is that only its first three joints are actuated. Therefore, a projection of both the force reflection and the controller torques is necessary, i.e., the contribution of the last two joint torques is neglected. The virtual robot does not have this limitation and is therefore considered to be fully actuated. The master robot limitation is not very restrictive, since the virtual environment considers only force feedback and not end-effector torque feedback, avoiding the problem of sensor/actuator asymmetry in haptic interfaces [53]. Moreover, the contribution of the last two joints to the force reflection is much smaller in magnitude than that of the first three joints.

4.1 Task description

A detailed description of the interaction process between the virtual tool and the virtual environment is presented, simulating a rigid and a penetrable sphere separately. Since the goal of this research was to extend the use of the control scheme to medical training applications, the avatar was given the shape of a needle, as we can see in Figure 11. In medicine, procedures that use this tool are very common, with needle insertion being the most studied and simulated one [54]. In this procedure, the operator takes a sterile needle and slowly brings it closer to the patient; once in contact, the operator must be very careful and, through tactile sensation, determine whether soft tissue (muscle, organ) or rigid tissue (bone) has been reached. In both cases, the contact surface produces a reaction force opposing the operator's movements.

While on a rigid surface the force does not let the needle penetrate the tissue, on a penetrable surface this is possible. The force behaviors are different: in the first case there is a dominant contribution in the normal direction, which still allows the operator to move the needle laterally over the surface. In the second case, the normal force contribution is smaller and the surrounding tissue does not allow moving the needle in the lateral directions.

In the approach presented, we assume the virtual tool is attached to the end-effector of a five-degrees-of-freedom manipulator, which is not visible in the graphic simulation. This may seem counterintuitive because, evidently, a real needle does not have such dynamics. We use the robot model as a demonstrative example of other medical tools, such as an endoscope, a resectoscope, or forceps, which, attached to a teleoperated surgical robot arm, have complex dynamics that must be modeled. The graphic simulations in those cases include pulling, gripping, clamping, and cutting, so it is convenient to have a complete description of both the kinematics and the dynamics of the tool-tissue interaction [3]. The task starts with the Geomagic Touch robot in its home position. The operator grasps the master robot stylus using the force sensor adapter tip and then gently brings it closer to the virtual surface; the desired free-motion trajectory is imposed in this way. The virtual robot follows this trajectory in the virtual environment, with no scaling between the virtual and the real workspaces. Both the avatar motion and the virtual surface are perceived visually through a computer screen. When the collision-detection algorithm detects contact with the surface, the force-response algorithm generates a virtual force trajectory by computing the corresponding Lagrangian multipliers, employing either (58) or (70). The operator perceives an interaction force exerted by the master robot and registered by the Nano-17 force sensor until the contact is over. Finally, the operator returns the sensor adapter to its initial position, thus completing the task.

4.2 Holonomic constraint experiment

For simplicity’s sake, the surface used to test the validity of the proposed approach is a sphere described by

$$\varphi_v(\mathbf{x}_v) = (x_v - h)^2 + (y_v - k)^2 + (z_v - l)^2 - r^2 = 0, \tag{84}$$

where $\mathbf{x}_v = [x_v\;\; y_v\;\; z_v]^T$ is the vector of virtual environment task-space coordinates, r = 0.1 [m] is the radius, and (h, k, l) = (0.4, 0, 0) [m] are the sphere center coordinates. It is important to note that, in contrast with other works, we added a third dimension z_v in order to heighten the realism of the virtual reality application. For example, in [44, 49], the authors consider only two dimensions to test different control schemes for a haptic and a teleoperation system, respectively. The gains of the master manipulator, described in (63), and of the virtual manipulator are shown in Table 1.

Variable (control law) | Value | Variable (virtual manipulator) | Value
K_am | 0.0550 I | K_av | 0.20 I
K_pm | 0.0055 I | K_pv | diag(0.2, 0.2, 0.2, 0.1, 0.1)
K_fm | 10.050 I | K_fv | 0.20 I
Λ_xm | 0.2500 I | Λ_xv | 20.0 I
K_βm | 0.0100 I | K_βv | 1.00 I
K_γm | 0.0150 I | K_γv | 0.20 I

Table 1.

Gains from the control law and the virtual manipulator (Holonomic constraint experiment).

I is the identity matrix, which has the appropriate dimensions.

Finally, by using the Generic Penalty Method, the surface parameters are α_v = 0.002, ξ = 100, and ω_n = 200.
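For illustration, the sketch below evaluates the spherical constraint (84), its gradient (which gives the contact normal direction), and a second-order penalty approximation of the contact multiplier of the general form associated with the Generic Penalty Method, using the parameters above. The chapter's exact expression, Eq. (58), may differ; the names and structure here are illustrative assumptions only.

```cpp
// Sketch: evaluation of the holonomic constraint (84), its gradient, and a
// penalty-type approximation of the contact multiplier using alpha_v, xi and
// w_n. Illustrative only; the chapter's exact expression (Eq. (58)) may differ.
#include <cmath>

struct Vec3 { double x, y, z; };

const double h = 0.4, k = 0.0, l = 0.0;   // sphere center [m]
const double r = 0.1;                     // sphere radius [m]

// phi_v(x_v): zero on the surface, negative inside, positive outside.
double phi(const Vec3& xv)
{
    const double dx = xv.x - h, dy = xv.y - k, dz = xv.z - l;
    return dx * dx + dy * dy + dz * dz - r * r;
}

// Gradient of phi_v: the (unnormalized) outward normal used for force rendering.
Vec3 gradPhi(const Vec3& xv)
{
    return { 2.0 * (xv.x - h), 2.0 * (xv.y - k), 2.0 * (xv.z - l) };
}

// Second-order penalty approximation of the contact multiplier, assumed here
// to take the common form alpha_v*(phi_dd + 2*xi*w_n*phi_d + w_n^2*phi).
double lambdaPenalty(double phiVal, double phiDot, double phiDDot)
{
    const double alpha_v = 0.002, xi = 100.0, w_n = 200.0;
    return alpha_v * (phiDDot + 2.0 * xi * w_n * phiDot + w_n * w_n * phiVal);
}
```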

4.3 Non-holonomic constraint experiment

As mentioned before, we cannot express a deformable surface implicitly, even when the operator perceives it as a sphere both visually and haptically. We use a discrete representation similar to that presented in [52], where the surface is assumed to comprise many neighboring planes defined by shared nodes. We propose a technique that consists of iteratively choosing a small neighborhood of planes where the contact will occur, depending on the position of the virtual tool. Subsequently, we associate the Lagrange multipliers in Eq. (70) with a pair of planes using the impulse-based technique for multiple rigid body simulations [55]. With this technique, the micro-collisions occur only in the chosen vicinity of the sphere, and the Lagrange multipliers replace the impulses that prevent body interpenetration. In the collision detection algorithm, the implicit surface representation of Eq. (84) replaces the convex polyhedra decomposition [56]. We did this for simplicity and to reduce the computational cost of the application; otherwise, the control algorithm sample time would increase. Consider the case where the needle is inside the sphere: it may not move laterally, but it may pivot to change its orientation.

This kind of scenario is adequately described by employing non-holonomic constraints. As mentioned in Section 1, non-holonomic constraints have been little exploited to represent interaction with penetrable surfaces. For example, in [57] it is claimed that a surgeon's scalpel could be modeled with both holonomic and non-holonomic constraints, by limiting the depth of its incision and the direction of its motion, respectively. However, no analysis or modeling of this process is given in that work.

The most representative proposal is the derivation of the non-holonomic generalized unicycle model presented in [58], where a coordinate-free representation is used to model the insertion of a flexible needle into soft tissue. We employed a similar approach using the homogeneous matrix representation, but taking into consideration both the kinematic and the dynamic model of the virtual robot and the fact that non-holonomic constraints are more intuitively obtained when defined in task-space coordinates. The computation of the Lagrangian multipliers for non-holonomic constraints proposed in (71) is an important improvement with respect to the cited works. The experiment comprised a five-degrees-of-freedom virtual manipulator interacting with a deformable sphere. Once in contact, the end-effector may not move laterally, i.e., along the ${}^{0}y_{5v}$ and ${}^{0}z_{5v}$ axes of a conventional Denavit-Hartenberg assignment, but it may move along the ${}^{0}x_{5v}$ axis, i.e., along the pointing direction of the end-effector. The end-effector may rotate (pivot) to change direction, as a three-dimensional version of the non-holonomic unicycle, and the Pfaffian matrix is computed as

$$A_v(\mathbf{x}_v)\,\mathbf{v}_v = \begin{bmatrix} ({}^{0}y_{5v})^{T} & 0_{1\times 3} \\ ({}^{0}z_{5v})^{T} & 0_{1\times 3} \end{bmatrix} \begin{bmatrix} \dot{\boldsymbol{\rho}}_v \\ \boldsymbol{\omega}_v \end{bmatrix} = \mathbf{0} \tag{85}$$
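A sketch of how the Pfaffian matrix in (85) could be assembled from the lateral axes of the end-effector frame (expressed in the base frame) is shown below; the types and names are illustrative assumptions.

```cpp
// Sketch of the Pfaffian matrix in Eq. (85): each row is a lateral axis of the
// end-effector frame (in base coordinates) padded with zeros for the angular
// part, so A_v * [rho_dot; omega] = 0 forbids lateral sliding while allowing
// motion along the needle axis and pivoting. Names are illustrative.
#include <array>

using Vec3   = std::array<double, 3>;
using Mat2x6 = std::array<std::array<double, 6>, 2>;

Mat2x6 pfaffianMatrix(const Vec3& y5, const Vec3& z5)   // axes ^0y_5v, ^0z_5v
{
    Mat2x6 A = {};                 // zero-initialize both rows
    for (int i = 0; i < 3; ++i)
    {
        A[0][i] = y5[i];           // first row:  [ y5^T  0_{1x3} ]
        A[1][i] = z5[i];           // second row: [ z5^T  0_{1x3} ]
    }
    return A;                      // columns 3..5 remain zero
}
```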

The experiment has four steps, as shown in Figure 17. First, the virtual robot is in free motion and only the teleoperation part of the controller (77) is active. In the second part, the needle is inserted into the sphere. Next, in the third part, the needle is rotated approximately 45 degrees without changing its position. Finally, the needle is inserted deeper into the sphere with the new orientation.

Figure 17.

Interaction sequence between the avatar and the non-holonomic virtual surface. (a) Free motion. (b) Insertion. (c) Pivoting. (d) Insertion.

The force applied by the human operator in the lateral directions of the needle is difficult to measure directly with the force sensor. Instead, we take advantage of the projection of such forces onto the master manipulator joint torques and calculate λ_m from (20), (23), and (73).

The gains of the master manipulator, described in (76), and of the virtual manipulator are shown in Table 2.

Variable (control law) | Value | Variable (virtual manipulator) | Value
K_am | 0.0550 I | K_av | 0.20 I
K_pm | 0.0550 I | K_pv | diag(0.2, 0.2, 0.2, 0.1, 0.1)
K_fm | 0.010 I | K_fv | 2.00 I
Λ_xm | 0.2500 I | Λ_xv | 20.0 I
K_βm | 0.0150 I | K_βv | 1.00 I
K_γm | 0.0150 I | K_γv | 0.20 I

Table 2.

Gains from the control law and the virtual manipulator (Non-holonomic constraint experiment).

I is the identity matrix, which has the appropriate dimensions.


5. Conclusions

We presented a proposal on haptic interaction with holonomic and non-holonomic virtual constraints. Since extensive research on haptic interaction with rigid surfaces has already been presented in the literature, the principal aim was to reproduce, from a force feedback approach, the forces generated by the interaction with soft surfaces.

Throughout the document, we introduced the theory needed to establish an optimal relationship between visual and haptic interaction in the developed virtual reality system. The key lies in adapting the mathematical properties of the holonomic and non-holonomic constraints to the kinematics of the tool in contact with a nonpenetrable and a penetrable virtual surface, respectively. However, it is important to note that this adaptation was made for haptic feedback purposes only and considers the basic contact properties of simulated rigid and soft tissues.

Adapting a teleoperation control scheme to a virtual reality system was the strategy followed, since it allowed embedding a robot's dynamic model into the virtual environment. By doing this, we addressed the teleoperated slave system as a virtual robot in constrained motion, whose contact force is given by either holonomic or non-holonomic constraints.

We studied the differences between the two representations, both mathematically and intuitively, and the particularities of each one; among them, the fact that forces can be rendered using the Generic Penalty Method or the Pfaffian constraint matrix, respectively. Our interest centered on reproducing the contact with a soft surface by employing non-holonomic constraints.

The principal finding is that such a method can render forces similar to those arising from contact between a tool and a penetrable surface. In this way, non-holonomic constraints have been used to reproduce the operator's tactile sensations with practical meaning. This approach can eventually be used in virtual reality medical simulators, and the fundamentals to do so were presented throughout this document. However, adapting the developed method to complex virtual environments, such as those found in the medical field, requires more research both in control and in computer graphics. In the first case, it is necessary to adapt the presented teleoperation controller more accurately so that it fits current graphic computing methods; finding better ways to model force using non-holonomic constraints is essential to heighten the realism of the applications. Regarding graphic computing, it is necessary to design numerical methods that adapt more efficiently to the designed control algorithms and that are capable of running with continuum mechanics models.

Naturally, this will increase the computational load and require further analysis to establish the trade-off between real-time processing and control performance without sacrificing application realism. All of this demands a range of knowledge that extends beyond control and computer graphics alone.


Acknowledgments

The authors are grateful for the support provided by PRODEP-BUAP ID90934 and the DGAPA-UNAM under grant IN117820.


Algorithm 1: Deformable sphere

Require: Number of parallels p
Require: Number of meridians m
Require: Radius r
Require: Offset r_off
Define: Vertex (v_ax, v_ay, v_az)
Define: Vertex (v_bx, v_by, v_bz)
Define: Vertex (v_cx, v_cy, v_cz)
Define: Vertex (v_dx, v_dy, v_dz)
Evaluate: Δ_1 = 180/p and Δ_2 = 360/m

1: for all i = 0 to p/2 do
2:   for all j = 0 to m do
3:     α = i × Δ_1
4:     β = j × Δ_2
5:     Calculate the vertices of the upper hemisphere using (81)-(83).
6:     if avatar touches a superior plane then
7:       Move the contact plane through the auxiliary normal force
8:       Store data of the contact plane (based on p and m)
9:     end if
10:    Compute the vertices of the lower hemisphere
11:    if avatar touches an inferior plane then
12:      Move the contact plane using (81)-(83) and the negative part of the z_v axis.
13:      Store data of the contact plane (based on p and m)
14:    end if
15:   end for
16: end for
17: if avatar touches a plane then
18:   Reorder the adjacent planes
19: end if
20: for all i = 0 to p/2 do
21:   for all j = 0 to m do
22:     Draw plane
23:   end for
24: end for

References

1. Avgousti S, Christofouru EG, Panaydes AS, Voskarides S, Novales C, Nouaille L, et al. Medical telerobotic systems: Current status and future trends. Biomedical Engineering Online. 2016;15(96):1-44. DOI: 10.1186/s12938-016-0217-7
2. Ballantyne GH, Moll F. The da Vinci telerobotic surgical system: The virtual operative field and telepresence surgery. The Surgical Clinics of North America. Dec 2003;6(83):1293-1304. DOI: 10.1016/S0039-6109(03)00164-6. PMID: 14712866
3. Basdogan C, De S, Kim J, Muniyandi M, Kim H, Srinivasan MA. Haptics in minimally invasive surgical simulation and training. IEEE Computer Graphics and Applications. 2004;24(2):56-64
4. Hannaford B, Rosen J, Friedman DW, King H, Roan P, Cheng L, et al. Raven-II: An open platform for surgical robotics research. IEEE Transactions on Biomedical Engineering. 2013;60(4):954-959
5. Dy M-C, Tagawa K, Hiromi TT, Masaru K. Collision detection and modeling of rigid and deformable objects in laparoscopic simulator. In: Proc. SPIE 9415, Medical Imaging 2015: Image-Guided Procedures, Robotic Interventions, and Modeling, 941525 1-6 (18 March 2015). Orlando, Florida, United States; 2015. DOI: 10.1117/12.2081344
6. Goertz RC, Thompson WM. Electronically Controlled Manipulator. Nucleonics (US) Ceased Publication. 1954;12(11). DOI: 10.1146/annurev.ns.04.120154.002153
7. Hollis RL, Salcudean S, Abraham DW. Toward a tele-nanorobotic manipulation system with atomic scale force feedback and motion resolution. In: IEEE Proceedings on Micro Electro Mechanical Systems, An Investigation of Micro Structures, Sensors, Actuators, Machines and Robots. Napa Valley, CA, USA; 1990. pp. 115-119. DOI: 10.1109/MEMSYS.1990.110261
8. Khatib O, Yeh X, Brantner G, Soe B, Kim B, Ganguly S, et al. Ocean One: A robotic avatar for oceanic discovery. IEEE Robotics Automation Magazine. 2016;23(4):20-29
9. Kim K, Song H, Suh J, Lee J. A novel surgical manipulator with workspace conversion ability for telesurgery. IEEE/ASME Transactions on Mechatronics. 2013;18(1):200-211
10. Hansen T et al. Implementing force-feedback in a telesurgery environment, using parameter estimation. In: 2012 IEEE International Conference on Control Applications. Dubrovnik, Croatia; 2012. pp. 859-864. DOI: 10.1109/CCA.2012.6402708
11. Sutherland IE. The ultimate display. In: Proceedings of the IFIP Congress. Vol. 2. London: Macmillan and Co; 1965. pp. 506-509
12. Brooks FP, Ouh-Young M, Batter JJ, Jerome Kilpatrick P. Project GROPE: Haptic displays for scientific visualization. SIGGRAPH Computer Graphics. 1990;24(4):177-185
13. Chu CP, Dani TH, Gadh R. Multimodal interface for a virtual reality based computer aided design system. In: Proceedings of International Conference on Robotics and Automation. Vol. 2. Albuquerque, NM, USA; 1997. pp. 1329-1334. DOI: 10.1109/ROBOT.1997.614321
14. Penn P, Petrie H, Colwell C, Kornbrot D, Furner S, Hardwick A. The haptic perception of texture in virtual environments: An investigation with two devices. In: Brewster S, Murray-Smith R, editors. Haptic Human-Computer Interaction. Haptic HCI 2000, Lecture Notes in Computer Science. Vol. 2058. Berlin, Heidelberg: Springer; 2001. DOI: 10.1007/3-540-44589-7_3
15. Hamza-Lup FG, Bogdan CM, Popovici DM, Costea OD. A survey of visuo-haptic simulation in surgical training. In: Proceedings on Mobile, Hybrid, and On-line Learning. Athens, Greece; 2011. pp. 57-62. ISBN: 978-1-61208-689-7
16. Massie T, Salisbury JK. The PHANTOM haptic interface: A device for probing virtual objects. In: American Society of Mechanical Engineers Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. Vol. 1. Chicago, IL: Semantic Scholar; 1994. pp. 295-301
17. Grange S, Conti F, Helmer P, Rouiller P, Baur C. Overview of the Delta haptic device. In: Eurohaptics '01. Vol. 1. Birmingham, England: Semantic Scholar; 2001
18. Conti F, Khatib O. A new actuation approach for haptic interface design. The International Journal of Robotics Research. 2009;28(6):834-848
19. Finch M, Chi VL, Taylor RM, Falvo M, Washburn S, Superfine R. Surface modification tools in a virtual environment interface to a scanning probe microscope. In: Proc. Symposium on Interactive 3D Graphics. Monterey, California, USA; 1995. pp. 13-24
20. Bliss JP, Tidwell PD, Guest MA. The effectiveness of virtual reality for administering spatial navigation training to firefighters. Presence: Teleoperators and Virtual Environments. 1997;6(1):73-86
21. Rizzo A et al. Virtual environment applications in clinical neuropsychology. In: Proceedings IEEE Virtual Reality 2000 (Cat. No.00CB37048). New Brunswick, NJ, USA; 2000. pp. 63-70. DOI: 10.1109/VR.2000.840364
22. Ruspini DC, Kolarov K, Khatib O. The haptic display of complex graphical environments. In: SIGGRAPH 97 Computer Graphics Proceedings, Annual Conference. Los Angeles, California; 1997. pp. 140-147
23. Salisbury K, Conti F, Barbagli F. Haptic rendering: Introductory concepts. IEEE Computer Graphics and Applications. 2004;14:24-32
24. Escobar-Castillejos D, Noguez J, Neri L, Magana A, Benes B. A review of simulators with haptic devices for medical training. Journal of Medical Systems. 2016;104(40):177-185
25. Duriez C, Dubois F, Kheddar A, Andriot C. Realistic haptic rendering of interacting deformable objects in virtual environments. IEEE Transactions on Visualization and Computer Graphics. 2006;12(1):1-12
26. Terzopoulus D, Platt J, Barr A, Fleischer K. Elastically deformable models. Computer Graphics. 1987;21(4):205-214
27. Sclaroff S, Pentland A. Generalized implicit functions for computer graphics. SIGGRAPH Computer Graphics. 1991;25(4):247-250
28. Minsky M, Ming O, Steele O, Brooks FP, Behensky M. Feeling and seeing: Issues in force display. SIGGRAPH Computer Graphics. 1990;24(2):235-241
29. Adams RJ, Hannaford B. Stable haptic interaction with virtual environments. IEEE Transactions on Robotics and Automation. 1999;15(3):465-474
30. Bayo E, Avello A. Singularity-free augmented Lagrangian algorithms for constrained multibody dynamics. Nonlinear Dynamics. 1994;5(2):209-231
31. Zilles CB, Salisbury JK. A constraint-based god-object method for haptic display. In: Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots. Vol. 3. Pittsburgh, PA, USA; 1995. pp. 146-151. DOI: 10.1109/IROS.1995.525876
32. Mavhash M, Hayward V. High-fidelity haptic synthesis of contact with deformable bodies. IEEE Computer Graphics and Applications. 2004;24(2):48-55
33. Castro-Díaz JD, Sánchez-Sánchez P, Gutiérrez-Giles A, Arteaga-Pérez MA, Pliego-Jiménez J. Experimental results for haptic interaction with virtual holonomic and nonholonomic constraints. IEEE Access. 2020;8:120959-120973
34. Monroy C, Kelly R, Arteaga M, Bugarin E. Remote visual servoing of a robot manipulator via Internet2. Journal of Intelligent and Robotic Systems. 2007;49:171-187
35. Rodríguez A, Basañez L, Colgate JE, Faulring EL. Haptic display of dynamic systems subject to holonomic constraints. In: IEEE International Conference on Intelligent Robots and Systems. Nice, France; 2008
36. Montagnat J, Delignette H, Ayache N. A review of deformable surfaces: Topology, geometry and deformation. Image and Vision Computing. 2001;19:1023-1040
37. Heredia SA, Harada K, Padilla-Castaneda M, Marques-Marinho M, Márquez-Flores JA, Mitsuishi M. Virtual reality simulation of robotic transsphenoidal brain tumor resection: Evaluating dynamic motion scaling in a master-slave system. The International Journal of Medical Robotics and Computer Assisted Surgery. 2019;15(1):1-48. DOI: 10.1002/rcs.1953. Epub 2018 Oct 18. PMID: 30117272; PMCID: PMC658796
38. Maurel W, Wu Y, Magnenat N, Thalman D. Biomechanical Models for Soft Tissue Simulation. Heidelberg, Germany: Springer; 1998
39. Faure F, Duriez C, Delingette H, Allard J, Gilles B, Marchesseau S, et al. SOFA: A multi-model framework for interactive physical simulation. In: Payan Y, editor. Soft Tissue Biomechanical Modeling for Computer Assisted Surgery. Studies in Mechanobiology, Tissue Engineering and Biomaterials. Vol. 11. Berlin, Heidelberg: Springer; 2012. DOI: 10.1007/8415_2012_125
40. Selig JM. Geometric Fundamentals of Robotics. New York, USA: Springer Science and Business; 1996
41. Xiaoping Y, Sarkar N. Unified formulation of robotic systems with holonomic and nonholonomic constraints. IEEE Transactions on Robotics and Automation. 1998;14:640-650
42. Murray RM, Li Z, Sastry SS. A Mathematical Introduction to Robotic Manipulation. Boca Raton, Florida, USA: CRC Press; 1994
43. Luca AD, Oriolo G. Modelling and control of nonholonomic mechanical systems. In: Angeles J, Kecskeméthy A, editors. Kinematics and Dynamics of Multi-Body Systems. CISM International Centre for Mechanical Sciences (Courses and Lectures). Vol. 360. Vienna: Springer; 1995. DOI: 10.1007/978-3-7091-4362-9_7
44. Faurling EL, Lynch KM, Colgate JE, Peshkin MA. Haptic display of constrained dynamic systems via admittance displays. IEEE Transactions on Robotics. 2007;23:101-111
45. Arteaga MA, Gutiérrez-Giles A, Pliego-Jiménez J. Bilateral teleoperation. In: Local Stability and Ultimate Boundedness in the Control of Robot Manipulators, Lecture Notes in Electrical Engineering. Vol. 798. Cham: Springer; 2022. DOI: 10.1007/978-3-030-85980-0_9
46. Garcia-Valdovinos LG, Parra-Vega V, Arteaga MA. Higher-order sliding mode impedance bilateral teleoperation with robust state estimation under constant unknown time delay. In: Proceedings, 2005 IEEE/ASME International Conference on Advanced Intelligent Mechatronics. Monterey, CA, USA. pp. 1293-1298. ISBN: 0-7803-9047-4. DOI: 10.1109/AIM.2005.1511189
47. Arteaga MA. Tracking control of flexible robot arms with a nonlinear observer. Automatica. 2000;36(9):1329-1337
48. Arteaga MA, Castillo-Sánchez A, Parra-Vega V. Cartesian control of robots without dynamic model and observer design. Automatica. 2006;42(3):473-480
49. Gutiérrez-Giles A, Arteaga-Pérez MA. Transparent bilateral teleoperation interacting with unknown remote surfaces with a force/velocity observer design. International Journal of Control. 2019;92(4):840-857. DOI: 10.1080/00207179.2017.1371338
50. Gudiño Lau J, Arteaga MA. Dynamic model and simulation of cooperative robots: A case study. Robotica. 2005;23:615-624
51. Shreiner D, Sellers G, Kessenich J, Licea-Kane B. OpenGL Programming Guide, Eighth Edition: The Official Guide to Learning OpenGL. The Khronos OpenGL ARB Working Group. Ann Arbor, Michigan: Addison-Wesley; 2013. ISBN: 978-0-321-77303-6
52. Zafer N, Yilmaz S. Nonlinear viscoelastic contact and deformation of freeform virtual surfaces. Advanced Robotics. 2016;30(4):246-257
53. Barbagli F, Salisbury K. The effect of sensor/actuator asymmetries in haptic interfaces. In: 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2003. HAPTICS 2003. Proceedings. Los Angeles, CA, USA; 2003. pp. 140-147. ISBN: 0-7695-1890-7. DOI: 10.1109/HAPTIC.2003.1191258
54. Yang C, Xie Y, Liu S, Sun D. Force modeling, identification, and feedback control of robot-assisted needle insertion: A survey of the literature. Sensors (Basel). 12 Feb 2018;18(2):1-48. DOI: 10.3390/s18020561. PMID: 29439539; PMCID: PMC5855056
55. Constantinescu D, Salcudean SE, Croft EA. Haptic rendering of rigid contacts using impulsive and penalty forces. IEEE Transactions on Robotics. 2005;21(3):309-323
56. Kim YJ, Otaduy MA, Lin MC, Manocha D. Six-degree-of-freedom haptic display using localized contact computations. In: Proceedings 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. HAPTICS. Vol. 2002. Orlando, FL, USA; 2002. pp. 209-216. ISBN: 0-7695-1489-8. DOI: 10.1109/HAPTIC.2002.998960
57. Faurling EL, Lynch KM, Colgate JE, Peshkin MA. Haptic interaction with constrained dynamic systems. In: Proceedings of the 2005 IEEE International Conference on Robotics and Automation. Barcelona, Spain; 2005. pp. 2458-2464. ISSN: 1050-4729. DOI: 10.1109/ROBOT.2005.1570481
58. Webster RJ III, Seob J, Cowan NJ, Chirikjian GS, Okamura AM. Nonholonomic modeling of needle steering. The International Journal of Robotic Research. 2006;25(5-6):509-525. DOI: 10.1177/0278364906065388

Notes

  • In computer graphics, texture refers to the feature of giving a color, or a combination of colors, to the objects.
