
Five Fingers Haptic Interface Robot HIRO: Design, Rendering, and Applications

Written By

Osama Halabi and Haruhisa Kawasaki

Published: 01 April 2010

DOI: 10.5772/8691

From the Edited Volume

Advances in Haptics

Edited by Mehrdad Hosseini Zadeh


1. Introduction

Haptic interfaces are devices that allow human-machine interaction through force and touch. Combined haptic and visual interfaces have been around for decades. Yet, despite an enormous increase in research activity in the last few years, haptics is still a technology in its infancy. The majority of commercially available haptic devices operate on the principle of point interaction: contact between the operator and the simulated virtual environment occurs only at a single arbitrary point, typically the tip of a stylus or a thimble used for interaction. Many studies have identified the importance of multifinger display in the field of haptics (Wall et al., 2001).

Haptic interfaces have been utilized in tele-manipulation (Ivanisevic et al., 2000) (Dubey et al., 2001) (Elhajj et al., 2001) (Ando et al., 2001), interaction with micro/nano-scale phenomena (Guthold et al., 2000) (Marliere et al., 2004), and medical training and evaluation (Basdogan et al., 2001) (Bardorfer et al., 2001) (Langrana et al., 1994). A multifinger haptic interface has higher potential for the above-mentioned applications than a single-point haptic interface, and a number of multifinger haptic interfaces (Kawasaki et al., 1993) (Ueda et al., 2004) (Walairacht et al., 2001) (Bouzit et al., 2002) (Yoshikawa et al., 2000) (CyberGlove Systems, 2009) have been developed. However, the issue of developing a haptic interface placed opposite to the human hand that reflects force to the fingertips had never been addressed. Such a haptic interface must be safe, work in a wide operation space, and present not only forces at the contact points but also the weight of virtual objects; it must also impose neither an oppressive feeling nor its own weight on the operator's hand. This chapter introduces the newly developed five-finger Haptic Interface RObot (HIRO II), based on HIRO (Kawasaki et al., 2003) (Alhalabi et al., 2004), which addresses these issues. HIRO II is a new haptic device that enables users to interact with and feel virtual objects safely, and it imposes no weight on the user's hand since it is neither a wearable device nor cumbersome. Two important issues guided the design of HIRO II: multiple interaction points and reflecting forces directly to the human fingertips, for which other haptic devices have failed to provide suitable solutions. In addition, the user's safety and ease of attachment were also considered in the design.

The first goal is to design a generic architecture that will support both haptic and graphic requirements by proposing a multi-layer architecture that can be used in a networked haptic environment to achieve shared virtual haptic interaction over the network.

It has been clearly shown that the simulation and graphics loops of virtual environment (VE) systems must run asynchronously in order to maintain reasonable display update rates (around 30 Hz) in the presence of long simulation computations. Such decoupling is even more critical for force display, where a high update rate is required to produce high-quality forces (Gossweiler et al., 1993) (Adachi et al., 1995) (Mark et al., 1996) (Balaniuk, 1999).

We can decouple the simulation and haptic loops on a single machine by using multiple processes (Adachi et al., 1995) (Balaniuk, 1999); this is the approach of most commercial devices. It is often more practical, however, to dedicate one real-time machine to the haptic servo loop and use other machine(s) for the rest of the virtual environment tasks (simulation, high-performance graphics, etc.). Our approach decouples haptic and graphics rendering, running each on a separate machine and allowing a high degree of independence between haptics and graphics. This strategy allows each machine to be well matched to its task. It also allows for flexible system configuration, which is particularly useful in a research environment.
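To make the decoupling concrete, the following minimal Python sketch runs a 1 kHz haptic servo loop and a roughly 30 Hz graphics loop in separate threads that share state through a lock. This is purely illustrative, not the system's actual code (which runs the two loops on separate machines); the virtual-floor force law and the 500 N/m stiffness are placeholder assumptions.

import threading
import time

class SharedState:
    """Latest fingertip position and computed force, shared by both loops."""
    def __init__(self):
        self.lock = threading.Lock()
        self.fingertip_pos = (0.0, 0.0, 0.0)
        self.force = (0.0, 0.0, 0.0)

def haptic_loop(state, period=0.001):            # 1 kHz servo rate
    while True:
        start = time.perf_counter()
        with state.lock:
            z = state.fingertip_pos[2]
            # placeholder force law: a virtual floor at z = 0, 500 N/m stiffness
            state.force = (0.0, 0.0, max(0.0, -z) * 500.0)
        time.sleep(max(0.0, period - (time.perf_counter() - start)))

def graphics_loop(state, period=1.0 / 30.0):     # ~30 Hz display rate
    while True:
        with state.lock:
            force = state.force
        print("render frame, current force:", force)  # stand-in for rendering
        time.sleep(period)

state = SharedState()
threading.Thread(target=haptic_loop, args=(state,), daemon=True).start()
graphics_loop(state)

In the real system the same separation is achieved across the network, with the real-time machine guaranteeing the servo period.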

The second goal of this research is to utilize the new HIRO in applications that are specifically sensitive to multifinger interaction, such as medical palpation and surgery. Interacting with soft deformable objects is therefore necessary. Most high-precision techniques for deformable objects tend to be computationally expensive, making high control rates (> 1 kHz) difficult to reach. Different solutions have been proposed to solve this problem, as will be discussed in detail in a later section (Balaniuk, 1999) (Barbagli, 2002) (Mendoza, 2001) (Cavusoglu, 2000). Most of these methods use intermediate representations or local models in the haptic loop to obtain force at a high update rate. Contributions to the literature, however, to the best of our knowledge, concern only single-point contact. With multiple points of contact touching a deformable object, the individual effect of each point on the other points must be considered, as well as how the object should be deformed based on the interaction of all fingers. Using a local model is not a solution, because the entire model, or a sufficiently large independent part of it, must be dealt with to take into account the effects of each finger on the model and on the other fingers. To do so, we take an "elementary displacement" approach in finite element modeling.

To verify the presented methods and approaches, the Future Haptic Science Encyclopedia was developed and presented as a new potential application that benefits greatly from haptic research to achieve a new level of realism. It also serves to fully demonstrate the potential of the new haptic interface and to verify the proposed architecture and the techniques for force calculation and collision detection that specifically consider multipoint-contact haptic interaction. Moreover, a physically based modeling technique for displaying forces and deformations has been used to achieve real-time interaction with relatively large deformable objects, where computation is expensive and the haptic update rate may otherwise drop below the requirement.


2. HIRO-II Design

2.1. Basic Design

A multifinger haptic interface joined to an arm can provide a wide operation space, but most such interfaces are mounted on the back of the human hand, like the CyberForce (CyberGlove Systems, 2009). Fixing the haptic interface to the hand gives an oppressive feeling to the operator, since the interface binds the human hand firmly. In order to reduce this oppressive feeling and increase safety, a three-finger haptic interface placed opposite to the human hand was presented by our group (Kawasaki et al., 2003). Later, we developed a new Haptic Interface RObot, named HIRO II, to present force to all fingers of the human hand. Figure 1 shows the developed five-finger haptic interface and how it is connected to the operator's hand. The haptic interface consists of an interface arm, a haptic hand with five haptic fingers, and a controller. When the operator moves his/her hand, the haptic interface follows the motion of the operator's fingers and presents force. The operator feels only slight oppressiveness because the only coupling between the human hand and the haptic interface is at the operator's fingertips.

Figure 1.

The developed five-finger haptic interface: HIRO II.

2.2. Haptic hand

The haptic hand starts from (but does not include) the wrist and ends with the fingertips. It consists of a hand base and five haptic fingers, as shown in Figure 2. The haptic fingers are designed to be similar to human fingers in geometry and motion ability. Table 1 shows the specifications of the haptic hand.

Hand
  Number of fingers     5
  Degrees of freedom    15 dof
  Weight                0.73 kgf

Finger
  Degrees of freedom    3 dof
  Weight                0.13 kgf
  Workspace of thumb    713 cm³
  Workspace of finger   535 cm³
  Output force          3.5 N (max)
  Velocity              0.23 m/s (max)

Table 1.

Specifications of the haptic hand.

The developed haptic finger is shown in Figure 3. The design of the finger is based on an anthropomorphic robot hand named Gifu Hand III (Mouri et al., 2002). Each finger has 3 joints, allowing 3 DOF. The first joint allows abduction/adduction, while the second and third joints allow flexion/extension. All joints are driven by DC servomotors with gear transmissions and rotary encoders. The motion ranges of the 1st to 3rd joints are -30 ~ 30, -25 ~ 94, and -10 ~ 114 [deg], respectively. The thumb is almost the same as the other fingers except for its reduction gear ratio and the movable ranges of joints 1 and 2; the motion ranges of the 1st and 2nd joints of the thumb are -40 ~ 40 and -25 ~ 103 [deg]. The workspaces of the thumb and hand are shown in Figure 4(a) and Figure 4(b). The workspace volumes of the thumb and fingers are 713 and 535 [cm³], respectively.

Figure 2.

Haptic hand design.

Figure 3.

Haptic finger.

Figure 4.

The workspaces of the thumb and the hand.

The thumb is designed to work in a wide space because the workspace of the human thumb is larger than that of the other fingers. The finger layout of the haptic hand is designed to maximize the product space between the workspace of the haptic hand and that of the human hand. Figure 5 shows the optimum pose of the haptic finger for the human index finger. The size was decided based on hand statistics of Japanese males. The lengths of the distal, middle, and proximal phalanxes are 39, 20, and 26 [mm], respectively. The workspaces of the human index finger and the haptic finger are 281 and 535 [cm³], respectively, and the product space at the optimum pose of the haptic finger is 259 [cm³]. The allocation of the fingers in the haptic hand was designed taking the above geometrical relation into consideration.
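As an illustration of how such a layout could be evaluated numerically, the sketch below estimates the product space of two workspaces by voxelizing sampled point clouds and counting the common voxels. The point clouds, which in practice would come from sampling each finger's forward kinematics over its joint ranges, and the voxel size are hypothetical stand-ins, not the actual design procedure.

import numpy as np

def voxel_set(points, voxel_mm=2.0):
    """Map 3D points (mm) to the set of voxel indices they occupy."""
    return set(map(tuple, np.floor(np.asarray(points) / voxel_mm).astype(int)))

def product_space_volume(points_a, points_b, voxel_mm=2.0):
    """Approximate intersection volume (mm^3) of two workspace point clouds."""
    common = voxel_set(points_a, voxel_mm) & voxel_set(points_b, voxel_mm)
    return len(common) * voxel_mm ** 3

# Hypothetical stand-ins for clouds sampled from forward kinematics:
haptic_ws = np.random.uniform(0.0, 80.0, size=(20000, 3))
human_ws = np.random.uniform(20.0, 70.0, size=(20000, 3))
print("product space ~ %.0f mm^3" % product_space_volume(haptic_ws, human_ws))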

In the second link of each finger, a 6-axis force sensor (NANO sensor by BL AUTOTEC, LTD.) is installed. The force sensor is used to control the human fingertip force. On the force sensor, a finger holder is mounted through a passive spherical joint attached to the force sensor by a permanent magnet, as shown in Figure 6. The operator inserts his/her fingertips into the finger holders to operate the interface. The passive spherical joint attached by the magnet has two roles. First, it adjusts for differences in orientation between the human and haptic fingers: each human finger has 6 DOF while each haptic finger has 3 DOF, so 3 additional passive DOF are needed. Second, it ensures the operator's safety by enabling him/her to remove their fingers easily and quickly should instability or a malfunction occur.

The maximum output torques of the 1st, 2nd, and 3rd joints are 0.81, 0.43, and 0.2 [Nm], respectively, which are equivalent to fingertip forces of 6.5, 3.5, and 3.8 [N], respectively. The maximum velocities of the 1st to 3rd joints are 1.9, 3.5, and 9.2 [rad/s], respectively, which are equivalent to fingertip velocities of 0.23, 0.43, and 0.48 [m/s], respectively. These specifications show that the haptic fingers can follow the motion of the human fingers in any task.

Figure 5.

Optimum pose of the haptic finger for the human finger.

Figure 6.

Finger holder.

2.3. Interface arm

The interface arm is designed to be as close as possible to the human arm in geometry and motion ability, as shown in Figure 7. The upper joint of the interface arm is the shoulder joint. The shoulder joint motion is simplified to 2 DOF, because the contribution of the third DOF to the arm's flexibility is too limited compared with the complexity of realizing it. The two implemented DOF are shoulder flexion/extension and shoulder adduction/abduction; the neglected DOF is shoulder medial/lateral rotation. The middle joint of the interface arm is the elbow joint, with 1 DOF generating elbow flexion/extension. The lower joint of the interface arm is the wrist joint, with 3 DOF: the first is forearm supination/pronation, the second is wrist flexion/extension, and the third is wrist abduction/adduction. The interface arm therefore has 6 joints allowing 6 DOF. The interface arm is similar in size to the human arm, and its joint motion ranges are compatible with it.

The lengths of the upper arm and the forearm are 0.3 and 0.31 [m], respectively. The arm joints are actuated by AC servomotors equipped with rotary encoders and gear transmissions. The motion ranges of the 1st to 6th joints are (-180, 180), (0, 180), (-90, 55), (-180, 180), (-55, 55), and (-90, 90) [degrees]. Figure 8 shows the movement space and the workspace of the haptic arm. The workspace is about 400 x 800 x 300 [mm]. Table 2 shows the specifications of the arm. The weight of the whole mechanism is reduced by 40% compared with HIRO (Kawasaki et al., 2003). The maximum output force is 45 N, and the maximum velocity at the haptic hand base is 0.4 [m/s].

Figure 7.

Arm Design.

Figure 8.

Workspace of the haptic arm.

Degrees of freedom      6
Output force            45 N
Output moment           2.6 Nm (max)
Translational velocity  0.4 m/s (max)
Rotational velocity     1.4 rad/s (max)
Weight                  6.9 kgf

Table 2.

Specifications of the haptic arm.

Haptic rendering

2.4. Multipoint haptic interaction

The goal of haptic rendering is to display the surface shape of arbitrary 3D objects in real time through a haptic interface. Initial haptic rendering methods focused on displaying object primitives. Later, Zilles and Salisbury (Zilles & Salisbury, 1995) developed a more sophisticated constraint-based method to render generic polygonal meshes. Previous methods are not sufficient to model the haptic interaction of more complex categories of haptic interfaces, such as gloves and haptic hands. Popescu and his colleagues (Popescu et al., 1999) used a haptic interaction mesh (a set of points used for haptic rendering) that accounts for the hand interaction model. The method provided a better representation of finger haptic interaction but still covers only a certain patch of the fingertip. The proposed haptic interaction technique considers the exact point of the fingertip that touches the virtual object and its orientation relative to the virtual object. For the sake of simplicity, we restrict ourselves to a static environment composed of deformable objects and a rigid hand. Haptic rendering is implemented by first checking for collisions between the fingertips of the simulated fingers and the virtual objects. Since the finger is considered non-deformable, the fingertip's displacement vector $d$, with components $(d_x, d_y, d_z)$, is measured between the "ideal haptic interface point" (IHIP) on the surface and the centroid of the collided polygon of each fingertip. The concept of calculating the displacement vector is shown in Figure 9. This displacement vector is applied to the centroid of the collided polygon of the virtual deformable object and treated as the fingertip penetration distance (Alhalabi et al., 2004).

Figure 9.

Penetration distance calculation after collision detection.
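The following sketch illustrates the displacement-vector idea of Figure 9. The function names, the example triangle, and the sign convention are ours, not the chapter's code, and the collision detection itself (finding which fingertip polygon collided and where the IHIP lies) is assumed to have already run.

import numpy as np

def triangle_centroid(v0, v1, v2):
    """Centroid of the collided fingertip triangle."""
    return (v0 + v1 + v2) / 3.0

def penetration_vector(ihip, tip_v0, tip_v1, tip_v2):
    """d = (d_x, d_y, d_z): from the IHIP on the object surface to the
    centroid of the collided fingertip polygon."""
    return triangle_centroid(tip_v0, tip_v1, tip_v2) - ihip

# Example: a fingertip triangle that has sunk 2 mm below the surface point
ihip = np.array([0.0, 0.0, 0.0])                # IHIP kept on the surface
tip = [np.array([-0.003, 0.0, -0.002]),         # collided fingertip triangle
       np.array([0.003, 0.0, -0.002]),
       np.array([0.0, 0.003, -0.002])]
print("penetration vector d:", penetration_vector(ihip, *tip))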

2.5. Haptic rendering with deformable objects

Real-time dynamic analysis of force-reflecting deformable objects using finite element techniques is quite difficult with the computational power available. In a virtual environment (VE) simulation of interaction with deformable bodies, most high-precision techniques for deformable objects tend to be computationally expensive, making it difficult to reach high control rates (> 1 kHz). It is not possible to increase the update rate of the physical model to the haptic rate at its full complexity, due to computational limitations. Among the many solutions proposed to solve this problem, one is to apply the same force between model updates, or to low-pass filter the generated force to the bandwidth of the model update rate. These effectively reduce the haptic update rate to the visual update rate, impairing the fidelity of haptic interaction, a point that is especially grave when high-frequency interaction forces are required, for example in nonlinear contact phenomena.

Many solutions have been proposed based on the concept of an intermediate representation or local model, where the basic idea is to use a simple implicit function that approximates, to a good extent, the small part of the object being touched; the intermediate representation or local model represents the part of the object closest to the current position of the haptic interface. Past local models, typically the god-object (Zilles & Salisbury, 1996) or the proxy (Ruspini et al., 1997), have been implemented using different techniques. These approaches are typically used for single-point-contact haptic interaction with rigid objects. Adachi et al. used a plane always tangent to the virtual object surface as an intermediate representation (Adachi et al., 1995), as did Mark et al. (Mark et al., 1996), while Balaniuk (Balaniuk, 1999) used a spherical local model. These two concepts, the intermediate representation and the local model, have been adopted by many researchers for particular cases of deformable objects. Barbagli et al. (Barbagli et al., 2002) presented techniques based on the concept of a local model adapted to deformable objects. (Mendoza & Laugier, 2001) proposed a local model based on the topology of the virtual object, taking the set of facets where interaction takes place around a long virtual probe. Cavusoglu and Tendick (Cavusoglu & Tendick, 2000) used a linearization of the non-linear deformable model to obtain a simpler model. None of these approaches, however, has fully exploited the entire model in haptic control; they deal only with small representations of the model, and all available literature is concerned only with single-point contact. When multiple points of contact are involved in touching deformable objects, the effect of individual points on the other points must be considered, as well as how the object is to be deformed based on the interaction of all fingers. Using a local model is not sufficient to solve this problem, because the entire model, or a sufficiently large independent part of it, must be dealt with to consider the effects of individual fingers on the model and on the other fingers.

The physically based modeling technique we propose for displaying force and deformation with multipoint contact overcomes the difficulty of real-time dynamic analysis of force-reflecting deformable objects using finite element techniques, which is quite difficult given the available computational power. To do so, we take an "elementary displacement" approach in finite element modeling.

2.5.1. Elementary displacement approach

Tissues are in general heterogeneous, exhibiting non-linear, anisotropic elastic and viscous behavior. A model combining all of these quickly becomes complex and requires a huge calculation core, which tends to eliminate the possibility of real-time simulation. Solving this problem requires a number of simplifications. We chose the linear elastic deformation model known as Hooke's law. Although this model is valid only for very small deformations and strains, it is widely used in surgical simulators, where it has been proven to provide satisfactory results.

Consider an elastic solid Ω in three-dimensional space $\chi = [x, y, z]$ with boundary conditions in terms of given force and displacement on the surface $A$. The strain energy of the solid is defined as:

$$E_{strain} = \frac{1}{2} \int_{\Omega} \varepsilon^{T} \sigma \, dx \qquad (1)$$

where $\varepsilon = [\varepsilon_x \; \varepsilon_y \; \varepsilon_z \; \gamma_{xy} \; \gamma_{xz} \; \gamma_{yz}]^T$ is the strain vector. By defining a displacement vector $U = [u \; v \; w]^T$, the strain vector is rewritten as $\varepsilon = BU$, where

$$B = \begin{bmatrix} \partial/\partial x & 0 & 0 \\ 0 & \partial/\partial y & 0 \\ 0 & 0 & \partial/\partial z \\ \partial/\partial y & \partial/\partial x & 0 \\ \partial/\partial z & 0 & \partial/\partial x \\ 0 & \partial/\partial z & \partial/\partial y \end{bmatrix} \qquad (2)$$

The stress vector $\sigma$ is related to the strain vector through Hooke's law by $\sigma = D\varepsilon$. The matrix $D$ reads:

$$D = \frac{E}{(1+\nu)(1-2\nu)} \begin{bmatrix} 1-\nu & \nu & \nu & 0 & 0 & 0 \\ \nu & 1-\nu & \nu & 0 & 0 & 0 \\ \nu & \nu & 1-\nu & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{1-2\nu}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{1-2\nu}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{1-2\nu}{2} \end{bmatrix} \qquad (3)$$

where $E$ is Young's modulus of elasticity and $\nu$ is the Poisson ratio.

Using these relations, we rewrite the strain energy and add the work done by a given external force $f$ to yield the potential energy function:

$$\Pi(U) = \frac{1}{2} \int_{\Omega} U^{T} B^{T} D B U \, dx - \int_{A} f^{T} U \, dx \qquad (4)$$

After discretizing the solid Ω by volumetric finite elements and equating the first variation of the potential energy to zero, we arrive at the static equilibrium equation of a discrete element:

$$0 = \int_{\Omega_e} B_e^{T} D B_e U_e \, dv - f_e = K_e U_e - f_e \qquad (5)$$

where the matrix $B_e$ depends on the type of finite element chosen. We then assemble global matrices from the element matrices obtained in Eq. (5), describing the solid Ω by a large sparse system of linear equations $KU = F$.
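As a sketch of this assembly step (ours, under the assumption of linear tetrahedral elements with 4 nodes x 3 DOF, so each $K_e$ is 12 x 12), the element matrices can be scattered into a global sparse matrix as follows:

import numpy as np
import scipy.sparse as sp

def assemble_global_stiffness(n_nodes, elements, element_stiffnesses):
    """Scatter each 12x12 element matrix K_e into the global sparse K."""
    rows, cols, vals = [], [], []
    for elem_nodes, Ke in zip(elements, element_stiffnesses):
        # global DOF indices: 3 DOF (x, y, z) for each of the 4 element nodes
        dofs = np.array([3 * n + c for n in elem_nodes for c in range(3)])
        for i in range(12):
            for j in range(12):
                rows.append(dofs[i]); cols.append(dofs[j]); vals.append(Ke[i, j])
    # duplicate (row, col) pairs are summed, which is exactly FEM assembly
    return sp.csr_matrix((vals, (rows, cols)), shape=(3 * n_nodes, 3 * n_nodes))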

For haptics, we are interested only in the deformations of visible nodes and in the interaction with contact nodes, so we need not consider the motion of nodes that are not visible (i.e., internal nodes) or forces applied at nodes that are not in direct contact with the fingertips. We tried several solvers and approaches, including condensation of the model matrices and the use of pre-calculated stiffness-matrix inverse updates, and found them too slow for our desired discretization level. Very good performance was achieved with the "elementary displacement" approach, which takes advantage of the model's linearity, where the principle of superposition is valid. In an off-line calculation, all possible loading cases for elementary (usually unitary) displacements are examined; the calculated deformations and equivalent forces at controlled points are saved to files. Deformation data are stored as a set of 3 x 3 tensors $[T^d_{nk}]$, expressing the relation between the displacement of node $n$ and the elementary displacement $d_k = [d_{xk} \; d_{yk} \; d_{zk}]$ imposed at node $k$. In the same way, the components of the elementary equivalent force are stored in 3 x 3 tensors $[T^f_k]$. Since only surface nodes are considered, the amount of stored data is dramatically reduced.
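A minimal off-line sketch of this precomputation could look as follows. It is our illustration, assuming the global stiffness matrix K has already been constrained (rigid-body modes fixed), and it uses dense linear algebra with a row-replacement constraint for brevity, where a real implementation would exploit sparsity:

import numpy as np

def precompute_elementary_tensors(K, surface_nodes):
    """For each surface node k and each axis, impose a unit displacement at k,
    solve the constrained system, and store the 3x3 response blocks T[n, k]."""
    ndof = K.shape[0]
    n_surf = len(surface_nodes)
    T = np.zeros((n_surf, n_surf, 3, 3))
    for ki, k in enumerate(surface_nodes):
        for axis in range(3):
            dof = 3 * k + axis
            A, b = K.copy(), np.zeros(ndof)
            A[dof, :] = 0.0; A[dof, dof] = 1.0; b[dof] = 1.0  # unit displacement
            u = np.linalg.solve(A, b)                          # full response
            for ni, n in enumerate(surface_nodes):
                T[ni, ki, :, axis] = u[3 * n:3 * n + 3]
    return T  # saved to file off-line; only surface nodes are stored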

In the real-time calculation, the pre-calculated results are scaled based on the robot fingertip movement and added. The displacement $d_n$ of model node $n$ induced, for example, by a constraint $d_k^*$ applied to node $k$ is obtained by the following linear equation:

$$d_n = [T^d_{nk}] \left| \frac{d_k^*}{d_k} \right| \qquad (6)$$

for any node $k \neq n$, where $\left| d_k^* / d_k \right|$ denotes component-by-component vector division. HIRO II has five points of contact, so more than one node may be touched at the same time and more than one node may be moved. In this case the total displacement of a node is the sum of all displacements induced by the $m$ touched nodes:

$$d_n = \sum_{l=1}^{m} [T^d_{nk_l}] [\tilde{d}_{k_l}] \qquad (7)$$

where $[\tilde{d}_{k_l}]$ is the corrected displacement of the touched node (Cotin et al., 1999). The force associated with the touched node $k$ is determined as:

$$f_n = [T^f_{nk}] [\tilde{d}_k] \qquad (8)$$

The finite element method combined with the "elementary displacement" approach has been successfully implemented and tested in a breast palpation application (Alhalabi et al., 2005) (Daniulaitis et al., 2004). We currently deal with models containing 500 to 600 surface nodes, simulated at above 1600 Hz. Further refining the mesh and increasing the node number does not significantly affect the sense of touch, but it dramatically increases the memory required to store the elementary displacements and forces, which makes the simulation impractical.
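The real-time side of Eqs. (7)-(8) then reduces to tensor lookups and sums, which is what keeps the update rate high. Below is a hedged sketch: the tensors T_d and T_f come from the off-line step, the stand-in values and touched-node list are hypothetical, and extending Eq. (8) by superposition over all touched nodes is our reading of the multipoint case.

import numpy as np

def node_displacement(n, touched, T_d):
    """Eq. (7): d_n = sum over touched nodes l of T_d[n, k_l] @ d~_{k_l}."""
    return sum(T_d[n, k] @ d_k for k, d_k in touched)

def reflected_force(n, touched, T_f):
    """Eq. (8), extended by superposition over all touched nodes (our reading)."""
    return sum(T_f[n, k] @ d_k for k, d_k in touched)

# Hypothetical usage with two fingertips in contact (stand-in tensors):
n_surf = 4
T_d = np.tile(np.eye(3) * 0.1, (n_surf, n_surf, 1, 1))    # displacement tensors
T_f = np.tile(np.eye(3) * 50.0, (n_surf, n_surf, 1, 1))   # force tensors
touched = [(0, np.array([0.0, 0.0, -0.002])),   # (node index, corrected disp., m)
           (2, np.array([0.0, 0.0, -0.001]))]
print("d at node 3:", node_displacement(3, touched, T_d))
print("f at node 0:", reflected_force(0, touched, T_f))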

2.5.2. Elementary displacement approach evaluation

We carefully tested the elementary displacement approach and compared it with different approaches to confirm its performance and to ensure that it can generate realistic forces at update rates exceeding the 1 kHz required in the haptic loop. Figure 10 shows the relationship between node number and calculation frequency for three approaches. Clearly, the elementary displacement approach is superior to the other approaches and calculates force at high frequency.

Figure 10.

Frequency-node number curve.

Applications

2.6. Future Haptic Science Encyclopedia (FHSE)

We developed the Future Haptic Science Encyclopedia (FHSE) to explore the high potential and usability of HIRO II while testing the proposed architecture and verifying the force and deformation calculations. The FHSE enables users to experience many different virtual worlds, such as scientific, historical, and astronomical worlds. Three kinds of encyclopedia, "Planets in the solar system", "The dinosaur's world", and "The microscopic world", were implemented and tested, as detailed in the next sections. An interesting feature of the FHSE is that it is a complete VR interface in which the user uses only HIRO II to interact and to move between and around scenes. The controls introduced in this system include virtual buttons, levers, and arrows, removing the need for devices such as joysticks and keyboards (Figure 11). These controls reflect forces similar to those induced by actual controls (switches, buttons, and levers), making interaction easier, more realistic, and enjoyable. A message board explains the current scene to the user and the next action to take. Sound guidance conveying most of the content of the displayed messages is also provided, making it easier for users to concentrate on the message board during interaction; both visual and audio messages are thus available to supply adequate information at any time. This gives participants more confidence to move around without outside help and the ability to adapt to the system quickly.

Figure 11.

Selection by control levers, buttons, and arrows.

2.6.1. Hardware setup

The system consists of a VR station PC (graphics station), where the VR simulation runs, and a control PC for controlling the haptic interface. The VR station PC runs at 2.8 GHz, is equipped with 1.5 GB of memory and a high-performance stereoscopic graphics card (NVIDIA Quadro FX 1300), and runs on Windows 2000. The control PC runs a real-time OS (ART-Linux) to ensure a 1 kHz update rate for the haptic and control threads. To provide a better sense of immersion, the operator is asked to wear an HMD (head-mounted display). Since the whole system was to be exhibited at World Expo 2005, it was very important to provide 3D graphics not only to the user but also to the whole audience, enabling them to share and enjoy the demonstration with the subject. Therefore, a large screen was chosen to display the stereoscopic scenes, and the audience wears polarized glasses, which are very light and easy to wear. The system architecture is shown in Figure 12.

Figure 12.

The system architecture.
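As an illustration of the communication between the two machines, a minimal exchange could look like the following sketch. The UDP transport, the packet layout (15 floats for five fingertip positions), and the address and port are all assumptions, not the exhibited system's protocol.

import socket
import struct

VR_STATION = ("192.168.0.10", 5005)   # hypothetical address and port

def exchange(sock, fingertips):
    """Send 5 fingertip positions (15 floats), receive 5 force vectors back."""
    payload = struct.pack("<15f", *[c for tip in fingertips for c in tip])
    sock.sendto(payload, VR_STATION)
    data, _ = sock.recvfrom(1024)            # raises socket.timeout if late
    f = struct.unpack("<15f", data[:60])
    return [f[i:i + 3] for i in range(0, 15, 3)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.001)   # keep a 1 kHz servo loop from blocking on the network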

2.6.2. The solar system

In this world, users experience the effect of different gravity conditions on the Moon and Mars by lifting virtual objects, such as a ball or a weight, and feeling the differences in weight and dynamics. Figure 13 shows the ball grasped and lifted on the Moon. Physical laws of motion are applied to enable five-finger grasping and to calculate correct forces such as gravity, inertia, and friction.

Figure 13.

Ball grasped and lifted on the Moon.
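The sketch below illustrates the underlying idea for a static grasp: the fingertip contacts jointly support the object's weight m·g, which differs between worlds. The 1 kg mass and the even split of the load over the five fingers are illustrative assumptions, not the simulation's actual contact model.

import numpy as np

GRAVITY = {"Moon": 1.62, "Mars": 3.71, "Earth": 9.81}   # m/s^2

def fingertip_loads(mass_kg, world, n_fingers=5):
    """Weight share (N) carried by each fingertip for a static grasp."""
    return np.full(n_fingers, mass_kg * GRAVITY[world] / n_fingers)

for world in ("Moon", "Mars"):
    print(world, fingertip_loads(mass_kg=1.0, world=world))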

2.6.3. The dinosaur’s world

This world provides a unique experience by taking users to a mysterious vanished world and letting them touch and feel dinosaurs. A 5-ton, 8-meter-long triceratops is simulated, divided into three touchable parts: head, tail, and leg. Each part is a deformable model implemented using the physical modeling described in the previous section. The physical model has been scaled down so that the hand can easily reach and touch it (Figure 14).

Figure 14.

Tail deformed according to the user’s fingers interaction.

2.6.4. The microscopic world

In this simulation, users explore the micro world of a tiny organism called Daphnia longispina, about 1.5 mm in size (Figure 15).

Figure 15.

Interaction with Daphnia horn.

Users touch either the horn or the belly of the Daphnia through a selection screen. The physical models have been scaled up so that each part can be touched easily. An animation reflecting the real movement of the Daphnia is also simulated.

2.7. Experimental results

World Expo 2005 was a great chance to demonstrate HIRO II and the FHSE. Thousands of people watched the demonstrations, and many had the opportunity to try the system. Participants were asked to navigate around the FHSE freely by choosing the world they wanted to explore. No time limit was imposed on each trial or in each world, although a maximum of 10 minutes was set for those trying out the FHSE. Subjects were asked to fill out a questionnaire at the end of their experience. The questionnaire recorded data on both HIRO II and the FHSE. The data presented here are based on the results for 30 subjects from different backgrounds and of different ages; there was sufficient variation in age that the questionnaire reflects a wide spectrum of subjects.

2.7.1. Human performance

To facilitate effective, efficient, and safe operation of any VE system, the measurement of human performance should be a consideration throughout the system's life cycle. Subjective and objective evaluations were conducted. Since we present the FHSE as a complete VE interface in which making selections and moving between scenes is achieved by what we call controls, it is important to evaluate how well these controls provide a suitable way for users to interact inside the VE. The subjective evaluation was based on three factors: the ease of use of the controls in selection and transition between scenes, the naturalness of control manipulation, and how satisfied subjects were with such interaction. These were measured on a scale of 1-7. The results showed that books and buttons were easy and natural to use and that subjects were very satisfied with them (Figure 16), followed by arrows and then levers, which presented some difficulty in manipulation. Table 3 shows the number of samples, mean, and standard deviation for each control based on the individual factors.

Figure 16.

Subjective evaluation of controls.

                Arrow              Book               Button             Lever
                Count Mean  SD     Count Mean  SD     Count Mean  SD     Count Mean  SD
Easy            5     4.40  2.41   5     6.40  0.55   5     5.60  1.14   5     3.40  1.67
Natural         5     4.40  1.52   5     5.80  0.45   5     5.20  0.45   5     4.20  1.10
Satisfaction    5     4.40  1.14   5     5.40  0.89   5     5.40  0.55   5     3.20  1.64

Table 3.

The number of samples, mean, and standard deviation for each control based on individual factors.

The objective evaluation was based on an error factor: when a subject tried to hit a control and missed, it was counted as an error. Figure 17 shows the error evaluation based on misses. Errors were mostly committed in hitting the lever, with few errors in hitting the other controls. The reason is the need to properly perceive the location of the lever in 3D space, because it must be pushed from one side, something not all subjects could perceive correctly the first time. This was also reflected in the objective results. Nevertheless, given the same scenario for all subjects, in which they had to hit each control more than 10 times, it is clear that the arrow, book, and button were satisfactory, but the lever requires modification to be used more intuitively. Table 4 shows the number of samples, mean, and standard deviation of the error for each control.

Figure 17.

Control error (bars show means).

         Count  Mean  Std Deviation
Arrow    5      1.40  1.52
Book     5      0.60  0.89
Button   5      0.40  0.89
Lever    5      3.60  1.14

Table 4.

The number of samples, mean, and standard deviation of error for each control.

Figure 18.

Overall FHSE simulation evaluation.

2.7.2. Overall evaluation

Only 13.8% of subjects were not satisfied with the Future Haptic Science Encyclopedia, while 55.2% were satisfied, 27.6% fairly satisfied, and 3.4% very satisfied (Figure 18). Interestingly, most subjects were very interested in the microscopic world, which was tried by over 51.1% of them, while 40.4% expressed interest in the dinosaur world and only 8.5% were interested in the solar system.


3. Summary

A multifinger haptic interface is the ultimate human haptic interface: it provides natural manipulation and feeling at the tip of each finger, just as the real hand interacts with the real world in daily-life tasks. This chapter introduced the second version of HIRO, including the hardware design, the control algorithm, and the software design covering the system architecture, haptic rendering, and a practical VR application. The hand with the arm provides a large workspace and reflects the human hand movement much as if you were looking at your own hand and arm in a mirror, with some differences related to hardware limitations and control methods that still need to be improved.

A comprehensive haptic system was presented through the demonstration of the Future Haptic Science Encyclopedia. The architecture used was a multi-layer network framework that decoupled haptics from graphics onto separate machines and maintained the high update rate needed for the haptic servo loop. An accurate collision detection algorithm that considers the shape of the human fingertips was also developed. Physically based modeling using the finite element method (FEM) combined with the elementary displacement approach proved to ensure stable interaction and realistic force calculation for a relatively complex deformable model in real time. The approach takes into consideration interaction with multipoint contacts, where other approaches fail to achieve such interaction due to the difficulty of calculating forces at 1 kHz.

The FHSE application was demonstrated at World Expo 2005. People enjoyed the demonstration, and the majority were satisfied with the system, as it provided a unique experience.

Finally, combining technologies such as virtual reality and haptic robotics enables us to extend the capabilities and effectiveness of simulators. We anticipate that the proposed system will draw the attention of many researchers to the feasibility of implementing multifinger haptic interfaces. Although everyone agrees on the importance of such interfaces, many obstacles still prevent the advancement of this research field; this chapter provided a complete picture of how it could be achieved. The greatest lesson of this project was that collaboration between the robotics community and the virtual reality community is necessary to achieve a better and more sophisticated haptic robot. Even though I led the software development side, including the haptic rendering and the VR application, I was involved from the beginning in the design of the hardware and the control algorithms as well, and useful feedback was the result of this collaboration.

There are a number of future directions for this research. First, it is desirable to test the presented architecture in the presence of large delays and to apply techniques to compensate for them. Second, a more robust control algorithm for the arm movements should be developed to reflect the exact motion of the human hand, which would make manipulation of the interface very natural. Finally, the hardware itself should be made smarter and safer.

References

  1. Adachi, Y., Kumano, T. & Ogino, K. (1995). Intermediate representation for stiff virtual objects, Proceedings of the IEEE Virtual Reality Annual International Symposium, Research Triangle Park, North Carolina, pp. 203-210.
  2. Alhalabi, M. O., Daniulaitis, V., Kawasaki, H. & Hori, T. (2005). Medical training simulation for palpation of subsurface tumor using HIRO, Proceedings of WorldHaptics, Italy, pp. 623-624.
  3. Alhalabi, M. O., Daniulaitis, V., Kawasaki, H., Tanaka, Y. & Hori, T. (2004). Haptic interaction rendering technique for HIRO: an opposite human hand haptic interface robot, Proceedings of EuroHaptics, Germany, pp. 459-462.
  4. Ando, N., Korondi, P. & Hashimoto, H. (2001). Development of micromanipulator and haptic interface for networked micromanipulation, IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 4, pp. 417-427.
  5. Balaniuk, R. (1999). Using fast local modeling to buffer haptic data, Proceedings of the Fourth PHANTOM Users Group Workshop (PUG99).
  6. Barbagli, F., Prattichizzo, D. & Salisbury, K. (2002). Multirate analysis of haptic interaction stability with deformable objects, Proceedings of the 41st IEEE Conference on Decision and Control, pp. 917-922.
  7. Bardorfer, A., Munih, M., Zupan, A. & Primozic, A. (2001). Upper limb motion analysis using haptic interface, IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 3, pp. 253-260.
  8. Basdogan, C., Ho, C. H. & Srinivasan, M. A. (2001). Virtual environments for medical training: graphical and haptic simulation of laparoscopic common bile duct exploration, IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 3, pp. 269-286.
  9. Bouzit, M., Burdea, G., Popescu, G. & Boian, R. (2002). The Rutgers Master II - new design force-feedback glove, IEEE/ASME Transactions on Mechatronics, Vol. 7, No. 2, pp. 256-263.
  10. Cavusoglu, M. C. & Tendick, F. (2000). Multirate simulation for high-fidelity haptic interaction with deformable objects in virtual environments, Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2458-2464.
  11. Cotin, S., Delingette, H. & Ayache, N. (1999). Real-time elastic deformations of soft tissues for surgery simulation, IEEE Transactions on Visualization and Computer Graphics, Vol. 5, No. 1, pp. 62-73.
  12. CyberGlove Systems (2009). http://www.cyberglovesystems.com/products/hardware/cyberforce.php, retrieved August 2009.
  13. Daniulaitis, V., Alhalabi, M. O., Kawasaki, H., Tanaka, Y. & Hori, T. (2004). Medical palpation of deformable tissue using physics-based model for Haptic Interface RObot (HIRO), Proceedings of IROS 2004, Japan, pp. 3907-3911.
  14. Dubey, R. V., Everett, S. E., Pernalete, N. & Manocha, K. A. (2001). Teleoperation assistance through variable velocity mapping, IEEE Transactions on Robotics and Automation, Vol. 17, No. 5, pp. 761-766.
  15. Elhajj, I., Xi, N., Fung, W. K., Liu, Y. H., Li, W. J., Kaga, T. & Fukuda, T. (2001). Haptic information in internet-based teleoperation, IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 3, pp. 295-304.
  16. Gossweiler, R., Long, C., Koga, S. & Pausch, R. (1993). DIVER: a distributed virtual environment research platform, Proceedings of the IEEE 1993 Symposium on Research Frontiers in Virtual Reality, San Jose, California, pp. 10-15.
  17. Guthold, M. et al. (2000). Controlled manipulation of molecular samples with the Nanomanipulator, IEEE/ASME Transactions on Mechatronics, Vol. 5, No. 2, pp. 189-198.
  18. Ivanisevic, I. & Lumelsky, V. J. (2000). Configuration space as a means for augmenting human performance in teleoperation tasks, IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 30, No. 3, pp. 471-484.
  19. Kawasaki, H. & Hayashi, T. (1993). Force feedback glove for manipulation of virtual objects, Journal of Robotics and Mechatronics, Vol. 5, No. 1, pp. 79-84.
  20. Kawasaki, H., Takai, J., Tanaka, Y., Mrad, C. & Mouri, T. (2003). Control of multi-fingered haptic interface opposite to human hand, Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), Las Vegas, pp. 2707-2712.
  21. Langrana, N. A., Burdea, G., Lange, K., Gomez, D. & Deshpande, S. (1994). Dynamic force feedback in a virtual knee palpation, Artificial Intelligence in Medicine, Vol. 6, pp. 321-333.
  22. Mark, W., Randolph, S., Finch, M., Van Verth, J. & Taylor, R. (1996). Adding force feedback to graphics systems: issues and solutions, Proceedings of SIGGRAPH '96, pp. 447-452.
  23. Marliere, S., Urma, D., Florens, J. & Marchi, F. (2004). Multi-sensorial interaction with a nano-scale phenomenon: the force curve, Proceedings of EuroHaptics, pp. 246-252.
  24. Mendoza, C. & Laugier, C. (2001). Realistic haptic rendering for highly deformable virtual objects, Proceedings of the Virtual Reality Conference, pp. 264-269.
  25. Mouri, T., Kawasaki, H., Yoshikawa, K., Takai, J. & Ito, S. (2002). Anthropomorphic robot hand: Gifu Hand III, Proceedings of the International Conference ICCAS 2002, Korea, pp. 1288-1293.
  26. Popescu, V., Burdea, G. & Bouzit, M. (1999). Virtual reality simulation modeling for a haptic glove, Proceedings of the Computer Animation '99 Conference, Geneva, Switzerland, pp. 195-200.
  27. Ruspini, D. C., Kolarov, K. & Khatib, O. (1997). The haptic display of complex graphical environments, Proceedings of SIGGRAPH '97, pp. 345-352.
  28. Ueda, Y. & Maeno, T. (2004). Development of a mouse-shaped haptic device with multiple finger inputs, Proceedings of the International Conference on Intelligent Robots and Systems, pp. 2886-2891.
  29. Walairacht, S., Ishii, M., Koike, Y. & Sato, M. (2001). Two-handed multi-finger string-based haptic interface device, IEICE Transactions on Information and Systems, Vol. E84-D, No. 3, pp. 365-373.
  30. Wall, S. A. & Harwin, W. S. (2001). Design of a multiple contact point haptic interface, Proceedings of EuroHaptics 2001, University of Birmingham.
  31. Yoshikawa, T. & Nagara, A. (2000). Development and control of touch and force display devices for haptic interface, Proceedings of SYROCO, pp. 427-432.
  32. Zilles, C. & Salisbury, J. (1996). A constraint-based god-object method for haptic display, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Human Robot Interaction and Cooperative Robots, Vol. 3, pp. 146-151.
  33. Zilles, C. & Salisbury, K. (1995). A constraint-based god-object method for haptic display, Proceedings of the IEEE International Conference on Intelligent Robots and Systems '95, Pittsburgh, PA, Vol. 3, pp. 146-151.
