Open access

Design of a Bio-Inspired 3D Orientation Coordinate System and Application in Robotised Tele-Sonography

Written By

Courreges Fabien

Submitted: 07 October 2010 Published: 09 June 2011

DOI: 10.5772/16264

From the Edited Volume

Robot Arms

Edited by Satoru Goto


1. Introduction

In designing a dedicated robotised telemanipulation system, the first step should be to analyse the task targeted by the teleoperation system. This analysis is essential to obtain cues for the design of the robot mechanics, the human-system interface, and the teleoperation control. In this chapter we focus mainly on orientation-based tasks, that is, tasks consisting in orienting the remote robot's end-effector in 3D space. One major application considered here is the robotised tele-sonography medical examination, in which a medical expert pilots the orientation of an ultrasound (US) probe in real time to scan a remote patient by means of a robot arm handling the probe. We have focused our approach on the telesonography application in order to analyse the task of setting the orientation of an object in space around a fixed centre of motion. This analysis takes several points of view into account: perceptual and psychophysical analysis, experimental tracking of the orientation applied by the hand, and the analysis of recommended medical sonography practices.

From these studies we have developed a new frame of three angles enabling the definition of an orientation. Indeed, to define an orientation in 3D space (also called attitude), a representation system with at least three degrees of freedom or coordinates is required. This new frame was designed in such a way that its three degrees of freedom are decoupled with respect to human psychophysical abilities; that is, each angle of the frame can be easily assessed and varied by hand without changing the values of the other angles. Hence the so-called hand-eye coordination can be improved when this system of representation is used in interface design. We name this new system "H-angles", where the H recalls the Human-centred design of the system. We will also show that standard rotation coordinate systems such as the Euler angles and quaternions cannot offer such properties. Our new frame of angles can therefore lead to several applications in the field of telerobotics; indeed, we will provide cues indicating that the considerations used to design it are not limited to the context of the telesonography application.

This chapter is devoted to presenting the foundations which led to the design of a new bio-inspired frame of angles for attitude description, but we will also present one major application of this frame: the design of a mouse-based teleoperation interface to pilot the 3D orientation of the remote robot's end-effector. This application arises from the fact that orienting an object in 3D space by means of a computing system requires specific man-machine interfaces if the task is to be achieved quickly and easily. Such interfaces often rely on sophisticated and costly technologies to sense the orientation of the user's hand handling the interface. The fields of activity concerned are not limited to robot telemanipulation; they also include computer-aided design, interaction with virtual-reality scenes, and teleoperation of manufacturing machines. When the targeted applications are related to the welfare of the whole society, as medical applications are, the cost and availability of the system raise the problem of fair access to those high-tech devices, which is an ethical issue.
The proposed system of angles enables the development of methods to perform advanced telemanipulation orientation tasks with a robot arm by means of low-cost interfaces and infrastructures (except, probably, the robot itself). The most expensive element in such a teleoperation scheme will thus remain the robot, but for a networked robot accessible to multiple users its cost could be shared among those users. In this chapter we will show a new method for using a standard wheeled IT mouse to pilot the 3D orientation of a robot's end-effector in an ergonomic fashion by means of the H-angles. In the context of the telesonography application, we will show how to use this method to teleoperate the orientation of a remote medical ultrasound scanning robot with a mouse. The remainder of the chapter is structured as follows: the second section provides our analysis, in three parts, deriving cues and specifications for the design of a new frame of angles adapted to human psychophysical abilities. The third section presents our approach, relying on those cues, to derive a new frame of angles for attitude description; it also shows that the proposed system exhibits much stronger decorrelation among its degrees of freedom (DOF) than the ZXZ Euler system, and analyses the singularities of the new system. The fourth section addresses our first application of the new frame of angles, namely the setting of 3D rotations with the IT mouse; this section starts with a review of state-of-the-art techniques in the field and ends with experimental psychophysical results obtained in the context of the telesonography application, showing the large superiority of our frame of angles over the standard ZXZ Euler system. The last section concludes with an overview of further applications and research opportunities.


2. Design and analysis of a psychophysically adapted frame of angles for orientation description

The sensorimotor process by which a human adult achieves the task of orienting a rod by hand can be modelled according to the simplified scheme from perception to action shown in figure 1. This scheme is simplified and may be incomplete, but it reflects the current common trend of thought in neuroscience concerning information encoding and transformation from perception to action.

As reported in the neuroscience literature, the human brain can resort to several reference frames for perceptual modalities and action planning (Desmurget et al., 1998). Moreover, according to Goodale (Goodale et al., 1996), the existence of separable human visual systems for perception and action implies that the structure of an object in a perceptual space may not be the same as in an interactive space, which in turn implies coordinate frame transformations. Figure 1 proposes the integration of multimodal information in the sensorimotor cortex to generate a movement plan in one common reference frame.

Figure 1.

Simplified Human perception to action process

This concept comes from neurophysiological evidence reported by Cohen and Andersen (Y.E. Cohen & Andersen, 2002). Some research works (Paillard, 1987) also report the existence of two parallel information-processing channels, cognitive and sensorimotor, which is reflected in figure 1. The idea of perception as action-dependent has been particularly emphasized by motor theories of perception, i.e. approaches claiming that perceptual content depends in an essential way on the joint contribution of sensory and motor determinations (Sheerer, 1984). This theory holds that action and perception are not independent cognitive domains and that perception is constitutively shaped by action. This idea is accounted for in figure 1 by considering that motor variations are programmed in several frames of reference associated with each perceptual channel. Likewise, an inverse kinematics model learned by trial and error in infancy has been shown to be implemented by the central nervous system for motor control (Miall & Wolpert, 1996).

As depicted in figure 1, the task of handling a rod and making it rotate in space about a fixed centre of motion involves three perceptual modalities: vision, haptics, and kinaesthetic proprioception. The meaning of visual perception is unambiguous, and this modality is essential for precise motor control (Norman, 2002). The haptic modality involved here should be understood as "active touch" as defined by Gentaz (Gentaz et al., 2008): "Haptic perception (or active touch) results from the stimulation of the mechanoreceptors in skin, muscles, tendons and joints generated by the manual exploration of an object in space… Haptic perception allows us, for example, to identify an object, or one of its features like its size, shape or weight, the position of its handle or the material of which it is made. A fundamental characteristic of the haptic system is that it depends on contact". Haptics is a perceptual system mediated by two afferent subsystems, cutaneous and kinaesthetic; it depends on the spatio-temporal integration of kinaesthetic and tactile inputs to build a representation of the stimulus, and most typically involves active manual exploration. The purely kinaesthetic proprioceptive perceptual system is a neurosensorial system providing the ability to sense kinaesthetic information pertaining to stimuli originating from within the body itself, even when the subject is blindfolded. More precisely, kinaesthetic proprioception is the subconscious sensation of body and limb movement with the required effort, along with unconscious perception of the spatial orientation and position of body and limbs in relation to each other. Information in this perceptual system is obtained from non-visual and non-tactile sensory inputs such as muscle spindles and joint capsules, the sensory receptors activated during muscular activity, and the somato-vestibular system.

Our aim in this section is to present our methodology for designing an orientation frame comprehensible for both perception and action when performing a 3D orientation task. We want a new frame of parameters whose values can be easily assessed from a perceived orientation and easily set when orienting a rod by hand. Our approach was to seek a system exhibiting three independent and decoupled coordinates when humans perform a planned trajectory rotating a rod about a fixed centre of motion. For that purpose we have carried out an analysis in three parts, given below.
Before tackling this analysis, we provide some background and notation on orientation coordinate systems, namely the quaternions and Euler angles.

2.1. Background on standard orientation coordinate systems

This section gives an insight into the orientation representation systems most frequently used in the field of human-machine interaction, namely the quaternions and Euler systems.

2.1.1. The quaternions

The quaternions were discovered by Hamilton (Hamilton, 1843), who intended to extend the properties of the complex numbers to ease the description of rotations in 3D. A quaternion is a 4-tuple of real numbers related to the rotation angle and the rotation-axis coordinates. Quaternions are free of mathematical singularities and enable simple, computationally efficient, well-conditioned numerical algorithms for solving orientation problems. Quaternions constitute a strong formalization tool; however, they are not an efficient means of performing precise mental rotations. They find many applications, especially in the field of computer graphics, where they are convenient for animating rotation trajectories because they offer the possibility of parameterizing smooth interpolation curves in SO(3), the group of rotations in 3D space (Shoemake, 1985).
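As an illustration, here is a minimal sketch, in Python with NumPy, of how a quaternion encodes an axis-angle rotation and how a smooth interpolation (slerp) between two orientations can be computed; the function names are ours and not taken from any cited work:

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis))

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1, t in [0, 1]."""
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter path on the 4D unit sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to normalised lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    omega = np.arccos(dot)   # angle between the two quaternions
    return (np.sin((1 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

# Interpolate half-way between the identity and a 90-degree turn about Z
q_id = quat_from_axis_angle([0, 0, 1], 0.0)
q_z90 = quat_from_axis_angle([0, 0, 1], np.pi / 2)
print(slerp(q_id, q_z90, 0.5))   # equals a 45-degree rotation about Z
```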

2.1.2. Euler angles

Euler angles are intuitive to interpret and visualize, which is why they are still widely used today. Such a factorization of the orientation helps in analyzing and describing the different postures of the human body. An important problem with Euler angles stems from an apparent strength: they are a minimal representation (three numbers for three degrees of freedom). All minimal parameterizations of SO(3) suffer from a coordinate singularity, which results in the loss of a rotational degree of freedom in the representation, also known as "gimbal lock". Any interpolation scheme based on treating the angles as a vector and using the convex sum will behave badly near the singularity, due to the inherent coupling that exists in the Euler angles there. Euler angles represent an orientation as a series of three sequential rotations from an initial frame. Each rotation is defined by an angle and a single axis of rotation chosen among the axes of the previously transformed frame. Consequently there are twelve different sequences, and each defines a different set of Euler angles. A set of Euler angles is named by giving the sequence of three successive rotation axes, for instance XYZ or ZXZ. The sequences where each axis appears once and only once (XYZ, XZY, YXZ, YZX, ZXY, ZYX) are also named Cardan angles. In particular, the angles of the sequence XYZ are also named roll (rotation about the x-axis), pitch (new y-axis) and yaw (new z-axis). The six remaining sequences are called proper Euler angles. In the present work, particular focus is given to the sequence ZXZ, whose corresponding angles constitute the three-tuple noted (ψ, θ, φ). Angle ψ is called the precession (first rotation, about the Z-axis), θ is the nutation (rotation about the new X-axis) and φ is the self-rotation (last rotation, about the new Z-axis).
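For concreteness, a short sketch of the ZXZ sequence obtained by composing three elementary rotation matrices (a standard construction; the helper names are ours). The final line illustrates the gimbal lock: when θ = 0 the two Z-rotations merge and only the sum ψ + φ matters, so one DOF is lost:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def zxz_to_matrix(psi, theta, phi):
    """ZXZ Euler angles (precession, nutation, self-rotation) -> rotation matrix."""
    return rot_z(psi) @ rot_x(theta) @ rot_z(phi)

# Gimbal lock: with theta = 0, only psi + phi matters,
# so two distinct angle triplets give the same orientation.
print(np.allclose(zxz_to_matrix(0.3, 0.0, 0.5), zxz_to_matrix(0.5, 0.0, 0.3)))  # True
```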

2.2. Neuroscience literature review

This section is dedicated to a review of the neuroscience literature relevant to our purpose of identifying how 3D orientation is encoded in the perceptual and sensorimotor systems. As indicated in figure 1, we have to investigate orientation encoding in three perceptual systems: visual, haptic and proprioceptive. But we also have to consider the cognitive and motor levels, since Wang (Wang et al., 1998) argues that an interface design should accommodate not only the perceptual structure of the task and the control structure of the input device, but also the structure of the motor control system. In the following, the perceptual abilities (vision, haptics, proprioception), together with mental cognition and the motor control system, will be referred to collectively as modalities. We shall first identify a common reference frame for all the modalities.

2.2.1. Common cross-modalities reference frame for orientations

As indicated previously, the reference frame may vary from one perceptual modality to another (Desmurget et al., 1998). Furthermore, numerous studies have reported that for each modality the reference frame can be plastic and adapted to the task to be performed, leading to the conclusion that several encodings of the same object coexist simultaneously. Importantly, the framework of multiple interacting reference frames is considered to be a general principle in the way the brain transforms, combines and compares spatial representations (Y.E. Cohen & Andersen, 2002). In particular, the reference frame can swap between egocentric (intrinsic, or attached to the body) and allocentric (extrinsic to the body). This duality has been observed for the haptic modality (Volcic & Kappers, 2008), visual perception (Gentaz & Ballaz, 2000), kinaesthetic proprioception (Darling & Hondzinski, 1999), mental representation (Burgess, 2006) and motor planning (Fisher et al., 2007; Soechting & Flanders, 1995). It is now a common opinion that egocentric and allocentric reference frames coexist to locate the position and orientation of a target. Most research has found that, whatever the modality, when the studied subjects have a natural vertical stance the allocentric reference frame is gravitational or geocentric: one axis of this allocentric reference frame is aligned with the gravitational vertical, which is a strong reference in human sensorimotor capability (Darling et al., 2008). The allocentric reference frame seems to be common to all modalities, whereas this is not the case for the egocentric frame. It was also found for each modality that, because of the so-called "oblique effect", the 3D reference frame forms an orthogonal trihedron. On a wide variety of tasks, humans perform more poorly when the test stimuli are oriented obliquely than when they are oriented in a horizontal or vertical direction; this anisotropic performance has been termed the "oblique effect" (Essock, 1980). The phenomenon was extensively studied in the case of visual perception (Cecala & Garner, 1986; Gentaz & Tschopp, 2002) and was also brought to light in the 3D case (Aznar-Casanova et al., 2008). The review by Gentaz (Gentaz et al., 2008) suggests the presence of an oblique effect in the haptic system and the somato-vestibular system as well (Van Hof & Lagers-van Haselen, 1994), and the haptic processing of 3D orientations is clearly anisotropic, as in 2D.

In the experiments reported by Gentaz, the haptic oblique effect is observable in 3D in a plane-by-plane analysis: the horizontal and vertical axes in the frontal and sagittal planes, as well as the lateral and sagittal axes in the horizontal plane, are reproduced more accurately than the diagonal orientations, even in the absence of any planar structure during the orientation reproduction phase. The oblique effect is also present at the cognitive level (Olson & Hildyard, 1977), where it is termed the "oblique effect of class 2" (Essock, 1980). The same phenomenon has been reported in the kinaesthetic perceptual system (Baud-Bovy & Viviani, 2004) and in motor control (Smyrnis et al., 2007). According to Gentaz (Gentaz, 2005), the vertical axis is privileged

Figure 2.

Body planes and allocentric reference frame (picture modified from an initial public domain image of the body planes).

because it gives the direction of gravity, and the horizontal axis is also privileged because it corresponds to the visual horizon. The combination of these two axes forms the frontal plane. A third axis is necessary to complete the reference frame, and we follow Baud-Bovy and Gentaz (Baud-Bovy & Gentaz, 2006a), who argue that orientation is internally coded with respect to the sagittal and frontal planes. The third axis, in the sagittal plane, gives the gaze direction when the body is in a straight vertical position (see figure 2). It should also be noticed that when the body is in a normal vertical position, the allocentric and egocentric frames of most modalities are congruent. From now on, as it was found to be common to all modalities, orientations in space will be given with respect to the allocentric reference frame described in figure 2.

2.2.2. Common cross-modalities orientation coordinate system

According to Howard (Howard, 1982), the orientation of a line in 2D should be coded in the visual system as an angle with respect to a reference axis. When considering the orientation of a rod in 3D space, at least two independent parameters are necessary to define an orientation, and it appears from Howard that angular parameters are psychophysically preferred. The analysis of the common allocentric reference frame in the previous section suggests that the orientation encoding system should be spherical. For instance, the elevation-azimuth set of angles is well adapted to encoding the orientation of a rod in the allocentric reference frame of figure 2: the vertical axis constitutes a reference for the elevation angle, and the azimuth angle can be seen as a proximity indicator of an oriented handled rod with respect to the sagittal and frontal planes. It should be noticed that sets of spherical angles can carry different names, but all systems made up of two independent spherical angles are isomorphic; for the first spherical angle we find, for instance, the names elevation, nutation, pitch,… and for the second angle precession, yaw, azimuth,… Soechting and Ross (Soechting & Ross, 1984) demonstrated early on, psychophysically, that the spherical elevation-yaw system of angles is preferred in static conditions for the kinaesthetic proprioceptive perception of arm orientation. Soechting et al. concluded that the same coordinate system is also used in dynamic conditions (Soechting et al., 1986). According to Darling and Miller (Darling & Miller, 1995), perceived orientations of the forearm in the kinaesthetic proprioception modality are preferably coded in spherical coordinates (elevation-yaw) with respect to an egocentric, body-centred reference frame; this frame coincides with the allocentric gravitational frame when the body trunk is in its natural vertical position. The spherical system of orientation encoding is also supported by Baud-Bovy and Gentaz for the haptic perceptual system (Baud-Bovy & Gentaz, 2006b). Another interesting study in the context of the ultrasound scanning application concerns the visual perception of the orientation of a plane surface in 3D space. Gibson (Gibson, 1950) proposed early on that the visual orientation of a surface in space is internally coded in spherical slant-tilt form, which was supported by Stevens' psychophysical experiments (Stevens, 1983). The slant-tilt angle system is a spherical orientation encoding of the vector normal to the plane, and is exactly the same as the elevation-azimuth system. From the previous discussion it can be stated that the orientation of a rod in 3D space is coded in spherical coordinates made up of two angles. But the orientation of a complete frame of three axes requires at least three parameters, and for the consistency of the coordinate system the third parameter should preferably be an angle. To our knowledge, very few psychophysical or neurophysiological studies have been carried out to identify a full set of three angles coding the orientation of a frame in space. In the proprioceptive kinaesthetic context, Darling and Gilchrist (Darling & Gilchrist, 1991) confirm the finding of Soechting and Ross (Soechting & Ross, 1984) that the elevation and yaw angles are part of the preferred DOF system for hand orientation.
They also suggest from their experimental results that the roll angle of the ZXY Cardan system could constitute the third preferred DOF defining a complete orientation of the hand. This suggestion was contradicted by Baud-Bovy and Viviani (Baud-Bovy & Viviani, 1998), who showed that the last angle of the ZXY Cardan system is strongly correlated with both of the first angles of that system and also with the reaching length. This result suggests that the six sets of Cardan angles are improper for coding the orientation of a frame in a biomimetic way.

2.2.3. Discussion for orientation coding system design

This literature review establishes that a psychophysically and sensorimotorically adapted coordinate system for encoding the orientation of a rod in 3D space should be made up of a set of two spherical angles expressed with respect to an allocentric gravitational reference frame. The quaternions of Hamilton, while elegant and efficient for interpolating orientations, therefore do not seem to be the most appropriate orientation coding system with regard to human psychophysical abilities. On the other hand, even the most recent research in the field is unable to identify the third degree of freedom necessary to define completely the orientation of a frame of three axes. This failure is probably due to the fact that this third DOF may depend on the task to be performed and on the kinematic postures of the acting arm and wrist during this task. Indeed, the singularity arising in a minimal coding system may be incompatible with the task to be performed. For the task of handling a rod, a natural axis is given by the direction of the rod itself; hence a spherical coordinate system such as (nutation, precession) should be used to code the orientation of the rod. Concerning singularities, it should be noticed that when orienting a rod the spherical coordinate system exhibits intrinsic singularities: when the nutation reaches 0 or π radians, the precession angle is undetermined, which may lead to discontinuities in this angle. However, since there seems to be a consensus in favour of the spherical coordinate system in the neurosciences, it should be considered that those singularities truly reflect the human functioning mode, and a biomimetic 3D orientation coding system should probably also exhibit this kind of singularity.

Figure 3.

a) Ultrasound plane generated by the transducer and (b) corresponding generated image during a medical ultrasound examination (computer-generated image). In (c), a real ultrasound slice of the hepatic vein.

When considering the ultrasound scanning application, the rod is in fact the transducer generating an ultrasound (US) plane (figure 3), and a three-axis frame should be attached to the US plane to define its 3D orientation in space. The full 3D orientation description is indeed important in the case of an ultrasound plane, since the notions of right and left of the plane have a meaning in this application: when the transducer is rotated around its own axis by an angle of π radians, the generated US plane geometrically remains the same plane in space, but the image obtained does not remain the same; right and left are inverted. In the remainder of this chapter, the axes of the frame attached to the plane are arranged as given in figure 3, with axis Z chosen to correspond to the longitudinal axis of the transducer. Among the Euler angle systems excluding the Cardan systems, only the sets ZXZ and ZYZ offer spherical angles to code the direction of vector Z. Those two sets are perfectly equivalent, and only the ZXZ system will draw our attention in the forthcoming sections. The next section provides a deeper experimental analysis of this system with respect to the sonography application.

2.3. Experimental correlation analysis

This section's aim is to study the coupling among the angles of the ZXZ Euler system defining the orientation of the US plane when a medical expert performs a sonography examination on a real patient. This frame is preferred in the robotized tele-echography context (Courreges et al., 2005; Gourdon et al., 1999; Vilchis et al., 2003) because, among existing standard frames, it was found to best suit the mobilities required during a US examination according to medical specialists (Gourdon et al., 1999). Recall that the DOF of the ZXZ Euler system are the triplet of angles (ψ, θ, φ), where ψ is the precession, θ the nutation and φ the self-rotation. We set up an experimental protocol in order to assess the dependencies among the degrees of freedom of the ZXZ Euler system and to analyze the task to be performed by a medical tele-sonography robot. For that purpose we captured the 6-DOF movements of a real US specialist performing an abdominal examination of a healthy patient. The acquisition duration is about 5 minutes. During this examination the ultrasound (US) probe trajectories were captured and recorded using a 6D magnetic localization sensor ("Flock of Birds") mounted on the US probe. This kind of examination is frequently performed in routine and especially in emergency situations. The trajectories applied to the US probe by any expert would be roughly the same, since these gestures come from the learning of recommended medical practices and not only from individual experience, and are also subject to the kinematic limitations of the human hand (Tempkin, 2008). To better identify the correlation between angles, and to be independent of the angle ranges and rollover, we considered the angular velocities.

2.3.1. Correlation in the angles

Figure 4.

a) Phase plot of φ̇ versus ψ̇ and (b) correlation measures mc.

To emphasize the dependencies among the Euler angles for this kind of application, we analyzed the phase plots of each angle derivative versus the others (Courreges et al., 2008b). A correlation measure is easily obtained by taking the absolute value of the Pearson correlation coefficient (J. Cohen et al., 2002). Let us name this measure mc; it is null for uncorrelated signals and equal to 1 when the signals are linearly dependent. Figure 4b reports the correlation measures. From the phase plots and correlation measures one can conclude that ψ̇ and θ̇ are uncorrelated, φ̇ and θ̇ are also uncorrelated, but ψ̇ and φ̇ are strongly correlated (see also figure 4a). Consequently it is clear that the ZXZ Euler system is not perfectly suited to this application, as it cannot provide decoupled DOF to describe the US scanning task.
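For illustration, a sketch of how such a measure can be computed from sampled angle trajectories; the synthetic signals below merely stand in for the recorded data, which are not reproduced here:

```python
import numpy as np

def mc(a, b):
    """Correlation measure: absolute value of the Pearson coefficient between
    two equally sampled signals (0 = uncorrelated, 1 = linearly dependent)."""
    return abs(np.corrcoef(a, b)[0, 1])

# Synthetic stand-ins for the recorded Euler angle trajectories (rad), sampled at dt
dt = 0.01
t = np.arange(0.0, 5.0, dt)
psi, theta = np.sin(t), np.cos(2 * t)
phi = 0.9 * np.sin(t) + 0.05 * np.random.randn(t.size)   # strongly coupled with psi

# Angular velocities, to be independent of angle ranges and rollover
psi_d, theta_d, phi_d = (np.gradient(x, dt) for x in (psi, theta, phi))
print(mc(psi_d, theta_d), mc(phi_d, theta_d), mc(psi_d, phi_d))  # last value near 1
```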

2.3.2. Data analysis

These experimental data show that the spherical coordinates (ψ, θ) are uncorrelated DOF, in agreement with the preceding neuroscience literature review. The data also clearly reveal that the ZXZ Euler system exhibits a strong correlation between the precession and self-rotation angles: according to figure 4(a), applying a variation to angle ψ induces a nearly proportional variation of angle φ. Since this proportionality applies whatever the value of θ, the correlation of the angles is not related to the singularity of this Euler system (at θ = 0). We therefore conclude that the standard ZXZ Euler system is not the most appropriate system for representing the rotation directions privileged by humans when handling a rod. This analysis shows the need for a new, non-standard frame capable of providing decoupled DOF for this kind of task. Since, according to figure 4, the ψ and φ angles are strongly correlated, a principal component analysis (PCA) (Joliffe, 2002) of the phase plots of the moves expressed in the ZXZ Euler system should provide decorrelated DOF. Indeed we can define a new coordinate system by using the Karhunen-Loève transform (Loève, 1978), which provides a very good decorrelation of the DOF. Let us name this DOF triplet (α, θ, β). According to the PCA, θ is the same as the Euler nutation, while α and β are linear combinations of the Euler angles ψ and φ (see equation (E1)). Although this transformation is simple, the PCA-based system does not provide meaningful variables: α and β are not intuitive for hand-eye coordination. Moreover, this transformation is optimal only for the particular conditions of this experiment and may not be appropriate in other circumstances.

$$\begin{cases} \alpha = (\psi - \varphi)/2 \\ \beta = (\psi + \varphi)/2 \end{cases} \tag{E1}$$
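A brief sketch (with synthetic data standing in for the recorded velocities) showing how the Karhunen-Loève basis is recovered from the covariance of the velocities and why the combinations of equation (E1) decorrelate the DOF:

```python
import numpy as np

# Synthetic, strongly correlated velocities standing in for the recorded psi_dot, phi_dot
rng = np.random.default_rng(0)
psi_d = rng.standard_normal(3000)
phi_d = psi_d + 0.1 * rng.standard_normal(3000)

cov = np.cov(np.vstack([psi_d, phi_d]))   # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # Karhunen-Loeve basis = eigenvectors
print(eigvecs)  # columns close to (1, -1)/sqrt(2) and (1, 1)/sqrt(2), as in (E1)

alpha = (psi_d - phi_d) / 2               # decorrelated DOF of equation (E1)
beta  = (psi_d + phi_d) / 2
print(abs(np.corrcoef(alpha, beta)[0, 1]))  # near zero
```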

2.4. Sonography practice analysis

To obtain enough information to build a complete orientation coordinate system, we studied the practice of sonography. More specifically, we analysed the way a 3D rotation is decomposed into simpler moves for pedagogical purposes when teaching the technique of medical US scanning. A US transducer works by generating a planar wave of ultrasound. Waves reflected by the tissues are measured by the probe along with their time of flight, which makes it possible to build a map of the density of the tissues (fig. 3c). Hence a medical expert has to think in terms of rotating a plane in 3D space to visualize the desired slice of the patient's body. In fact, sonographers are used to describing their scan orientation by reference to three basic rotations (Tempkin, 2008; Block, 2004): probe angulation, probe rocking (fig. 5) and self-rotation. In standard medical practice the examination is executed in two phases combining these three basic rotations. First, an initial incidence is chosen for the ultrasound plane by combining probe angulation and probe rocking so as to perform a narrow sweep of the scanned organ; this first move is intended to grossly identify lesions or cysts. The second phase consists in rotating the US plane around the probe axis so as to identify small structures such as tumours or traumas and precisely locate their extent. A bio-inspired orientation frame should exhibit this same combination of movements. The professional field of medical sonography thus provides practical guidelines for manoeuvring the orientation of a probe in 3D space. Although the conclusions of this analysis are related to the specific field of sonography, it is interesting to notice that the practical rules for this task are consistent with the preceding neuroscience conclusions: the recommended movements of probe angulation and probe rocking correspond exactly to the plane slant-tilt rotations indicated in section 2.2.1. Moreover, this analysis provides a complete and intuitive combination of hand movements to set the full 3D orientation of a frame in space, since this combination has been practised and taught for ages in the field of sonography. The next section takes advantage of this analysis to propose a new frame of orientation description with a set of three angles satisfying human preferences during intentional orientation tracking. It is reasonable to think that this new frame could be satisfactory not only for sonography-related applications but also for any other task implying rotations of a rod about a fixed centre of motion.

Figure 5.

Two basic moves in medical US scanning. (a) Probe angulation, used for organ sweeping (in this illustration, the moves of the US probe remain in the sagittal plane). (b) Probe rocking, used to extend the scanning plane.


3. H-angles, a new attitude coordinate system

We develop here our methodology for designing a new frame of angles satisfying the criteria of the previous analyses; we call this frame the H-angles. We provide the transformations from the H-angles to the rotation matrix and, inversely, from the rotation matrix to the H-angles. We also conduct an exhaustive analysis of the singularities of the rotation matrix. The section concludes with the excellent decorrelation results brought by our new system compared with the Euler system. In the following, all angles are expressed in radians.

3.1. Rotations combinations

From the previous analysis we propose a new frame of angles, which we name H-angles and denote (ψn, θn, φn), parameterising an orientation obtained by a sequence of two consecutive rotations, as in medical practice. Let us introduce some notation: let R0 = (O, X0, Y0, Z0) be the fixed main reference frame with centre O, axes (X0, Y0, Z0) and basis B0 = (x0, y0, z0). The basis obtained by the first transform applied to B0 is denoted B1 = (x1, y1, z1); the frame with basis B1 and origin O is noted R1. The first movement is a compound rotation. According to the previous conventions, the moving vector z gives the direction of the handled rod and vector x is normal to the moving plane, corresponding to the US plane in sonography. This first rotation has two main functions:

  1. defining vector z1 by its nutation θn ∈ [0; π] and precession ψn ∈ ]−π; π], which is consistent with the neuroscience requirements depicted in §2.2.1;

  2. forcing vector y1 to stay in the plane (z1 O y0), so as to constrain the first move to be only a combination of probe angulation and probe rocking, as in medical practice (§2.4). This constraint implies x1 · y0 = 0.

Given the previous constraints, the orientation of basis B1 is not totally determined, and the sign of the dot product x1 · x0 must be defined according to the kind of application. In order to obtain a transformation with the minimum rotation angle (for minimum rotation effort), we have chosen to set sign(x1 · x0) = sign(cos(θn)). Notice that the sign of cos(θn) indicates in which hemisphere vector z1 lies. Hence for the typical application of rotating a rod about a fixed centre of motion, with its workspace located in the northern hemisphere of the orientation space, we have sign(x1 · x0) ≥ 0, indicating that the angle (x1, x0) is acute. Figure 6 provides a graphical overview of this first move, where the origin of angle ψn has been chosen in analogy with the precession of the ZXZ Euler angles.

Figure 6.

First movement from B0 to B1. Vectors z1, y0 and y1 are in the same plane.

The second transform is a simple rotation about vector z1 by the angle φn ∈ ]−π; π], which we name "self-rotation". Setting the same values for the precession and nutation angles in the ZXZ Euler system and in the proposed H-angles system yields the same position for vector z1; hence the difference resides in the self-rotation angle and in the directions of vectors x and y. This modification of the self-rotation can be seen as an anticipation of the hand movement, with θn and ψn as inputs of the anticipator.

3.2. Rotations matrices

As indicated in the previous section, the proposed orientation description system decomposes into two sequential rotations. For each of these rotations we can write its rotation matrix and then multiply the matrices to express the global rotation matrix M. Let M1 denote the rotation matrix of the first rotation, and let (zx, zy, zz) be the components of vector z1 in basis B0. We have:

$$z_x = \sin\theta_n \sin\psi_n, \qquad z_y = -\sin\theta_n \cos\psi_n, \qquad z_z = \cos\theta_n \tag{E2}$$

The components of vectors x1 and y1 can then be expressed as functions of (zx, zy, zz), and we derive the following expression of matrix M1:

$$M_1 = \begin{bmatrix} \dfrac{z_z}{\sqrt{z_x^2+z_z^2}} & \dfrac{-z_x z_y}{\sqrt{z_x^2+z_z^2}} & z_x \\[2mm] 0 & \sqrt{z_x^2+z_z^2} & z_y \\[2mm] \dfrac{-z_x}{\sqrt{z_x^2+z_z^2}} & \dfrac{-z_y z_z}{\sqrt{z_x^2+z_z^2}} & z_z \end{bmatrix} \tag{E3}$$

For the second rotation, let M2 denote the standard rotation matrix operating a rotation of magnitude φn about vector z1 in frame R1. The global rotation matrix M in frame R0 can then be computed as M = M1·M2, which leads to the following expression of M as a function of the H-angles:

$$M = \begin{bmatrix}
\dfrac{\cos\theta_n\cos\varphi_n + \tfrac{1}{2}\sin^2\theta_n\sin(2\psi_n)\sin\varphi_n}{\sqrt{1-\sin^2\theta_n\cos^2\psi_n}} &
\dfrac{-\cos\theta_n\sin\varphi_n + \tfrac{1}{2}\sin^2\theta_n\sin(2\psi_n)\cos\varphi_n}{\sqrt{1-\sin^2\theta_n\cos^2\psi_n}} &
\sin\theta_n\sin\psi_n \\[2mm]
\sin\varphi_n\sqrt{1-\sin^2\theta_n\cos^2\psi_n} &
\cos\varphi_n\sqrt{1-\sin^2\theta_n\cos^2\psi_n} &
-\sin\theta_n\cos\psi_n \\[2mm]
\dfrac{-\sin\theta_n\left(\sin\psi_n\cos\varphi_n - \cos\theta_n\cos\psi_n\sin\varphi_n\right)}{\sqrt{1-\sin^2\theta_n\cos^2\psi_n}} &
\dfrac{\sin\theta_n\left(\sin\psi_n\sin\varphi_n + \cos\theta_n\cos\psi_n\cos\varphi_n\right)}{\sqrt{1-\sin^2\theta_n\cos^2\psi_n}} &
\cos\theta_n
\end{bmatrix} \tag{E4}$$
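For clarity, the two-step construction can also be implemented directly, building M as M1·M2 from equations (E2) and (E3) rather than expanding equation (E4); this is a sketch with our own function names:

```python
import numpy as np

def h_angles_to_matrix(psi_n, theta_n, phi_n):
    """Rotation matrix of the H-angles (psi_n, theta_n, phi_n), outside the
    singular case theta_n = pi/2 with psi_n = 0 or pi (see equation (E5))."""
    # Components of z1 in B0, equation (E2)
    zx = np.sin(theta_n) * np.sin(psi_n)
    zy = -np.sin(theta_n) * np.cos(psi_n)
    zz = np.cos(theta_n)
    n = np.sqrt(zx**2 + zz**2)        # vanishes only at the singularity
    # First rotation M1, equation (E3): columns are x1, y1, z1
    m1 = np.array([[ zz / n, -zx * zy / n, zx],
                   [ 0.0,     n,           zy],
                   [-zx / n, -zy * zz / n, zz]])
    # Second rotation: self-rotation phi_n about z1, i.e. Rz(phi_n) in frame R1
    c, s = np.cos(phi_n), np.sin(phi_n)
    m2 = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    return m1 @ m2

m = h_angles_to_matrix(0.4, 0.6, -1.1)
print(np.allclose(m @ m.T, np.eye(3)), np.isclose(np.linalg.det(m), 1.0))  # True True
```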

As a first analysis, we can see that matrix M is not defined when θn = π/2 and ψn = 0 or π. When these conditions are met, vectors x1 and y1 are undetermined. M can then be rewritten as a function of the components of vector x1 in basis B0, since we have x1 = (xx, 0, xz), zx = zz = 0 and zy = −cos(ψn) = ±1. We find:

$$M = \begin{bmatrix}
x_x\cos\varphi_n - x_z\cos\psi_n\sin\varphi_n & -x_z\cos\psi_n\cos\varphi_n - x_x\sin\varphi_n & 0 \\
0 & 0 & -\cos\psi_n \\
x_z\cos\varphi_n + x_x\cos\psi_n\sin\varphi_n & -x_z\sin\varphi_n + x_x\cos\psi_n\cos\varphi_n & 0
\end{bmatrix} \tag{E5}$$

The values of xx and xz can be context-dependent. In practical applications of rotating a rod about a fixed centre of motion, the case θn = π/2 is a limit that is hardly reachable. Several possible reasons can be found for this:

  1. the application itself exhibits bounds that prevent θn from reaching such a limit, as in the robotised tele-sonography application (Courreges et al., 2008a);

  2. or, more simply, the centre of motion may be on a plane surface, and this surface prevents the hand from reaching this limit nutation.

3.3. Expression of the H-angles (ψn, θn, φn) from the rotation matrix M components; singularity analysis

3.3.1. Extraction of the H-angles from the matrix outside singularities

The component of matrix M at row i and column j is noted mij. From equation (E4) we find:

$$\theta_n = \arccos(m_{33}) \tag{E6}$$

Outside singular configurations we find (the sign in (E7) follows from equation (E2), where m23 = −sin θn cos ψn):

$$\psi_n = \operatorname{atan2}(m_{13}, -m_{23}) \tag{E7}$$

$$\varphi_n = \operatorname{atan2}(m_{21}, m_{22}) \tag{E8}$$

where "atan2" is the two-argument arctangent function, which determines the quadrant of the angle on the trigonometric circle.

3.3.2. Singularities analysis

Two types of singularities can be identified and are studied hereafter.

  1. When m13 = 0 and m23 = 0 simultaneously. From equation (E4) this implies sin θn = 0, hence θn = 0 or π: this is the intrinsic singularity of the spherical coordinates discussed in §2.2.3, where the precession ψn is undetermined. Angle φn remains given by equation (E8).

  2. When m21 = 0 and m22 = 0 simultaneously. When this singularity is met, angle φn is undetermined by equation (E8). This situation implies that zz and zx are null simultaneously, hence θn = π/2 and ψn = 0 or π, and we meet the case where vector x1 is undetermined. When matrix M is rewritten according to equation (E5), one can find:

$$\varphi_n = \operatorname{atan2}\!\left(-(x_x m_{12} + x_z m_{32}),\; x_x m_{11} + x_z m_{31}\right) \tag{E10}$$

Components xx and xz can be chosen freely according to the context, but must satisfy $x_x^2 + x_z^2 = 1$.
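Putting equations (E6)-(E8) and (E10) together, a sketch of the inverse transform follows; the tolerance eps and the default choice of (xx, xz) are our assumptions:

```python
import numpy as np

def matrix_to_h_angles(m, xx=1.0, xz=0.0, eps=1e-9):
    """H-angles (psi_n, theta_n, phi_n) from a rotation matrix m.
    (xx, xz), with xx**2 + xz**2 = 1, resolves the singular case of equation (E5)."""
    theta_n = np.arccos(np.clip(m[2, 2], -1.0, 1.0))              # (E6)
    if abs(m[1, 0]) < eps and abs(m[1, 1]) < eps:
        # Singularity of type 2: theta_n = pi/2, psi_n = 0 or pi
        psi_n = 0.0 if m[1, 2] < 0.0 else np.pi                   # m23 = -cos(psi_n)
        phi_n = np.arctan2(-(xx * m[0, 1] + xz * m[2, 1]),
                           xx * m[0, 0] + xz * m[2, 0])           # (E10)
    else:
        # Type 1 (theta_n = 0 or pi) leaves psi_n undetermined; any value is valid there
        psi_n = np.arctan2(m[0, 2], -m[1, 2])                     # (E7)
        phi_n = np.arctan2(m[1, 0], m[1, 1])                      # (E8)
    return psi_n, theta_n, phi_n

# Round-trip check, assuming h_angles_to_matrix from the previous sketch is in scope:
# print(matrix_to_h_angles(h_angles_to_matrix(0.4, 0.6, -1.1)))  # ~ (0.4, 0.6, -1.1)
```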

3.4. Decorrelation results

We have computed the velocities of the new H-angles attitude system for the same medical trajectory as in section 2.3 with the ZXZ Euler angles (Courreges et al., 2008b). As one could expect from the definition of the new system, the plot of ψ̇n versus θ̇n is unchanged compared with the plot of ψ̇ versus θ̇ for the ZXZ Euler velocities; these angles are therefore still uncorrelated in our new system. We obtained a good decorrelation between φ̇n and ψ̇n, with a low coefficient mc = 0.0116, a great improvement over the Euler system. The correlation mc = 0.1552 between φ̇n and θ̇n is relatively higher than between the homologous variables of the Euler system; however, this value remains low enough to consider the angles uncorrelated. To quantify the decorrelation improvement we can compute the average correlation coefficient of each system of angles. Our new system exhibits an average correlation m̂cn = 0.057, whereas the ZXZ Euler system exhibits an average correlation m̂cE = 0.339 (figure 4b). Consequently our new system provides a decorrelation improvement of more than 83% with respect to the average correlation measure. For comparison, the average correlation measure of the PCA-based system of equation (E1) is m̂cKL = 0.0302, which is much closer to our new system than to the Euler system.


4. Application: ergonomic mouse based interface for 3D orientation in robotised tele-sonography

We show in this section how to exploit our new frame of angles to make the standard IT mouse an efficient device for piloting the 3D orientation of a rod. We have tested this technique in the context of the robotised telesonography application. The first subsection draws up design requirements for 3D rotation techniques with a mouse. The second subsection provides a short overview of existing techniques for setting a 3D orientation with a mouse, focusing on the evaluation and comparison of these techniques with respect to the design principles stated in the first subsection. The third subsection presents our approach to exploiting the new frame of angles. The fourth subsection describes the chosen psychophysical experimental protocol, along with quantified results.

4.1. Design recommendations for 3D rotations techniques with a mouse

From their experience, Bade et al. (Bade et al., 2005) promulgated four general principles as crucial for predictable and pleasing rotation techniques (principles 1-4 below); we add a fifth principle drawn from the work of Hinckley:

  1. Similar actions should provoke similar reactions: the same mouse movement should not result in varying rotations.

  2. The direction of rotation should match the direction of the 2D pointing device movement.

  3. 3D rotation should be transitive: the rotation technique must not exhibit hysteresis. In other words, to one pointing location of the interface should correspond one and only one 3D orientation, whatever the trajectory ending at that location.

  4. The control-to-display ratio should be customizable: tuneable parameters must be available to find the best compromise between speed and accuracy according to the task and user preferences; this is crucial for performance and user satisfaction.

  5. The input interface should allow the setting of an orientation by an integrated manipulation: Hinckley showed (Hinckley et al., 1997) that the mental model of rotation is an integral manipulation, as opposed to a separable manipulation as defined by Jacob (Jacob R.J.K. et al., 1994). From a practical point of view, the input interface should enable a simultaneous variation of each degree of freedom of the rotation.

4.2. Overview of 3D rotations techniques with a mouse and evaluations

This field of research has not evolved much over the last decade, and works related to the use of the computer mouse to set an orientation in 3D belong to the fundamental research topic known as "2D interface for 3D orientation". The following review therefore covers techniques in which the mouse is considered as an input device with only two DOF, omitting the input of the mouse wheel. The best-known and most popular techniques (Chen et al., 1988) are based on the virtual trackball principle: the object to rotate is surrounded by a virtual sphere fixed to the object (though the sphere may not always be displayed), and the object is rotated by operating the virtual sphere with the mouse pointer. The common principle of these techniques for generating a rotation is to let the user select two locations with the mouse pointer. The first position is validated by a mouse click and remains constant until the next click; the second position can be moving. The two points are mapped onto the virtual sphere, and the projected points define an arc on a great circle. The angle of rotation is the aperture angle of the arc viewed from the sphere centre, and the axis of rotation is perpendicular to the plane formed by the centre of the sphere and the arc. Virtual-trackball-like techniques are preferred among existing techniques with 2D input devices because they enable users to perform faster in both rotation and inspection tasks (Jacob I. & Oliver, 1995). From Henriksen (Henriksen et al., 2004): "Virtual trackballs allow rotation along several dimensions simultaneously and integrate controller and the object controlled, as in direct manipulation. The main drawback of virtual trackballs is a lack of thorough mathematical description of the projection from mouse movement onto a rotation." This class of techniques comprises the Virtual Sphere of Chen (Chen et al., 1988), the Arcball of Shoemake (Shoemake, 1992), Bell's virtual trackball (Henriksen et al., 2004), the two-axis valuator (Chen et al., 1988) and the two-axis valuator with fixed up-vector (Bade et al., 2005). These techniques essentially differ in their plane-to-sphere projection. Few experimental comparisons and evaluations of these techniques have been proposed in the literature. Bade et al. (Bade et al., 2005) presented a tabular comparison of these techniques (excluding Chen's Virtual Sphere) with respect to the first four principles of the preceding section 4.1. We propose hereafter an extended comparison table (table 1 below).

Table 1.

Comparison of state of the art rotation techniques with respect to the five design principles.

For Chen's Virtual Sphere we set principle 4 as satisfied, since the ratio between the sphere radius and the radius of the mouse's workspace on the table can be tuned, which affects the velocity/precision compromise. Shoemake's Arcball renounces this principle in order to satisfy principle 3 (no hysteresis) (Hinckley et al., 1997). We set principle 5 as unsatisfied for every technique, despite the preceding quote from Henriksen, because once a first point is selected with the mouse the possible rotation axes are constrained to lie in a bounded space defined by the selected position; hence not every rotation can be performed within a single smooth hand movement. This is supported by Hinckley (Hinckley et al., 1997), who argues that in practice both the Virtual Sphere and the Arcball require the user to achieve some orientations by composing multiple rotations, each initiated by a cursor repositioning and mouse click, which breaks the movement smoothness. The study by Hinckley did not provide evidence that the Arcball performs any better than the Virtual Sphere in either accuracy or completion time. The main usability problem of virtual trackballs compared with free-hand input devices was that users were unsure about the difference between being inside and outside the virtual sphere. The experiments of Bade et al. (Bade et al., 2005), combining inspection and rotation tasks, revealed that users perform significantly faster with the two-axis valuator technique, which was perceived as more predictable and comfortable for task completion than the other trackball techniques. Bade et al. also note that these results were expected, as the two-axis valuator fulfils most of the design principles. In these experiments Shoemake's Arcball arrives in second position, outstripping Bell's virtual trackball and the two-axis valuator with fixed up-vector. A strong drawback of all these techniques is their failure to satisfy principle 5, which makes them much slower than the natural rotation of an object by hand in 3D free space (Hinckley et al., 1997; Pan, 2008). The method presented hereafter satisfies all five design principles within a large continuous range of orientations.
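As an illustration of the common trackball principle described above, here is a sketch of one classical plane-to-sphere mapping, close to Bell's variant (the cited techniques differ precisely in this mapping, so this is only one representative choice):

```python
import numpy as np

def to_trackball(p, radius=1.0):
    """Map a 2D point (normalised screen coordinates, origin at the sphere centre)
    onto a virtual sphere; outside the sphere's silhouette, Bell's variant uses a
    hyperbolic sheet so the mapping stays continuous."""
    x, y = p
    d2 = x * x + y * y
    if d2 <= radius * radius / 2.0:
        z = np.sqrt(radius * radius - d2)          # on the sphere
    else:
        z = radius * radius / (2.0 * np.sqrt(d2))  # on the hyperbola
    v = np.array([x, y, z])
    return v / np.linalg.norm(v)

def rotation_between(p0, p1):
    """Axis and aperture angle of the arc joining the projections of p0 and p1."""
    v0, v1 = to_trackball(p0), to_trackball(p1)
    axis = np.cross(v0, v1)                        # normal to the plane of the arc
    angle = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
    return axis, angle

axis, angle = rotation_between((0.0, 0.0), (0.3, 0.1))
print(axis, np.degrees(angle))
```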

4.3. Angles coding in setting a rod attitude with a mouse

To ease the hand-eye coordination of the operator, interface design must ensure that the operator can easily assess the orientation changes produced when moving the input device by hand (Wolpert & Ghahramani, 2000). That is why we have chosen a biomimetic approach for orientation control with a mouse. As discussed previously, the attitude of a US probe is defined by a sequence of two movements, the first of which sets the nutation and precession of the probe axis. This observation leads us to state that humans have the sensorimotor ability to easily control the nutation and precession of a rod. In fact, defining the precession and nutation of a constant-length rod is the same task as placing a point representing the top end of that rod on a sphere. The sphere radius is the length of the rod, in our application the US probe length, and the sphere centre is the probe bottom tip. In a telesonography application, only the northern hemisphere is to be considered. It is possible to make a mental bijective transform from the orientation hemisphere to the mouse plane; however, such a projection is not unique (Kennedy & Kopp, 2001). To make a proper choice of a particular sphere-to-plane projection, the human sensorimotor abilities must be taken into account so as to maintain decoupled DOF with respect to the nutation and precession. We have chosen the class of projections that preserve the precession angle, namely the azimuthal projections, which project the sphere on a tangent plane. The chosen tangent point is the North Pole, which defines the all-important vertical axis (see §2.2.1), because such a transform generates less distortion around the tangent point. This choice also allows the mouse control of the probe to be visualized from an overhead view (fig. 7b), where the origin is the tangent point between the orientation sphere and the plane. This transform guarantees hand-eye coordination, since it establishes one-to-one decoupled relations between the orientation DOF of probe nutation-precession and the visually and kinaesthetically perceived polar coordinates of the mouse: the precession grows linearly with the polar angle of the mouse, and the nutation is an increasing function of the distance from the mouse to the origin. To determine the projection completely, we have to set the perspective point. It has been reported that, because it is cognitively preferred, the path adopted by the hand when moving on a plane from an initial position to a target position is a fairly straight line (Sergio & Scott, 1998; Desmurget et al., 1999). Consequently, a sphere-to-plane projection that preserves orthodromy should be preferred: the shortest path between two points on the sphere, which is a great circle, should map to a straight line on the plane. Such a projection is the gnomonic projection, whose projection centre is at the centre of the sphere (see figure 7a). Despite the drawback that this sphere-to-plane transform sends a nutation of π/2 radians to infinity, it is well suited to the tele-sonography application for routine examinations: we have shown in a previous work that the nutation remains lower than π/4 radians during 95% of the examination time in routine abdominal US scanning (Courreges et al., 2008a). To rotate the probe around its own axis and define the self-rotation angle, the mouse scrolling wheel is used.

Figure 7.

a) Gnomonic projection: each point of the northern hemisphere is projected on the azimuthal plane along a radius originating from the sphere centre. (b) Use of a computer mouse as a telerobotic control interface to set the frame of H-angles.

This point can be a limitation, since a computer mouse wheel generally moves in discrete steps. Hence a compromise has to be adopted when choosing the increment factor converting wheel increments to angle increments: a large factor allows fast driving but reduces accuracy, whereas a small factor does the contrary. This factor has to be chosen according to the total number of increments of the mouse wheel and the application needs. The representation proposed in figure 7b makes the virtual mouse-controlled probe behave as if it were of variable length: the bottom tip of this virtual probe is fixed, and the top corresponds to the mouse position on the plane, as shown in figure 7b. The experiment presented below revealed that operators could easily adapt to a virtual probe of variable length (in the range employed for the nutation in this experiment), since it does not affect the attitude of the probe. To operate the simulated robot, the first step is to fix an origin for the mouse pointer. This origin position corresponds to a null calibration of every angle, with the probe in vertical position. The mouse directly controls the angles of the chosen frame: (ψn, θn, φn) for the new bio-inspired H-angles system, or (ψ, θ, φ) for the Euler angles. The following equations use the new angles but are the same with the Euler angles. Let us define the notations used in figure 7b:

Δx: displacement along the X axis of the mouse from its origin position. Δy: displacement along the Y axis of the mouse from its origin position.

L: length of the projection of the mouse-controlled probe in the XY plane.

H0: minimal length of the virtual variable-length mouse-controlled probe. H0 is a tunable parameter setting the control sensitivity of angle θn, so that the preceding design principle 4 (control-to-display ratio) can be satisfied.

ΔWinc: variation of the mouse wheel increments from the origin calibration position. Ki_a: conversion factor from mouse wheel increments to an angle in radians. This factor is tunable by the user and also contributes to satisfying design principle 4.

Using the previous definitions and according to figure 7b, the expressions of the orientation angles as a function of the mouse inputs are:

$$\theta_n = \arctan\!\left(\frac{L}{H_0}\right) \tag{E11}$$
$$\psi_n = \pi/2 + \operatorname{atan2}(\Delta x, \Delta y) \tag{E12}$$
$$\varphi_n = K_{i\_a} \cdot \Delta W_{inc} \tag{E13}$$

One can notice that for small incremental displacements of the mouse, the mental transform from sphere to plane is lightened, since a spherical surface is locally well approximated by a plane if the constant H0 is taken large enough compared with the variations of L. By construction, and by exploiting the mouse wheel, our approach fulfils all five design principles of section 4.1. In particular, principle 5 is enforced by the fact that there is no need for multiple mouse-button clicks to change an orientation. However, in its current form our system does not allow all orientations to be reached: a nutation of π/2 radians cannot be attained. Typically we restricted the nutation to lie within the range [0; π/4] radians.
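A sketch of the complete mouse-to-angles mapping of equations (E11)-(E13); the parameter values, the millimetre scaling of the mouse displacements, and the clamping to π/4 (reflecting the restriction mentioned above) are our assumptions:

```python
import numpy as np

H0 = 60.0                  # minimal virtual probe length (mm): theta_n sensitivity (principle 4)
KI_A = np.radians(10.0)    # wheel step -> self-rotation increment (10 degrees here)
THETA_MAX = np.pi / 4      # nutation restricted to [0, pi/4], as in the experiments

def mouse_to_h_angles(dx, dy, wheel_inc):
    """Mouse state -> (psi_n, theta_n, phi_n).
    dx, dy: displacements (mm) from the calibrated origin; wheel_inc: wheel steps."""
    L = np.hypot(dx, dy)                          # projection of the virtual probe
    theta_n = min(np.arctan(L / H0), THETA_MAX)   # (E11), clamped
    psi_n = np.pi / 2 + np.arctan2(dx, dy)        # (E12)
    phi_n = KI_A * wheel_inc                      # (E13)
    return psi_n, theta_n, phi_n

print(mouse_to_h_angles(20.0, -15.0, 3))
```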

4.4. Experimental assessment protocol

The experimental setup is made to resemble the actual tele-echography setting that would be used in real conditions with a mouse as interface, as described in the previous section. It consists of a PC workstation displaying in 3D a simulated tele-echography robot handling a bright green probe, whose end-effector orientation is controllable by the computer mouse (fig. 8a). The robot simulator was built in a Windows XP environment using OpenGL and Microsoft Visual C++. It accurately simulates the design and mobility of an actual OTELO tele-echography robot (Delgorge et al., 2005) (see figure 8). The view chosen for the experiments (shown in fig. 8a) is in accordance with the actual scenario of a tele-echography operation (Canero et al., 2005). In figure 8a, the red rings on the right side of the screen, which look like a target, serve as a guide for moving the mouse: their centre is chosen as the origin for the mouse, and the farthest ring defines the region of maximum bending of the probe in nutation. These circles were useful to the users during their learning phase only. Human-machine interfaces (HMI) are generally assessed with static targets, which give no information on their dynamic capabilities. Hence we have devised an original way of evaluating the interface, consisting in having the subjects track the moves of an opaque red dummy probe which is overlaid on screen and animated from a datafile recorded during a real abdominal US examination. Some parts of the robot were given transparency to make it easier to visualize both probes simultaneously. Only orientations are considered in this experiment; hence both the dummy probe and the simulated robot probe are fixed in translation.

Figure 8.

(a) The OTELO tele-echography robot simulated in OpenGL within a virtual-reality simulator for psychophysical assessment. (b) The actual OTELO robot in action.

A three-axis frame with differently coloured axes was also attached to the dummy probe and displayed for a better visualisation of its orientation. Better telepresence could be achieved with an HMD (head-mounted display) for depth perception, but this would defeat the purpose of using a computer mouse as a simple, low-cost control interface, so we preferred a standard 2D screen displaying 3D graphics. The teleoperation is simulated with no time delay, to avoid interfering effects on the assessment of the new H-angles frame. It should be noticed that this protocol induces a heavier cognitive load on the test users than on a medical specialist performing a real tele-echography examination with the mouse as input device. This is because our test users have little prior knowledge of the trajectory to be tracked, whereas the medical expert imposes his own desired trajectory, which he is trained to plan in order to navigate through the human body with a US image as feedback. In other words, in real practice the movements are intentional and performed in a known environment with known landmarks, whereas in the proposed experimental protocol the trajectory to track is imposed. Nevertheless, it is not desirable to try to fill this workload gap by providing the test users with a representation of the trajectory in the angles space. Indeed, this trick would unpredictably lighten the cognitive load compared to that of the medical expert, who has to perform mental rotations in 3D space, a task known to be a heavy mental load (Shepard & Metzler, 1971).

Six non-medical test users were recruited to carry out the experiment. All were used to mouse manipulation and computer interaction. Each untrained user was shown the animation once, just to get accustomed to the trajectory. He then had an unlimited training session to understand how to control the robot orientation with the mouse and to preview the trajectory to track; no more than five minutes of training was needed by any test user. The medical reference trajectory is three minutes long. Each test user had three trials to track this trajectory using the H-angles coordinate system associated with the mouse, and then three further trials using the standard ZXZ Euler system for comparison purposes. The orientation-matching session with the Euler system is intended to assess the performance improvement provided by the H-angles system. Each smallest possible turn of the mouse wheel was set to increase angle φn by 10°. This limits the accuracy; however, while a smaller increment would have increased the precision, it would have made the robot probe too slow to rotate to track the animated probe's rotations. We needed to strike a balance between good rotation speed and high precision in order to obtain optimal results. With the Euler system it was noticed that allowing faster rotations gave better results.
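The compromise can be illustrated with a back-of-the-envelope sketch. Only the 10° increment comes from the protocol above; the scrolling rate below is a hypothetical value chosen for illustration.

```python
# Sketch of the speed/accuracy compromise for the wheel-driven self-rotation.
INCREMENT_DEG = 10.0      # phi_n step per wheel detent (from the protocol)
DETENTS_PER_SECOND = 8.0  # assumed comfortable scrolling rate (hypothetical)

resolution_deg = INCREMENT_DEG                        # best attainable phi_n accuracy
max_speed_deg_s = INCREMENT_DEG * DETENTS_PER_SECOND  # attainable self-rotation speed
print(f"resolution: {resolution_deg} deg, max speed: {max_speed_deg_s} deg/s")
# Halving the increment would double the accuracy but halve the attainable
# rotation speed, which is the balance discussed above.
```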

4.5. Psychophysical results

The orientation tracking error is computed as the minimum rotation angle between the frames of the controlled probe and the dummy probe, which is known to be a good metric in the space of rotations. Let us denote this angle Ω.
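This metric corresponds to the geodesic distance on SO(3). A minimal sketch of its computation, assuming the two frames are available as 3×3 rotation matrices (the chapter does not give the simulator's implementation), is:

```python
import numpy as np

# Minimum rotation angle Omega between two frames, i.e. the geodesic
# distance on SO(3), from the 3x3 rotation matrices of the two probes.
def tracking_error_deg(R_controlled, R_dummy):
    R_rel = R_controlled.T @ R_dummy           # relative rotation between the frames
    cos_omega = (np.trace(R_rel) - 1.0) / 2.0  # from trace(R) = 1 + 2*cos(Omega)
    cos_omega = np.clip(cos_omega, -1.0, 1.0)  # guard against round-off
    return np.degrees(np.arccos(cos_omega))    # Omega in degrees

# Example: a 10-degree rotation about the Z axis relative to the identity.
a = np.radians(10)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
print(tracking_error_deg(np.eye(3), Rz))  # -> 10.0
```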

Figure 9.

Observed variations in the average Ω value, with bounding curves at Ω ± 3σ (three times the standard deviation).

Figure 9 reports the average orientation error Ω across users versus trajectory-tracking time; the first plot is for the mouse used to set the Euler angles and the second for the mouse used to set the H-angles. The plots of figure 9 reveal a practically indisputable superiority of our new system over the standard Euler system. With the new system the tracking error remains below 10° most of the time, whereas with the Euler system the error rarely drops below 10°. Whatever experimentation time is considered, the error with the new system is at least two times lower than with the Euler system. According to the test users' feedback, the new system acts as if the self-rotation were anticipated, whereas with the Euler system the tracking was confusing, mainly because of the singularity of that system, which tends to produce fast variations of the X and Y axes when the nutation is close to zero. Notice that the presented results include the human response lag: the simulator measures the difference in frame angles in real time, but the user takes some time to perceive and react to the animated moves. Hence there is always a lag in the controlled probe's movements relative to the animated probe. This lag is not constant in time and is presumably a function of various unaccounted parameters, such as the probe velocity, the visual angle, and so on. The overall average values of Ω obtained for the two systems can also be used to compare their effectiveness: for the Euler system the average Ω value was 46.28°, while for our new attitude coordinate system it was 8.69°. The values of the standard deviation can be understood as the inconsistency in being able to accurately orient the probe; its values averaged over time were 1.85° for the Euler system and 0.347° for the new system.


5. Conclusion

We designed a new coordinate system, called H-angles, to parameterise the attitude of an object in 3D space in the way the human central nervous system would when rotating the object about a fixed centre of rotation. The final cue to derive this system was obtained from the analysis of medical sonography practice. In the practical case considered, we showed experimentally that our system largely outperforms the Euler system in the decorrelation of the DOFs and in the practical usability of the mouse as an input device for 3D rotations. The design considerations suggest that the H-angles system should theoretically maintain its good properties over a large range of applications where the task is to rotate an object about a fixed point; further experimental evaluations will have to be carried out to verify this claim. We have exploited the H-angles to design an interface from the computer mouse to the attitude parameterisation which satisfies the hand-eye coordination needs for the telecontrol of a poly-articulated robot's orientation through a computer network. Our psychophysical results in the context of a simulated robotised tele-sonography are very promising and should lead to further experimental evaluation in comparison with virtual-trackball techniques. Our system also opens the way to 6D mouse-based teleoperation, by switching between orientation and translation control modes with a standard wheeled mouse. A further application of the H-angles system could be hand-orientation prediction, which should lead to a new approach to predictive control.

References

  1. Aznar-Casanova J. A., Torrents A., Alves N. T. (2008). The role of vertical disparities in the oblique effect. Psychol. Neurosci., 1(2), 167-175.
  2. Bade R., Ritter F., Preim B. (2005). Usability comparison of mouse-based interaction techniques for predictable 3D rotation. Smart Graphics, 138-150.
  3. Baud-Bovy G., Viviani P. (1998). Pointing to kinaesthetic targets in space. J. Neurosci., 18, 1528-1545.
  4. Baud-Bovy G., Viviani P. (2004). Amplitude and direction errors in kinesthetic pointing. Exp. Brain Res., 157, 197-214.
  5. Baud-Bovy G., Gentaz E. (2006a). The haptic reproduction of orientations in three-dimensional space. Exp. Brain Res., 172, 283-300.
  6. Baud-Bovy G., Gentaz E. (2006b). The haptic perception of orientations in the frontal plane and in space. Exp. Brain Res., 172, 283-300.
  7. Block B. (2004). The Practice of Ultrasound: A Step-by-Step Guide to Abdominal Scanning. Georg Thieme Verlag, ISBN 3-13138-361-5, Germany.
  8. Burgess N. (2006). Spatial memory: how egocentric and allocentric combine. Trends in Cognitive Sciences, 10(12), 551-557.
  9. Canero C., Thomos N., Triantafyllidis G. A., Litos G. C., Strintzis M. G. (2005). Mobile tele-echography: user interface design. IEEE Trans. Inform. Technol. Biomed., 9(1), 44-49.
  10. Cecala A. J., Garner W. R. (1986). Internal frame of reference as a determinant of the oblique effect. J. Exp. Psychol. Hum. Percept. Perform., 12, 314-323.
  11. Chen M., Mountford S. J., Sellen A. (1988). A study in interactive 3-D rotation using 2-D control devices. Computer Graphics, 22(4), 121-129.
  12. Cohen J., Cohen P., West S. G., Aiken L. S. (2002). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3rd edition). Lawrence Erlbaum Associates, ISBN 0-80582-223-2, New Jersey, USA.
  13. Cohen Y. E., Andersen R. A. (2002). A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews Neuroscience, 3, 553-562.
  14. Courreges F., Vieyres P., Poisson G., Novales C. (2005). Real-time singularity controller for a tele-operated medical echography robot. IEEE/RSJ IROS, Edmonton, Alberta, Canada, August 2005.
  15. Courreges F., Poisson G., Vieyres P. (2008a). Robotized tele-echography. In: Teleradiology, Kumar S., Krupinski E. (Eds.), 139-153, Springer-Verlag, ISBN 978-3-54078-870-6, Berlin.
  16. Courreges F., Vieyres P., Poisson G. (2008b). DOF analysis of the ultrasonography technique for improving ergonomy in tele-echography. IEEE International Conference on Robotics and Biomimetics (ROBIO'08), ISBN 978-1-42442-678-2, Bangkok, Thailand, 2184-2189, February 2009.
  17. Darling W. G., Gilchrist L. (1991). Is there a preferred coordinate system for perception of hand orientation in 3-dimensional space? Exp. Brain Res., 85, 405-416.
  18. Darling W. G., Miller G. F. (1995). Perception of arm orientation in three-dimensional space. Exp. Brain Res., 102, 495-502.
  19. Darling W. G., Hondzinski J. M. (1999). Kinesthetic perceptions of earth- and body-fixed axes. Exp. Brain Res., 126, 417-430.
  20. Darling W. G., Viaene A. N., Peterson C. R., Schmiedeler J. P. (2008). Perception of hand motion direction uses a gravitational reference. Exp. Brain Res., 186, 237-248.
  21. Delgorge C., Courreges F., Al Bassit L., Novales C., Rosenberger C., Smith-Guerin N., Brù C., Gilabert R., Vannoni M., Poisson G., Vieyres P. (2005). A tele-operated mobile ultrasound scanner using a light-weight robot. IEEE Trans. Inform. Technol. Biomed., Special Issue, 9(1), 50-58.
  22. Desmurget M., Pélisson D., Rossetti Y., Prablanc C. (1998). From eye to hand: planning goal-directed movements. Neurosci. Biobehav. Rev., 22, 761-788.
  23. Desmurget M., Prablanc C., Jordan M., Jeannerod M. (1999). Are reaching movements planned to be straight and invariant in the extrinsic space? Kinematic comparison between compliant and unconstrained motions. Quarterly Journal of Experimental Psychology, 52, 981-1020.
  24. Dorst L., Doran C., Lasenby J. (Eds.) (2002). Applications of Geometric Algebra in Computer Science and Engineering. Birkhäuser, ISBN 0-81764-267-6, New York, USA.
  25. Essock E. A. (1980). The oblique effect of stimulus identification considered with respect to two classes of oblique effects. Perception, 9, 37-46.
  26. Gentaz E., Ballaz C. (2000). La perception visuelle des orientations et l'effet de l'oblique [The visual perception of orientations and the oblique effect]. Ann. Psychol., 100, 715-744.
  27. Gentaz E., Tschopp C. (2002). The oblique effect in the visual perception of orientations. In: Advances in Psychology Research, vol. 11, Shovov S. (Ed.), 137-163, Nova Science Publishers, ISBN 1-59033-186-9, New York.
  28. Gentaz E. (2005). Explorer pour percevoir l'espace avec la main [Exploring to perceive space with the hand]. In: Agir dans l'espace, Thinus-Blanc C. (Ed.), 33-56, MSH, Paris.
  29. Gentaz E., Baud-Bovy G., Luyat M. (2008). The haptic perception of spatial orientations. Exp. Brain Res., 187, 331-348.
  30. Gibson J. J. (1950). The perception of visual surfaces. Am. J. Psychol., 63, 367-384.
  31. Goodale M. A., Jakobson L. S., Servos P. (1996). The visual pathways mediating perception and prehension. In: Hand and Brain, Wing A. M., Haggard P., Flanagan J. R. (Eds.), 15-31, Academic Press, New York.
  32. Gourdon A., Poignet P., Poisson G., Vieyres P., Marché P. (1999). A new robotic mechanism for medical application. IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics, 33-38, Atlanta, USA, September 1999.
  33. Hamilton W. R. (1843). On a new species of imaginary quantities connected with a theory of quaternions. Proc. of the Royal Irish Academy, 2, 424-434.
  34. Hinckley K., Tullio J., Pausch R., Proffitt D., Kassell N. (1997). Usability analysis of 3D rotation techniques. Proceedings of the ACM Symposium on User Interface Software and Technology, 1-10, ACM, New York, 1997.
  35. Henriksen K., Sporring J., Hornbæk K. (2004). Virtual trackballs revisited. IEEE Transactions on Visualization and Computer Graphics, 10(2), 206-216.
  36. Howard I. P. (1982). Human Visual Orientation. Wiley, ISBN 978-0-47127-946-4, New York.
  37. Jacob R. J. K., Sibert L. E., McFarlane D. C., Mullen M. P. Jr. (1994). Integrality and separability of input devices. ACM Transactions on Computer-Human Interaction, 1(1), 3-26.
  38. Jacob I., Oliver J. (1995). Evaluation of techniques for specifying 3D rotations with a 2D input device. Proc. Human Computer Interaction Symp. '95, 63-76.
  39. Jolliffe I. T. (2002). Principal Component Analysis (2nd edition). Springer Series in Statistics, ISBN 978-0-38795-442-4, New York.
  40. Kennedy M., Kopp S. (2001). Understanding Map Projections. ESRI Press, ISBN 978-1-58948-003-2, Redlands, USA.
  41. Loève M. (1978). Probability Theory, Vol. II (Graduate Texts in Mathematics, vol. 46) (4th edition). Springer-Verlag, ISBN 0-38790-262-7, New York, USA.
  42. Miall R. C., Wolpert D. M. (1996). Forward models for physiological motor control. Neural Networks, 9(8), 1265-1279.
  43. Norman J. (2002). Two visual systems and two theories of perception: an attempt to reconcile the constructivist and ecological approaches. Behavioral and Brain Sciences, 25(1), 73-144.
  44. Olson D. R., Hildyard A. (1977). On the mental representation of oblique orientation. Canadian Journal of Psychology, 31(1), 3-13, ISSN 0008-4255.
  45. Paillard J. (1987). Cognitive versus sensorimotor encoding of spatial information. In: Cognitive Processes and Spatial Orientation in Animal and Man, Vol. II: Neurophysiology and Developmental Aspects, Ellen P., Thinus-Blanc C. (Eds.), 43-77, Martinus Nijhoff, ISBN 9-02473-448-7, Netherlands.
  46. Pan Q. (2008). Techniques d'interactions mixtes isotonique et élastique pour la sélection 2D et la navigation/manipulation 3D [Mixed isotonic and elastic interaction techniques for 2D selection and 3D navigation/manipulation]. PhD thesis, Lille 1 University, France, defended 19 December 2008.
  47. Sergio L. E., Scott S. H. (1998). Hand and joint paths during reaching movements with and without vision. Exp. Brain Res., 122, 157-164.
  48. Sheerer E. (1984). Motor theories of cognitive structure: a historical review. In: Cognition and Motor Processes, Prinz W., Sanders A. F. (Eds.), Springer-Verlag, ISBN 978-0-38712-855-9, Berlin.
  49. Shepard R., Metzler J. (1971). Mental rotation of three-dimensional objects. Science, 171(972), 701-703.
  50. Smyrnis N., Mantas A., Evdokimidis I. (2007). The "motor oblique effect": perceptual direction discrimination and pointing to memorized visual targets share the same preference for cardinal orientations. J. Neurophysiol., 97, 1068-1077.
  51. Soechting J. F., Ross B. (1984). Psychophysical determination of coordinate representation of human arm orientation. Journal of Neuroscience, 13, 595-604.
  52. Soechting J. F., Lacquaniti F., Terzuolo C. A. (1986). Coordination of arm movements in three-dimensional space: sensorimotor mapping during drawing movement. Neuroscience, 17(2), 295-311.
  53. Soechting J. F., Flanders M. (1995). Psychophysical approaches to motor control. Current Opinion in Neurobiology, 5, 742-748.
  54. Shoemake K. (1985). Animating rotation with quaternion curves. Computer Graphics (Proceedings of SIGGRAPH 85), 19, 245-254.
  55. Shoemake K. (1992). ARCBALL: a user interface for specifying three-dimensional orientation using a mouse. Graphics Interface, 151-156.
  56. Stevens K. A. (1983). Slant-tilt: the visual encoding of surface orientation. Biological Cybernetics, 46, 183-195.
  57. Tempkin B. B. (2008). Ultrasound Scanning: Principles and Protocols (3rd edition). Saunders/Elsevier, ISBN 0-72160-636-1, USA.
  58. Van Hof M. W., Lagers-van Haselen G. C. (1994). The oblique effect in the human somatic sensory system. Acta Neurobiol. Experimentalis, 54, 259-262.
  59. Vilchis A., Troccaz J., Cinquin P., Masuda K., Pellissier F. (2003). A new robot architecture for tele-echography. IEEE Trans. Rob. Autom., Special Issue on Medical Robotics, 19(5), 922-926.
  60. Volcic R., Kappers A. M. L. (2008). Allocentric and egocentric reference frames in the processing of three-dimensional haptic space. Exp. Brain Res., 188, 199-213.
  61. Wang Y., MacKenzie C., Summers V. A., Booth K. S. (1998). The structure of object transportation and orientation in human-computer interaction. Proceedings of ACM CHI'98, 312-319.
  62. Wolpert D. M., Ghahramani Z. (2000). Computational principles of movement neuroscience. Nature Neuroscience, 3(Suppl.), 1212-1217.
