
From "Advances in Human-Robot Interaction", edited by Vladimir A. Kulyukin, ISBN 978-953-307-020-9, published December 1, 2009 under a CC BY-NC-SA 3.0 license. © The Author(s).

Chapter 8

Generating Natural Interactive Motion in Android Based on Situation-Dependent Motion Variety

By Takashi Minato and Hiroshi Ishiguro
DOI: 10.5772/6830



1. Introduction

In order to develop a robot that can work in everyday situations, it is necessary to discover the principles for establishing and maintaining social interaction between humans and robots. Even if short-term human-robot interaction can be achieved by implementing simple behaviors in a robot, it remains difficult to realize long-term social interaction. We have explored the principles underlying natural human-robot communication by developing an android that closely resembles a human being, an approach called android science (Ishiguro, 2005).

Nass et al. (1994) demonstrated that the human-computer relationship is fundamentally social and that a person's social response toward computers is automatic in social situations. Their studies suggest that a person's interpersonal responses subconsciously expressed toward a robot (in other words, a perceptual social illusion (Jacob & Jeannerod, 2005)) underlie natural communication between the person and the robot. The conditions that elicit interpersonal behavior must be related to a mechanism that supports natural communication. The android science approach explores the boundary conditions for eliciting subconscious interpersonal behavior toward an android by investigating methods to make the android more humanlike.

Humanlike body motions are necessary to implement humanlike behavior in an android. There have been several studies on the generation of humanlike motion, including models that generate human motion trajectories based on a neurocomputational approach (Flash & Hogan, 1985; Uno et al., 1989; Kawato, 1992; Schaal & Sternad, 2001), a study on the control of a manipulator based on a model of the motion trajectories of a person's arm (Kashima & Isurugi, 1998), and studies on computer graphics (CG) animated characters showing that noise in the motion makes a character's motion more humanlike (Perlin, 1995; Bodenheimer et al., 1999). These studies successfully generated humanlike motion. However, humanlike motion specific to communication situations has not been considered. The present study considers the human-like nature of a person's motion during interaction with other people.

A person generally does not produce exactly the same motion when he/she repeats a behavior with the same intention, as shown in Figure 1(a). In contrast, a robot is able to repeat exactly the same motion on purpose. A person's motion is diverse in that the motion varies according to noise, mental and physical states, the social situation, and so on, even if the person's intention does not change. We endow an android with motion variety in order to make its behavior more humanlike. If a person consciously or subconsciously attributes the cause of motion variety in an android's motion to such things as the android's mental states, physical states, or the social situation, the person will have a more humanlike impression of the android.

media/image1.jpeg

Figure 1. An example of motion decomposition in a reaching movement.

We further consider the variety of human motion. We divide a person's motion into the following two components (an example is shown in Figure 1(b)):

  1. A motion that satisfies his/her intention.

  2. A motion change that is not relevant to the intention (a variation of the physical properties of the motion (i) such as its trajectory and velocity).

The motion variety described above refers to variety in the second component, the motion change. Motion generation models involving signal-dependent noise have been proposed in studies related to this kind of variety (Todorov & Jordan, 2002; Miyamoto et al., 2004). Noise-based variety cannot be controlled even if the subject consciously attempts to control or suppress it. In contrast, motion variety caused by such things as mental strain or hesitation can be consciously controlled. We assume that motion variety in an intentional motion influences the human-like nature of the android's behavior, even if the observed motion change caused by the variety is small.
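This two-component decomposition can be illustrated numerically: over repeated trials of the same intentional movement, the mean profile stands in for component (i) and the per-trial residuals for component (ii). A minimal Python sketch with simulated bell-shaped speed profiles (the trial count and noise level are illustrative assumptions, not values from this chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)

# Simulated trials: a bell-shaped speed profile plus small per-trial variation.
trials = np.stack([
    np.sin(np.pi * t) + 0.05 * rng.standard_normal(t.size)
    for _ in range(10)
])

intention = trials.mean(axis=0)   # component (i): the shared, goal-directed motion
variation = trials - intention    # component (ii): trial-specific motion change
```

By construction the residuals average to zero at every time step; what matters here is the structure of `variation`, not its mere presence.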

The present chapter hypothesizes that motion change that is not relevant to a subject's intention and can be consciously controlled influences the humanlike impression of the subject. In particular, we focus on motion variety in an intentional motion caused by the social relationship between the subject and another person. The present chapter concretely takes up the motion of a subject reaching out and touching another person. Even for a simple reaching motion of a humanoid robot, how changes in its properties due to social situations affect an observer's impression of the robot has not been studied. As an extreme case, the present chapter models the motion difference between two cases, in which a subject touches another person or an inanimate object, through observation of the subject's behavior. We then examine how the presence of motion variety in an android's motion influences the impression of the android. In a psychological experiment, participants, as third parties, watch the android touch a person or an object and report their impressions.

media/image2.jpeg

Figure 2. Gestures to be modelled. A subject reaches out and touches an object or a person.

2. A model of human motion variety based on differences in social situation

The present chapter hypothesizes that motion variety in an intentional motion, independent of uncontrollable noise, contributes to the human-like nature of the motion. In order to examine this hypothesis, we model the motion difference caused by the social relationship between two persons in the motion of one reaching out and touching the other. It is, however, difficult to control the social relationship between two persons in an experiment. As an extreme case, we consider the difference between a person-object relationship (Figure 2, top) and an interpersonal relationship (Figure 2, bottom). The person-object relationship is not social, but this chapter treats it as the least social relationship. We then construct a model of the difference in the subject's arm movements in these two cases.

In order to construct the model, we set up the situations shown in Figure 2 and measured the subjects' arm movements with a motion capture system (MAC 3D System, Motion Analysis Corporation). The task was to reach out with the right hand and touch a box or a female experimenter in front of the subject. The hand position was measured by attaching a marker to the back of the hand; the sampling rate was 60 Hz. The subjects touched the left shoulder, nose, and forehead of the experimenter and two spots on the box at the same heights as the shoulder and forehead (box low and box high). The subjects were seven male students; some were familiar with the female experimenter and others were not. All subjects touched the targets in the order box low, box high, left shoulder, nose, and forehead, once for each target (thirty-five trials in total). The subjects were told to touch the target and return their hand to the initial position.

The analysis is not for the purpose of finding differences in motion common to all subjects because the motion variation is caused by individuality in some cases. It is sufficient to find a feature to differentiate a subject-person relationship (interpersonal case) from a subject-object relationship (impersonal case) within a subject. However, if the feature is not common among the subjects, it may be difficult to obtain a common impression towards an android in the later experiment. Therefore, we attempt to find a feature that is shared by the majority of the subjects.

media/image3.jpeg

Figure 3. An example of a subject's hand velocity (subject 1).

First, we calculated the absolute value of the hand velocity in order to facilitate the analysis. The trajectory of the hand position was smoothed by a low-pass filter, and the velocity was calculated by forward differences. We investigated the difference in the velocity profiles. As an example, the results for a typical subject are shown in Figure 3. Each plot is the absolute value of the velocity of the hand and is shifted in time so that the times of the first peaks are the same. In each plot, the first bell-shaped curve indicates the reaching out motion, and the second bell-shaped curve indicates the returning motion. The following features were found for each subject.
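The preprocessing just described can be sketched in Python; the moving-average window below stands in for the chapter's unspecified low-pass filter, and all parameter names are assumptions:

```python
import numpy as np

def hand_speed(positions, fs=60.0, win=5):
    """Absolute hand velocity from marker positions (N x 3): smooth each
    coordinate with a moving-average low-pass filter, then take forward
    differences at the sampling rate fs."""
    kernel = np.ones(win) / win
    smoothed = np.column_stack([
        np.convolve(positions[:, i], kernel, mode="same")
        for i in range(positions.shape[1])
    ])
    vel = np.diff(smoothed, axis=0) * fs   # forward differences
    return np.linalg.norm(vel, axis=1)     # absolute velocity

# Sanity check: a straight reach at constant unit speed, sampled at 60 Hz.
t = np.linspace(0.0, 1.0, 61)
pos = np.column_stack([t, np.zeros_like(t), np.zeros_like(t)])
speed = hand_speed(pos)
```

Away from the window edges this example yields a flat profile of 1.0, which is a quick check that the filtering and differencing are consistent.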

  • The velocity profile in the reaching phase forms a unimodal, bell-shaped curve that does not depend on the relationships (interpersonal and impersonal cases).

  • The velocity profile in the returning phase varies depending on the relationships.

There were no remarkable differences in motion between the subjects who were familiar with the experimenter and those who were not. In order to examine the returning phase in detail, the horizontal and vertical components of the velocity were calculated. Figures 4 and 5 show the absolute values of the horizontal and vertical components, respectively. In all cases, the profile of the vertical component in the returning phase has a single peak. This characteristic is common among all subjects. Moreover, the profile of the horizontal component in the returning phase has a peak before the maximum peak. Other examples of the horizontal and vertical components are shown in Figures 6 and 7; a similar profile of the horizontal component can be seen in these examples.

media/image4.jpeg

Figure 4. Horizontal velocity of the hand (the horizontal component of Figure 3).

media/image5.jpeg

Figure 5. Vertical velocity of the hand (the vertical component of Figure 3).

media/image6.jpeg

Figure 6. An example of a subject's hand velocity (subject 2).

media/image7.jpeg

Figure 7. An example of a subject's hand velocity (subject 3).
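This feature, a local peak in the horizontal velocity before its maximum peak, can be detected automatically. A minimal sketch (the height threshold is an assumed value used only to ignore small ripples):

```python
import numpy as np

def has_pre_peak(v, min_height=0.05):
    """True if the speed profile v has a local peak before its global
    maximum, i.e. the feature used here to separate interpersonal from
    impersonal returning motions."""
    i_max = int(np.argmax(v))
    seg = v[:i_max]
    # Local maxima: strictly above both neighbours and above the threshold.
    peaks = [i for i in range(1, len(seg) - 1)
             if seg[i] > seg[i - 1] and seg[i] > seg[i + 1]
             and seg[i] > min_height]
    return len(peaks) > 0

t = np.linspace(0.0, 1.0, 200)
double = 0.3 * np.exp(-((t - 0.3) / 0.08) ** 2) + np.exp(-((t - 0.7) / 0.08) ** 2)
single = np.exp(-((t - 0.5) / 0.1) ** 2)
print(has_pre_peak(double), has_pre_peak(single))  # True False
```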

media/image8.jpeg

Figure 8. The model of variation in the returning phase of the touching motion.

This characteristic appears 6 times among the 14 trials of the impersonal case and 18 times among the 21 trials of the interpersonal case; in other words, it appears more often in the interpersonal case. Although the difference between the two cases is not statistically significant because the number of subjects is insufficient, we focus on this feature in order to differentiate the interpersonal and impersonal cases. Comparing the horizontal and vertical components, the vertical velocity always starts to increase later than the horizontal velocity, and this time delay tends to be larger in the interpersonal case than in the impersonal case. These results suggest that, in the interpersonal case, when the subjects returned their hands, they first moved their hands horizontally and then brought them down, whereas in the impersonal case, subjects brought their hands down from the beginning. It is generally thought that a person moves his/her arm by controlling his/her hand position initially with feedback control and then moves the arm along a ballistic trajectory. This difference in motion can be modelled as a difference in the desired hand position of the feedback control in the space close to the other person (Figure 8). To put it more concretely:

  • In the impersonal case, the desired hand position is set such that the hand can be returned in the fastest path (Figure 8, top).

  • In the interpersonal case, the desired hand position is set such that the hand can move from the space in proximity to the other person along the fastest path (Figure 8, bottom).

Although this model is specific to the motion for touching another person or a box, it can be taken as a model of human motion variety due to differences in social situation. In the next section, we examine the influence of the model on the impression towards an android.
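The two desired-hand-position settings can be sketched as trajectory generation in the vertical plane, using a minimum-jerk profile (Flash & Hogan, 1985) as the point-to-point primitive. The coordinates and the via point for motion M2 are illustrative assumptions, not the android's actual setpoints:

```python
import numpy as np

def min_jerk(p0, p1, n=50):
    """Minimum-jerk point-to-point trajectory from p0 to p1."""
    tau = np.linspace(0.0, 1.0, n)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return p0 + np.outer(s, p1 - p0)

touch = np.array([0.5, 0.4])   # hand at the target (horizontal, vertical)
home = np.array([0.0, 0.0])    # initial hand position

# M1 (impersonal): return along the fastest (direct) path.
m1 = min_jerk(touch, home)

# M2 (interpersonal): first leave the space near the other person
# horizontally via an assumed via point, then return.
via = np.array([0.2, 0.4])
m2 = np.vstack([min_jerk(touch, via), min_jerk(via, home)])
```

In M2 the first segment holds the vertical coordinate constant, so the hand leaves the space in proximity to the other person horizontally before descending, as in Figure 8 (bottom).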

3. Experiments

3.1. Repliee Q2 android

The android (called Repliee Q2) used in the experiment is shown in Figure 9. The android is modelled after a Japanese woman and has a standing height of approximately 160 cm.

The skin is composed of a kind of silicone that feels like human skin. The android is driven by pneumatic actuators that give it 42 degrees of freedom from the waist up. The legs and feet are not powered, so the android can neither stand up nor move from its chair. The joints driven by the pneumatic actuators have mechanical flexibility thanks to the high compressibility of air. This flexibility makes for safer interaction, with movements that are generally smoother than those of other similar systems. However, the complicated dynamics of the air actuators make trajectory tracking control difficult.

media/image9.jpeg

Figure 9. “Repliee Q2” android. The left figure is blurred in order to hide the details.

media/image10.jpeg

Figure 10. Android motions generated based on the constructed model.

3.2. Method

We implemented the motion variation based on the proposed model in the Repliee Q2 android and investigated the impression toward the android from a third-person viewpoint in psychological experiments. We showed video recordings of the android's motions to participants and asked them about their impressions of the android. The android motions generated based on the proposed model are shown in Figure 10. The top panel shows the motion in which the hand is returned along the fastest path (hereafter, motion M1). The bottom panel shows the motion in which the hand leaves the space near the other person along the fastest path (hereafter, motion M2). The reaching-out motion was implemented according to the average of the subjects' reaching motions observed in the experiment described in Section 2. It is difficult to implement a quick and smooth motion in the android with simple feedback control because the joints are driven by flexible pneumatic actuators. In order to avoid this difficulty, we implemented the motions in the android at a slow speed, and the video stimuli were made by playing back the recordings of the slow motions at a higher speed.

In order to examine the influence of motion variety on the impression toward the android, we prepared three types of android, as shown in Table 1. The three androids reach out and touch persons and inanimate objects in different manners. The android in Condition A (hereafter, Android A) touches persons and objects with motion M1. The android in Condition B (hereafter, Android B) touches persons and objects with motion M2. The android in Condition C (hereafter, Android C) touches objects with motion M1 and persons with motion M2. Conditions A and B are used to examine impressions of androids without motion variety, and Condition C is used to examine impressions of an android with motion variety. When the target is a person, the android touches the left shoulder of the person sitting in a face-to-face position.

                                                  Android A   Android B   Android C
Motion with which the android touches an object   Motion M1   Motion M2   Motion M1
Motion with which the android touches a person    Motion M1   Motion M2   Motion M2

Table 1. The experimental conditions.

In each condition, each participant was presented with six android motions and reported his/her impressions. The android touches three objects (a calendar, a video camera, and a small shelf) and three male persons, once for each target. The six targets are shown in Figure 11. Each video stimulus was synthesized from a video recording of the android motion without the target and a video recording of the target alone. The video of each motion is five seconds long. In each condition, the six motions were randomly presented to a participant with the constraint that motions touching an object and motions touching a person were presented alternately. In order to eliminate memory effects and aftereffects, a blank image was presented for two seconds between the videos, as shown in Figure 12.
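The randomized presentation under the alternation constraint can be sketched as follows (target names are placeholders for the actual stimuli):

```python
import random

objects = ["calendar", "video camera", "small shelf"]
persons = ["person 1", "person 2", "person 3"]

def stimulus_order(seed=None):
    """Random order of the six videos such that object-touching and
    person-touching motions strictly alternate."""
    rng = random.Random(seed)
    objs, pers = objects[:], persons[:]
    rng.shuffle(objs)
    rng.shuffle(pers)
    # Randomly choose which category leads, then interleave.
    first, second = (objs, pers) if rng.random() < 0.5 else (pers, objs)
    return [x for pair in zip(first, second) for x in pair]

order = stimulus_order(seed=1)
```

Shuffling within each category and then interleaving guarantees the alternation while keeping the order random across participants.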

We designed a questionnaire using a five-point Likert scale with 1 = strongly disagree, 3 = neutral, and 5 = strongly agree. The aim of the questionnaire was to assess the impression of the android's human-like nature; therefore, the questionnaire asked how “humanlike” the android is. It is, however, possible that the variation in the arm trajectories does not influence the impression of human-likeness. We therefore prepared six other items in the questionnaire that are likely to be influenced by the variation in the arm trajectories. The items are “the android is (1) polite, (2) accurate, (3) intellectual, (4) conscientious, (5) friendly, (6) graceful, and (7) humanlike.” The items are listed in random order in order to avoid order effects, except for “humanlike,” which always appears at the end of the questionnaire, because an answer to the item “humanlike” is likely to influence the responses to the other items.

media/image11.jpeg

Figure 11. The objects and persons in the video stimuli.

media/image12.jpeg

Figure 12. The procedure to present the video stimuli.

3.3. Experiment 1: comparing Androids A and C

First, we compared Androids A and C in order to investigate the influence of the presence of motion variety in the android. The expectation was that the comparison of the impressions of human-like nature would result in the following:

Android C > Android A.

media/image13.jpeg

Figure 13. The results of questionnaire about impressions towards Androids A and C.

media/image14.jpeg

Figure 14. The results of questionnaire about impressions towards Androids A, B, and C.

The participants were twenty-four university students (nineteen males and five females) who were familiar with the Repliee Q2 android. Each participant took part in both conditions, with the order of the conditions randomized. Participants answered the questionnaire after each condition was presented.

The average scores of the impressions are shown in Figure 13. A paired t-test revealed significant differences (p<0.05) between the two conditions for the items “polite” and “graceful,” although there was no significant difference for the item “humanlike.” The android behavior in which the hand quickly moves away from the space in proximity to the other person likely gave the impression that the android touches the other person carefully and deliberately. It is inferred that the participants thought the android had the intention of touching the other person carefully, and therefore judged Android C to be more polite and graceful. It can also be said, from the significant differences for some items, that the participants were consciously or subconsciously aware that Android C changed its hand trajectory.
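The comparison reported above is a paired (within-participant) t-test on the Likert scores. A minimal sketch of the statistic with invented scores (not the chapter's data); the resulting t would be compared against t-distribution critical values with n-1 degrees of freedom:

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic for within-participant scores x vs. y."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Invented five-point Likert scores for 24 participants (illustration only).
a = np.array([3, 4, 3, 2, 4, 3, 3, 4, 2, 3, 4, 3,
              3, 2, 4, 3, 4, 3, 2, 3, 4, 3, 3, 4], float)
c = a + np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1,
                  0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0], float)

t_stat = paired_t(c, a)   # positive: Condition C scored higher on average
```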

Here, we consider the difference between Androids A and C. In Condition C, the android's arm trajectory varies according to the android-target relationship, whereas in Condition A it does not. Another difference is that the android shows motion M2 in Condition C but not in Condition A; in other words, there is a possibility that the mere presence of motion M2 produced the different impressions. In order to show that the different impressions are caused by the difference in social situation, it is necessary to examine the influence of motion M2. Therefore, in the next section, we conducted an additional experiment to assess the android in Condition B.

3.4. Experiment 2: comparing Androids A, B, and C

The participants in this experiment were twelve of the twenty-four participants from Experiment 1 (nine males and three females). Each participant was presented with Android B and answered the questionnaire about it. The expectation was that the comparison of the impressions of human-like nature would result in the following:

Android C > Android A, Android B.

The average scores of the impressions toward Android B are shown in Figure 14, which adds this result to Figure 13. Ryan's multiple comparison test revealed a significant difference (p<0.05) between Androids A and C for the item “graceful.” This is the same result as that obtained in Experiment 1. Contrary to our expectation, however, there was no significant difference between Androids B and C.

We then considered the influence of the order of stimulus presentation. The participants in Experiment 2 assessed the androids in the order A, C, B or in the order C, A, B; hereinafter, Case O1 and Case O2 denote the orders ACB and CAB, respectively. A repeated-measures two-way ANOVA with factors of condition order and android motion was conducted. There were significant interactions at the 5% level for the items “intellectual,” “friendly,” and “humanlike,” and at the 10% level for the item “conscientious.” It is possible that the effect of the android motion on the impression score depends on the order of the conditions.

We divided the twelve participants into those who participated in Case O1 (seven persons) and those who participated in Case O2 (five persons) and analyzed their impression scores. The average impression scores obtained in Cases O1 and O2 are shown in Figures 15 and 16, respectively. For each item, the three conditions are arranged in the order of presentation. Two tendencies can be seen in these figures:

  • The scores obtained in the condition right after Condition C are smaller than those in Condition C.

  • The scores obtained in the condition right after Condition A are larger than those in Condition A.

Ryan's multiple comparison test revealed several significant differences at 5% level among Androids A, B, and C as shown in Figure 15 and 16. In particular, as expected, the score of “humanlike” for Android C is significantly larger than that for Android B in Case O1. In Case O2, the participants thought the android which touched anything with the motion M2 was more deliberate and careful than the android which touched anything with the motion M1. Furthermore, it is likely that this careful motion gave an impression that Android B was more humanlike than Android A in Case O2. However, the participants thought that Android C with motion variation was more humanlike than Android B when Android C was presented just after Android B in Case O1. It is possible that the participants think the android with the motion variation is more humanlike than the android with only the careful motion.

3.5. Summary

The experimental results showed that the variety of the android's motion enhances the impression of its human-like nature, subject to the influence of the order of stimulus presentation, although the expected result (i.e., that Android C is more humanlike than Androids A and B) was not obtained. In addition, the results showed that motion variety influences impressions such as “conscientious” and “graceful,” which are related to the human-like nature of the android. The number of participants in the experiments was too small to compare the three conditions; the expected effect of the motion variety may be shown by an experiment with a larger number of participants.

As an example of motion variety, the present chapter examined the motion variation in which the desired hand position of the feedback control varies in two ways when a person returns his/her hand after touching a target. In addition, the social relationship causing this variation was also designed to vary in two ways, that is, android-person and android-object relationships. This is a simple example of variety. However, more complicated motion variation can be designed, for example, by changing the causes of the variation. It is inferred that a complicated variation would have a different influence on the impression, and that some variations are more appropriate than others for enhancing the human-like nature. Further investigation is necessary in order to clarify what motion variety makes the android humanlike.

In Section 2, we assumed that the variation in the subject's arm motion is caused by the social relationship between the subject and the target. However, it was not verified that the subconscious motion variation is due to the social situation. Another possibility is that the variation is due, for example, to the hardness of the target, such as a hard box or a soft human body. In addition, it was not verified that the participants in Section 3 actually attributed the cause of the motion variation to the social situation, although the motion variation conditionally enhanced the impression of the android's human-like nature. In other words, it is not clear that the participants think Android C behaves socially, like a human being. One possible experimental design is to compare with an android that has the same motions but the opposite motion variation, that is, an android that touches objects with motion M2 and persons with motion M1 (the opposite manner to Android C). If this android is less humanlike than Android C, then the motion variation that is congruent with that of the human subjects in Section 2 contributes to the human-likeness of the android. However, further investigation is necessary to verify whether the social relationship caused the arm motion variation observed in Section 2 and the different impressions toward the android obtained in Section 3.

media/image15.jpeg

Figure 15. Impressions of participants assessed in the order of Androids A, C, and B (Case O1).

media/image16.jpeg

Figure 16. Impressions of participants assessed in the order of Androids C, A, and B (Case O2).

4. Conclusion

We hypothesized that a motion variety that is not related to a subject's intention and can be consciously controlled influences the humanlike impression of the subject, and we assumed that this motion variety makes the android more humanlike. In order to verify this hypothesis, we constructed a model of the motion variety through the observation of persons’ motions. We examined the variation in a motion of reaching out and touching another person, which occurred in different social relationships between the subject and the other person (or object). The experimental results showed that the modelled motion variety conditionally influences the impression toward the android.

The results of the present chapter are specific to the android's motion of reaching out and touching a person. The present study is a first step in the exploration of the principles for providing natural robot behaviors. The results revealed that a phenomenon whereby motion variety influences the impression towards the actor can be seen at least in certain motions of a very humanlike robot. Based on these results, it is possible to examine which aspects of the robot's appearance and motion are affected by this phenomenon. This exploration will help to clarify the principles underlying natural human-robot communication.

From the viewpoint of robot motion design, a motion variety model is also useful. Several studies have proposed methods for implementing humanlike motion in a humanoid robot by copying human motion, as measured by a motion capture system, to the robot (Riley et al., 2000; Nakaoka et al., 2003; Matsui et al., 2005). In order to make a robot's motion more humanlike, it is necessary to implement humanlike motion variation. However, it is not necessary to copy all human motions: humanlike motion variation can be automatically generated from an original motion by the motion variety model.

5. Acknowledgements

The android robot Repliee Q2 was developed in collaboration with Kokoro Company, Ltd.

References

1 - Bodenheimer, B., Shleyfman, A. V. & Hodgins, J. K. (1999). The effects of noise on the perception of animated human running. Computer Animation and Simulation '99: Proceedings of the Eurographics Workshop, pp. 53-63, ISBN 978-3-21183-392-6, Milano, Italy, Sep. 1999, Springer-Verlag.
2 - Flash, T. & Hogan, N. (1985). The coordination of arm movements: An experimentally confirmed mathematical model. Journal of Neuroscience, Vol. 5, No. 7, pp. 1688-1703, ISSN 0270-6474.
3 - Ishiguro, H. (2005). Android science: toward a new cross-interdisciplinary framework. Proceedings of the 12th International Symposium of Robotics Research, San Francisco, USA, Oct. 2005.
4 - Jacob, P. & Jeannerod, M. (2005). The motor theory of social cognition: a critique. Trends in Cognitive Sciences, Vol. 9, No. 1, pp. 21-25, ISSN 1364-6613.
5 - Kashima, T. & Isurugi, Y. (1998). Trajectory formation based on physiological characteristics of skeletal muscles. Biological Cybernetics, Vol. 78, No. 6, pp. 413-422, ISSN 0340-1200.
6 - Kawato, M. (1992). Optimization and learning in neural networks for formation and control of coordinated movement. In: Attention and Performance XIV, pp. 821-849, ISBN 978-0-26213-284-8, MIT Press.
7 - Matsui, D., Minato, T., MacDorman, K. F. & Ishiguro, H. (2005). Generating natural motion in an android by mapping human motion. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1089-1096, ISBN 0-78038-912-3, Alberta, Canada, Aug. 2005.
8 - Miyamoto, H., Nakano, E., Wolpert, D. M. & Kawato, M. (2004). TOPS (task optimization in the presence of signal-dependent noise) model. Systems and Computers in Japan, Vol. 35, No. 11, pp. 48-58, ISSN 0882-1666.
9 - Nakaoka, S., Nakazawa, A., Yokoi, K., Hirukawa, H. & Ikeuchi, K. (2003). Generating whole body motions for a biped humanoid robot from captured human dances. Proceedings of the IEEE-RAS International Conference on Robotics and Automation, pp. 3905-3910, ISBN 0-78037-737-0, Taiwan, Sep. 2003.
10 - Nass, C., Steuer, J. & Tauber, E. (1994). Computers are social actors. Proceedings of the ACM Conference on Human Factors in Computing Systems, pp. 72-78, ISBN 0-89791-651-4, Massachusetts, USA, Apr. 1994.
11 - Perlin, K. (1995). Real time responsive animation with personality. IEEE Transactions on Visualization and Computer Graphics, Vol. 1, No. 1, pp. 5-15, ISSN 1077-2626.
12 - Riley, M., Ude, A. & Atkeson, C. G. (2000). Methods for motion generation and interaction with a humanoid robot: Case studies of dancing and catching. Proceedings of the AAAI/CMU Workshop on Interactive Robotics and Entertainment, pp. 35-42, Pittsburgh, Pennsylvania, USA, Apr. 2000.
13 - Schaal, S. & Sternad, D. (2001). Origins and violations of the 2/3 power law in rhythmic 3D movements. Experimental Brain Research, Vol. 136, No. 1, pp. 60-72, ISSN 0014-4819.
14 - Todorov, E. & Jordan, M. I. (2002). Optimal feedback control as a theory of motor coordination. Nature Neuroscience, Vol. 5, No. 11, pp. 1226-1235, ISSN 1097-6256.
15 - Uno, Y., Kawato, M. & Suzuki, R. (1989). Formation and control of optimal trajectory in human multi-joint arm movement: minimum torque-change model. Biological Cybernetics, Vol. 61, No. 2, pp. 89-101, ISSN 0340-1200.