
Making a Mobile Robot to Express its Mind by Motion Overlap

Written By

Kazuki Kobayashi and Seiji Yamada

Published: 01 December 2009

DOI: 10.5772/6829

From the Edited Volume

Advances in Human-Robot Interaction

Edited by Vladimir A. Kulyukin


1. Introduction

Various home robots, such as sweeping robots and pet robots, have been developed and commercialized, and they are now being studied for use in cooperative housework (Kobayashi & Yamada, 2005). In the near future, cooperative work between a human and a robot will be one of the most promising applications of Human-Robot Interaction research in factories, offices, and homes. Designing the interaction between ordinary people and a robot is therefore as important as building an intelligent robot itself. In such cooperative housework, a robot often needs a user's help when it encounters difficulties it cannot overcome by itself. We can easily imagine many such situations. For example, a sweeping robot cannot move heavy or complexly structured obstacles, such as chairs and tables, that prevent it from doing its job, and it needs the user's help to remove them (Figure 1). The problem is how to enable a robot to communicate its requests for help to a user during cooperative work. Although we recognize this as an important and practical issue for realizing cooperative work between a human user and a robot, few studies have addressed it thus far in Human-Robot Interaction. In this chapter, we propose a novel method for making a mobile robot express its internal state (called the robot's mind) to request a user's help, implement a concrete expression on a real mobile robot, and conduct experiments with participants to evaluate its effectiveness.

Figure 1.

A robot that needs the user's help.

In traditional user interface design, several studies have proposed designs for electric home appliances. Norman (Norman, 1988) addressed the use of affordances (Gibson, 1979) in artifact design, and Suchman (Suchman, 1987) studied behavior patterns of users. Users' reactions to computers (Reeves & Nass, 1996) (Katagiri & Takeuchi, 2000) are also important to consider when designing artifacts. Yamauchi et al. studied the functional imagery of auditory signals (Yamauchi & Iwamiya, 2005), and JIS (Japanese Industrial Standards) provides guidelines for auditory signals in consumer products for elderly people (JIS, 2002). These studies and guidelines, however, assume artifacts that users operate directly themselves: an approach that may not work well for home robots that conduct tasks by themselves. Robot-oriented design approaches are thus needed for home robots.

As mentioned earlier, our proposal for making a mobile robot express its mind assumes cooperative work in which the robot needs to ask a user to operate it and to move objects blocking its operation: a trinomial relationship among the user, robot, and object. In psychology, the theory of mind (TOM) (Baron-Cohen, 1995) deals with such trinomial relationships. Following TOM, we term a robot's internal state its mind, defined as its own motives, intents, or purposes and goals of behavior. We take the weak AI position (Searle, 1980): a robot can be made to act as if it had a mind.

Mental expression can be designed verbally or nonverbally. With verbal expression, for example, we can make a robot say, "Please help me by moving this obstacle." In the many similar situations in which an obstacle prevents the robot from moving, however, the robot may simply repeat the same speech because it cannot recognize what the obstacle is: it can say neither "Please remove this chair" nor "Please remove this dust box." Speech conveys a specific meaning, and such repetition irritates users. Hence we study nonverbal methods such as buzzers, blinking lights, and movement, which convey ambiguous information that users can interpret as they like based on the given situation.

We consider that a motion-based approach feasibly and effectively conveys the robot's mind in an obstacle-removal task. Movement is designed based on motion overlap (MO), which enables a robot to move in a way that lets the user narrow down the possible responses and act appropriately. In an obstacle-removal task, we had the robot move back and forth in front of an obstacle, and we conducted experiments comparing MO to other nonverbal approaches. Experimental results showed that MO has potential in the design of robots for the home.

We assume that a mobile robot has a cylindrical body and expresses its mind through movement. This has the advantage for developers that the robot needs no additional component such as a display or a speech synthesizer, but it makes it harder for the robot to express its mind in a humanly understandable manner. Below, we give an overview of studies on how a robot can express its mind nonverbally with human-like and nonhuman-like bodies.

Hadaly-2 (Hashimoto et al., 2002), Nakata's dancing robot (Nakata et al., 2002), Kobayashi's face robot (Kobayashi et al., 2003), Breazeal's Kismet (Breazeal, 2002), Kozima's Infanoid (Kozima & Yano, 2001), Robovie-III (Miyashita & Ishiguro, 2003), and Cog (Brooks et al., 1999) are human-like robots that can easily express themselves nonverbally in a humanly understandable manner. The robot we are interested in, however, is nonhuman-like in shape, having only wheels for moving. We therefore designed wheel movement to enable the robot to express its mind.

Ono et al. (Ono et al., 2000) studied how a mobile robot's familiarity influenced a user's understanding of what was on its mind. Before their experiments, participants were asked to rear a life-like virtual agent on a PC, and the agent was then moved to the robot's display. This rearing made the robot quite familiar to the user, and they experimentally showed that the familiarity considerably improved users' accuracy in recognizing the robot's noisy utterances. Matsumaru et al. (Matsumaru et al., 2005) developed a mobile robot that expresses its direction of movement with a laser pointer or an animated eye. Komatsu (Komatsu, 2005) reported that users could infer the attitude of a machine through its beeps. These approaches require extra components, in contrast with our proposal. The orca-like robot (Nakata et al., 1998), the seal-like Paro (Wada et al., 2004)(Shibata et al., 2004), and the limbless Muu (Okada et al., 2000) are efforts to familiarize users with robots. Our study differs from these, however, in that we assume actual cooperative work between the user and robot, such as cooperative sweeping.


2. Expression of robot mind

Below we explain the obstacle-removal task, in which we have the robot express itself in front of an obstacle, and how the robot conveys what is on its mind.

2.1. Obstacle-removal task

The situation involves a sweeping robot that cannot remove an obstacle, such as a chair or a dust box, and asks a user to remove it so that it can sweep the floor area the obstacle occupies (Figure 1). Such an obstacle-removal task serves as a general testbed for our work because it occurs frequently in cooperative tasks between a user and a robot. To execute this task, the robot needs to inform the user of its problem and ask for help. This task has been used in research on cooperative sweeping (Kobayashi & Yamada, 2005).

Obstacle-removal tasks generally accompany other robot tasks. Obstacle avoidance is essential to mobile robots such as tour guides (Burgard et al., 1998). Obstacles may be handled by having the robot (1) avoid the obstacle autonomously, (2) remove the obstacle autonomously, or (3) get the user to remove the obstacle. It is difficult for a robot to remove an obstacle autonomously because it first must decide whether it may touch the object. In practical situations, therefore, the robot either avoids an obstacle autonomously or has a user remove it.

2.2. Motion overlap

Our design, motion overlap, starts by programming into a robot movement that a user routinely performs. A user observing the robot's movement will find an analogy to human action and easily interpret its state of mind. We consider that an overlap between human and robot movement causes an overlap between the minds of the user and the robot (Figure 2).

Humans neither emit light naturally nor easily express their intentions with nonverbal sounds. They do, however, move expressively when executing tasks. We therefore presume that a user can understand a robot's mind as naturally as another person's mind if the robot's movement overlaps recognizable human movement. This kind of human understanding has been studied and reported in TOM research.

As described before, nonverbal communication has alternative modalities: a robot can make a struggling movement, sound a buzzer, or blink a light. We assume movement to be better for an obstacle-removal task for the following reasons.

Figure 2.

Motion overlap.

  • Feasibility: Since a robot needs to move to achieve its tasks, a motion-based approach requires no additional component such as an LED or a speaker. Additional nonverbal components make a robot more complicated and expensive.

  • Variation: The motion-based approach enables us to design informational movement to suit different tasks. The variety of movements is far larger than that of sounds or light signals of other nonverbal methods.

  • Less stress: Other nonverbal methods, particularly sound, may force a user to pay strong attention to the robot, causing more stress than movement. The motion-based approach avoids distracting or invasive interruptions: a user who notices the movement can choose whether or not to respond.

  • Effectiveness: Motion-based information is intuitively more effective than other nonverbal approaches because interesting movement attracts a user to a robot without stress.

While the feasibility, variation, and lower stress of motion-based information are evident, its effectiveness needs to be verified experimentally.

2.3. Implementing MO on a mobile robot

We designed the robot's movements so that a user can easily understand them, by imagining what a human would do when facing an obstacle-removal task. Imagine seeing a person carrying baggage who hesitates nervously in front of a closed door. Almost any observer would immediately identify the problem: the person needs help opening the door. This is a typical situation in TOM. Using a similar hesitation movement could enable a robot to inform a user that it needs help.

A study on human actions in tasks (Suzuki & Sasaki, 2001) defines hesitation as movement that suddenly stops and either changes into another movement or is suspended: a definition that our back and forth movement fits (Figure 3). Seeing the robot move back and forth briefly in front of an obstacle should be easy for a user to interpret, because a human acts similarly when in the same kind of trouble.

Figure 3.

Back and forth motion.

We could have tested other movements, such as turning to the left and right; however, back and forth movement keeps the robot from swerving off the trajectory it needs to achieve its task. It is also easily applicable to other hardware, such as manipulators. Back and forth movement is thus appropriate for an obstacle-removal task in terms of movement efficiency and range of application.


3. Experiments

We conducted experiments to verify the effectiveness of our motion-based approach in an obstacle-removal task, comparing the motion-based approach to two other nonverbal approaches.

3.1. Environments and a robot

Figure 4 shows the flat experimental environment (400 mm × 300 mm) surrounded by a wall and containing two obstacles (white paper cups). It simulated an ordinary human workspace such as a desktop. The obstacles corresponded to penholders, remote controls, and the like, and were easily moved by participants. We used a small mobile robot, the Khepera II (Figure 5), which has eight infrared proximity and ambient light sensors with up to a 100 mm range, a Motorola 68331 (25 MHz) processor, 512 KB of RAM, 512 KB of flash ROM, and two DC brushed servomotors with incremental encoders. Its control program, written in C, runs in RAM.

3.2. Robot’s expressions

Participants observed the robot as it swept the floor of the experimental environment. The robot used ambiguous nonverbal expressions, enabling participants to interpret them based on the situation. We designed three types of signals to convey the robot's wish to sweep the area under an obstacle and its need for the user's help in removing it. The robot expressed itself using one of the following three signals:

Figure 4.

An experimental environment.

Figure 5.

KheperaII.

  • LED: The robot's red LED (6 mm in diameter) blinks based on ISO 4982:1981 (the automobile flasher pattern). The robot turns the light on and off following the signal pattern in Figure 6, repeating the pattern twice every 0.4 seconds.

  • Buzzer: The robot beeps using a buzzer that makes a sound with 3 kHz and 6 kHz peaks. The sound pattern is based on JIS S0013 (auditory signals of consumer products intended for attracting immediate attention). As with the LED, the robot beeps at “on” and ceases at “off” (Figure 6).

  • Back and forth motion: The robot moves 10 mm back and 10 mm forward, following the same “on” and “off” pattern (Figure 6).

Figure 6.

Pattern of Behavior.

The LED, buzzer, and movement used the same “on” and “off” intervals. The robot stopped sweeping and performed one of these expressions whenever it encountered an obstacle or wall, then turned left or right and moved ahead. If the robot sensed an obstacle on its right (left), it made a 120 degree turn to the left (right), repeating these actions throughout the experiments. Note that the robot did not actually sweep up dust.
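
To make the shared timing concrete, the following minimal sketch shows how one “on”/“off” pattern can drive all three expressions. It is written in C, like the robot's onboard program, but every hardware function here (set_led, set_buzzer, move_mm, wait_ms) is a hypothetical stub, not the real Khepera II API, and the slot length and speeds are illustrative.

```c
/* Minimal sketch of the shared expression pattern (Figure 6).
 * All hardware functions are hypothetical stubs, not the real
 * Khepera II API; slot lengths and distances are illustrative. */
#include <stdio.h>

/* one cycle of the signal pattern: on, off, on, off */
static const int pattern[] = {1, 0, 1, 0};
enum { SLOTS = sizeof pattern / sizeof pattern[0] };

static void set_led(int on)    { printf("LED %s\n", on ? "on" : "off"); }
static void set_buzzer(int on) { printf("buzzer %s\n", on ? "on" : "off"); }
static void move_mm(int mm)    { printf("move %+d mm\n", mm); }
static void wait_ms(int ms)    { (void)ms; /* a timer on the real hardware */ }

typedef enum { LED, BUZZER, MOTION } Expression;

/* Perform one expression cycle; the pattern repeats twice per 0.4 s. */
static void express(Expression e)
{
    for (int i = 0; i < SLOTS; i++) {
        switch (e) {
        case LED:    set_led(pattern[i]);             break;
        case BUZZER: set_buzzer(pattern[i]);          break;
        case MOTION: move_mm(pattern[i] ? -10 : 10);  break; /* 10 mm back, 10 mm forth */
        }
        wait_ms(100); /* four 100 ms slots make one 0.4 s cycle */
    }
}

int main(void)
{
    express(MOTION); /* e.g., back and forth in front of an obstacle */
    return 0;
}
```

The point of the design is visible in the code: the three signals differ only in the actuator driven by the same pattern, which is what makes the comparison between them fair.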

3.3. Methods

Participants were instructed that the robot represented a sweeping robot, even though it actually did not sweep. They were to imagine that the robot was cleaning the floor. They could move or touch anything in the environment, and were told to help the robot if it needed it.

Each participant conducted three trials, observing the robot move back and forth, blink its light, or sound its buzzer. The order of expressions presented to participants was random. A trial finished after the robot's third encounter with obstacles, or when the participant removed an obstacle. Participants were given no information about, or interpretation of, the robot's movement, blinking, or sound.

Figure 7 details the experimental setup, including the initial locations of the robot and the objects. At the start of each experiment, the robot moved ahead, stopped in front of a wall, expressed its mind, and turned right toward obstacle A. Figure 8 shows a series of snapshots in which a participant interacted with the mobile robot as it moved back and forth. The participant sat on a chair and helped the robot on the desk.

The 17 participants (11 men and six women, aged 21-44) included 10 university students and seven employees. We confirmed that none of them had prior experience interacting with robots.

Figure 7.

Detailed experimental setup.

Figure 8.

MO experiments.

3.4. Evaluation

Our evaluation criterion was that fewer expressions before help were better, because this would indicate that participants easily understood what was on the robot's mind. The robot expressed itself whenever it encountered a wall or an obstacle, and we counted the number of participants who moved the object just after the robot's first encounter with it. We considered other measures, such as the time from the beginning of the experiment to when the participant moved an obstacle, but this was impractical because the time at which the robot reached the first obstacle differed between trials: slippage of the robot's wheels changed its trajectory.

3.5. Results

Table 1 shows the participants and their behavior in the experiments. The entries with asterisks mark trials in which the participant removed an obstacle. Eight of the 17 participants (47%) did not move any obstacle under any experimental condition. Table 2 shows the ratios of participants who moved the obstacle under each condition. The ratios increased over successive trials, most clearly under the MO condition.

ID   Age   Gender   Trial-1    Trial-2    Trial-3
1    25    M        LED*       Buzzer*    MO*
2    30    M        Buzzer     MO         LED
3    24    M        MO         LED        Buzzer
4    25    M        LED*       MO*        Buzzer*
5    23    M        Buzzer*    LED        MO*
6    43    F        MO         LED        Buzzer
7    27    M        LED        Buzzer     MO*
8    29    F        LED        MO*        Buzzer*
9    44    F        Buzzer     MO*        LED*
10   26    F        Buzzer     LED        MO*
11   29    F        MO         Buzzer     LED
12   27    M        LED        Buzzer     MO*
13   36    M        MO         LED        Buzzer
14   27    M        Buzzer     LED        MO
15   26    M        Buzzer*    MO*        LED*
16   26    M        MO         Buzzer     LED
17   21    F        LED        Buzzer     MO

Table 1.

Participant behaviors.

Table 2.

Expressions and trials.

Figure 9 shows the ratios of participants who moved the obstacle immediately after the robot's first encounter with it. More participants responded to MO than to either the buzzer or the light.

Figure 9.

Ratios of participants who moved an object.

We statistically analyzed the differences in ratios among the three methods. Cochran's Q test showed significant differences among the methods (Q = 7.0, df = 2, p < .05). We then conducted a multiple comparison using Holm's test and obtained differences at the 10% level between MO and LED (Q = 5.0, df = 1, p = 0.0253, α′ = 0.0345, where α′ is the significance level adjusted by Holm's method) and between MO and buzzer (Q = 4.0, df = 1, p = 0.0455, α′ = 0.0513), indicating that MO is as effective as or more effective than the other two methods.
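
For readers who want to retrace the analysis, the sketch below computes Cochran's Q statistic in C from the removal outcomes marked with asterisks in Table 1 (the 0/1 matrix encoding is ours, read off from that table); it reproduces the reported Q = 7.0 with df = 2.

```c
/* Cochran's Q for B subjects x K binary conditions:
 *   Q = (K-1) * (K * sum(Cj^2) - N^2) / (K*N - sum(Ri^2))
 * where Cj are column totals, Ri row totals, N the grand total.
 * Q is approximately chi-squared with K-1 degrees of freedom. */
#include <stdio.h>

#define B 17  /* participants */
#define K 3   /* conditions: LED, Buzzer, MO */

/* 1 = the participant removed the obstacle under that condition
 * (asterisked trials in Table 1); columns are LED, Buzzer, MO. */
static const int x[B][K] = {
    {1,1,1}, {0,0,0}, {0,0,0}, {1,1,1}, {0,1,1},
    {0,0,0}, {0,0,1}, {0,1,1}, {1,0,1}, {0,0,1},
    {0,0,0}, {0,0,1}, {0,0,0}, {0,0,0}, {1,1,1},
    {0,0,0}, {0,0,0}
};

int main(void)
{
    long C[K] = {0}, N = 0, sumC2 = 0, sumR2 = 0;

    for (int i = 0; i < B; i++) {
        long R = 0;
        for (int j = 0; j < K; j++) { C[j] += x[i][j]; R += x[i][j]; }
        sumR2 += R * R;
        N += R;
    }
    for (int j = 0; j < K; j++) sumC2 += C[j] * C[j];

    double q = (double)(K - 1) * (K * sumC2 - N * N) / (K * N - sumR2);
    printf("Q = %.1f (df = %d)\n", q, K - 1); /* prints: Q = 7.0 (df = 2) */
    return 0;
}
```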

In the post-experiment questionnaire (Table 3), most participants said they noticed the robot's actions. Table 4 shows the questionnaire results on why participants moved the object; question (1) corresponds to the aim of our design policy. More people responded positively to question (1) in the buzzer and MO conditions. MO achieved our objective in that it caused the most participants to move the object.


4. Discussion

We discuss the effectiveness and application of MO based on experimental results.

4.1. Effectiveness of MO

We avoided using loud sounds or bright lights because they are not appropriate for a home robot, and we confirmed that participants nevertheless correctly noticed the robot's expressions. The questionnaire results in Table 3 show that the expressions we designed were appropriate for the experiments.

Table 3.

The number of participants who noticed the robot’s expression.

MO is not effective in every situation: Table 2 suggests the existence of a combination effect. Although the participants had experienced MO in previous trials, only 40% of them moved the obstacle in the LED-Trial3 and Buzzer-Trial3 conditions, and in the MO-Trial1 condition no participant moved the obstacle. Further study of the combination effect is thus important.

We used specific lighting and sound patterns to express the robot's mind, but the effects of other patterns are not known. For example, a sound pattern with a different frequency or greater complexity might help a user understand the robot's mind more easily. The expressive patterns we investigated in these experiments were only a small sample of the many candidates, so a more organized investigation of light and sound is necessary to find optimal patterns. Still, our results show that the conventional methods are not sufficient and that MO shows promise.

Questionnaire results (Table 4) show that many participants felt that the robot “wanted” them to move the obstacle or moved it depending on the situation. The “wanted” response reflects anthropomorphization of the robot. The “depending on the situation” response may indicate that they identified with the robot's problem. As Reeves & Nass (Reeves & Nass, 1996) and Katagiri & Takeuchi (Katagiri & Takeuchi, 2000) have noted, participants exhibiting interpersonal behavior toward a machine often do not report the real reason, so questionnaire results are not conclusive. However, MO may encourage users to anthropomorphize robots.

Table 4.

Results of the questionnaire.

Table 4 also compares MO and the buzzer, which received different numbers of responses. Although fewer participants moved the obstacle after the buzzer than after MO, the buzzer drew more responses in the questionnaire. The buzzer might have offered highly ambiguous information in the experiments. The relationship between the degree of ambiguity and the form of expression is an important issue in designing robot behavior.

4.2. Coverage of MO

Results for MO were more promising than those for the other nonverbal methods, but how general are they? The results directly support MO for obstacle-removal tasks, which we consider a common subtask in human-robot cooperation. For other tasks that do not involve obstacle removal, we may need to design other types of MO-based informative movement. The applicable scope of MO is thus an issue for future study.

Morris's study of human behavior suggests the applicability of MO (Morris, 1977). Morris states that human beings sometimes move preliminarily before taking action, and these preliminary movements indicate what they will do. A person gripping the arms of a chair during a conversation may be trying to end the conversation without wishing to be rude about it. Such behavior is called an intention movement, and two movements with their own rhythm, such as left-and-right rhythmic movements on a pivot chair, are called alternating intention movements. Human beings easily grasp each other's intent in daily life. We can consider the back and forth movement to be a form of alternating intention movement meaning that the robot wants to move forward but cannot do so. Participants in our experiments may have interpreted the robot's mind by implicitly treating its movements as alternating intention movements. Although the LED and buzzer also expressed themselves rhythmically, they may have been less effective than MO because participants could not consider them intention movements: sounding and blinking were not preliminary movements related to the robot's previous action of moving forward.

If alternating intention movement works well in enabling a robot to inform a user about its mind, the robot will be able to express itself with other simple rhythmic movements, e.g., simple left and right movements to encourage the user to help it when it loses its way. Rhythmic movement is hardware-independent and easily implemented. We believe that alternating intention movement is an important element in MO applications, and we plan to study it and evaluate its effectiveness. A general implementation for expressing a robot's mind can be established through such investigations. The combination of nonverbal and verbal information is also important for robot expression, and we plan to study ways of combining different expressions to speed up interaction between users and robots.

4.3. Designing manual-free machines

Users need to read manuals to operate their machines or to use them more conveniently, but reading manuals imposes a workload on the user. It would be better for a user to discover a robot's functions naturally, without reading a manual. The results of our experiments show that motion-based expression enables a user to understand the robot's mind easily. We thus consider motion-based expression useful for making manual-free machines, and we are currently devising a procedure by which users discover a robot's functions naturally.

The procedure is composed of three steps: (1) expression of the robot's mind, (2) responsive action by the user, and (3) reaction by the robot. The robot's functions are “discovered” when the user causally links his/her actions with the robot's actions. Our experiments show that the motion-based approach satisfies steps (1) and (2) and helps humans discover such causal relations. A minimal sketch of this loop follows.
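
The sketch below casts the three steps as a simple control loop, again in C; every predicate and action in it is a hypothetical stub standing in for real sensing and actuation, not part of our implementation.

```c
/* Sketch of the three-step discovery procedure. All functions are
 * hypothetical stubs, not the real robot's sensing or actuation. */
#include <stdio.h>
#include <stdbool.h>

static bool obstacle_removed(void) { return true; /* stub: pretend the user helped */ }
static void express_mind(void)     { puts("step 1: move back and forth"); }
static void react_to_user(void)    { puts("step 3: sweep the freed area"); }

int main(void)
{
    /* step 1: express the robot's mind; step 2: wait for the user's
     * responsive action, repeating the expression until it comes */
    do {
        express_mind();
    } while (!obstacle_removed());

    /* step 3: react, so the user can causally link his/her action
     * to the robot's and thereby "discover" the function */
    react_to_user();
    return 0;
}
```

The third step matters because without a visible reaction the user has no feedback from which to infer the causal link between their help and the robot's behavior.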


5. Conclusion

We have proposed a motion-based approach for nonverbally informing a user of a robot's state of mind. Possible nonverbal approaches include movement, sound, and lights. The design we proposed, called motion overlap, enabled a robot to express human-like behavior in communicating with users.

We devised a general obstacle-removal task based on motion overlap for cooperation between a user and a robot, having the robot move back and forth to show the user that it wants an obstacle to be removed.

We conducted experiments to verify the effectiveness of motion overlap in the obstacle-removal task, comparing motion overlap to sound and lights. Experimental results showed that motion overlap encouraged most users to help the robot.

The motion-based approach can effectively express a robot's mind in an obstacle-removal task and contribute to the design of home robots. Our next steps for motion overlap are to combine different expressions to speed up interaction between users and robots, and to investigate other intention movements as extensions of motion overlap.

References

  1. Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind, MIT Press.
  2. Breazeal, C. (2002). Regulation and entrainment for human-robot interaction, International Journal of Experimental Robotics, 21(11-12), 883-902.
  3. Brooks, R.; Breazeal, C.; Marjanovic, M.; Scassellati, B. & Williamson, M. (1999). The Cog Project: Building a Humanoid Robot, In: Computation for Metaphors, Analogy and Agents, Lecture Notes in Computer Science, Vol. 1562, Nehaniv, C. L. (Ed.), 52-87, Springer.
  4. Burgard, W.; Cremers, A. B.; Fox, D.; Hahnel, D.; Lakemeyer, G.; Schulz, D.; Steiner, W. & Thrun, S. (1998). The Interactive Museum Tour-Guide Robot, Proceedings of the 15th National Conference on Artificial Intelligence, 11-18.
  5. Gibson, J. J. (1979). The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates.
  6. Hashimoto, S. et al. (2002). Humanoid Robots in Waseda University: Hadaly-2 and WABIAN, Autonomous Robots, 12(1), 25-38.
  7. Japanese Industrial Standards (2002). JIS S0013:2002 Guidelines for the elderly and people with disabilities: Auditory signals on consumer products.
  8. Katagiri, Y. & Takeuchi, Y. (2000). Reciprocity and its Cultural Dependency in Human-Computer Interaction, In: Affective Minds, Hatano, G.; Okada, N. & Tanabe, H. (Eds.), 209-214, Elsevier.
  9. Kobayashi, H.; Ichikawa, Y.; Senda, M. & Shiiba, T. (2003). Realization of Realistic and Rich Facial Expressions by Face Robot, Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1123-1128.
  10. Kobayashi, K. & Yamada, S. (2005). Human-Robot Cooperative Sweeping by Extending Commands Embedded in Actions, Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1827-1832.
  11. Komatsu, T. (2005). Can we assign attitudes to a computer based on its beep sounds?, Proceedings of the Affective Interactions: The Computer in the Affective Loop Workshop at Intelligent User Interface 2005, 35-37.
  12. Kozima, H. & Yano, H. (2001). A robot that learns to communicate with human caregivers, Proceedings of the International Workshop on Epigenetic Robotics, 47-52.
  13. Matsumaru, T.; Iwase, K.; Akiyama, K.; Kusada, T. & Ito, T. (2005). Mobile Robot with Eyeball Expression as the Preliminary-Announcement and Display of the Robot's Following Motion, Autonomous Robots, 18(2), 231-246.
  14. Miyashita, T. & Ishiguro, H. (2003). Human-like natural behavior generation based on involuntary motions for humanoid robots, Robotics and Autonomous Systems, 48(4), 203-212.
  15. Morris, D. (1977). Manwatching, Elsevier.
  16. Nakata, T.; Mori, T. & Sato, T. (2002). Analysis of Impression of Robot Bodily Expression, Journal of Robotics and Mechatronics, 14(1), 27-36.
  17. Nakata, T.; Sato, T. & Mori, T. (1998). Expression of Emotion and Intention by Robot Body Movement, Intelligent Autonomous Systems, 5, 352-359.
  18. Norman, D. A. (1988). The Psychology of Everyday Things, Basic Books.
  19. Okada, M.; Sakamoto, S. & Suzuki, N. (2000). Muu: Artificial creatures as an embodied interface, Proceedings of the 27th International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2000), Emerging Technologies, 91.
  20. Ono, T.; Imai, M. & Nakatsu, R. (2000). Reading a Robot's Mind: A Model of Utterance Understanding based on the Theory of Mind Mechanism, International Journal of Advanced Robotics, 14(4), 311-326.
  21. Reeves, B. & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, Cambridge University Press.
  22. Searle, J. (1980). Minds, brains, and programs, Behavioral and Brain Sciences, 3(3), 417-457.
  23. Shibata, T.; Wada, K. & Tanie, K. (2004). Subjective Evaluation of Seal Robot in Brunei, Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, 135-140.
  24. Suchman, L. A. (1987). Plans and Situated Actions: The Problem of Human-Machine Communication, Cambridge University Press.
  25. Suzuki, K. & Sasaki, M. (2001). The Task Constraints on Selection of Potential Units of Action: An Analysis of Microslips Observed in Everyday Tasks (in Japanese), Cognitive Studies, 8(2), 121-138.
  26. Wada, K.; Shibata, T.; Saito, T. & Tanie, K. (2004). Psychological and Social Effects in Long-Term Experiment of Robot Assisted Activity to Elderly People at a Health Service Facility for the Aged, Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3068-3073.
  27. Yamauchi, K. & Iwamiya, S. (2005). Functional Imagery and Onomatopoeic Representation of Auditory Signals using Frequency-Modulated Tones, Japanese Journal of Physiological Anthropology, 10(3), 115-122.
