
Comparing an On-screen Agent with a Robotic Agent in an Everyday Interaction Style: How to Make Users React Toward an On-screen Agent as if They Are Reacting Toward a Robotic Agent

Written By

Takanori Komatsu

Published: 01 February 2010

DOI: 10.5772/8133

From the Edited Volume

Human-Robot Interaction

Edited by Daisuke Chugo


1. Introduction

Communication media terminals, such as PDAs, cell phones, and mobile PCs, are used globally and are a part of our daily lives; most people have access to such terminals. Various interactive agents, such as robotic agents (e.g., Gravot et al., 2006; Imai et al., 2003) and embodied conversational agents (ECAs) appearing on a computer display (e.g., Cassell et al., 2002; Prendinger and Ishizuka, 2004), are now being developed to assist us with our daily tasks. The technologies these interactive agents provide will soon be applied to these widespread media terminals.

Some researchers have started considering the effects that these different agents appearing on these media terminals have on users’ behaviours and impressions, especially through comparisons of on-screen agents appearing on a computer display with robotic agents (Shinozawa et al., 2004; Powers et al., 2007; Wainer et al., 2006). For example, Shinozawa et al. (2004) investigated the effects of a robot’s and an on-screen agent’s recommendations on human decision making. Powers et al. (2007) experimentally compared people’s responses in a health interview conducted by a computer agent, a collocated robot, and a remote robot projected on a screen. Wainer et al. (2006) measured task performance and participants’ impressions of a robot’s social abilities in a structured task based on the Towers of Hanoi puzzle.

Most of these studies reported that users felt much more comfortable with the robotic agents and regarded them as much more believable interactive partners than on-screen agents. In these studies, the participants were asked to directly face the agents during certain experimental tasks. However, this “face-to-face interaction” does not really represent a realistic interaction style with interactive agents in our daily lives. These interactive agents are, after all, developed to assist us with our daily tasks, so users can be assumed to be engaged in a task when they need help from an agent. It is thus expected that users do not look at the agents much but mainly focus on what they are doing. I called this interaction style “an everyday interaction style” and assumed that it is much more realistic than the “face-to-face interaction” on which most former studies focused.

I then experimentally investigated the effects of an on-screen agent and a robotic agent on users’ behaviours in this everyday interaction style. Moreover, I discussed the results of this investigation, focusing in particular on how to create comfortable interactions between users and various interactive agents appearing in different media.


2. Experiment 1: Comparison of an on-screen agent with a robotic agent

2.1. Experimental setting

I set up an everyday interaction style between users and agents in this experiment by introducing a dual task setting; that is, one task was explicitly assigned to the participants as a dummy task, while the experimenter actually observed other aspects of the participants’ behaviours or reactions, of which the participants were completely unaware.

First, the participants were told that the purpose of this experiment was to investigate the computer mouse trajectory while they played the puzzle video game “picross” by Nintendo Co., Ltd. (Fig. 1). The picross game was actually a dummy task for participants. While they were playing the picross game, an on-screen or robotic agent placed in front of and to the right of the participants talked to them and encouraged them to play another game with it. The actual purpose of this experiment was to observe the participants’ behavioural reactions when the agent talked to them.

Figure 1.

“picross” puzzle game by Nintendo Co., Ltd. (http://www.nintendo.co.jp/n02/shvc/bpij/try/index.html).

2.2. An on-screen agent and a robotic agent

The on-screen agent I used was the CG robot software “RoboStudio” by NEC Corporation (Fig. 2, left), and the robotic agent was the “PaPeRo” robot by NEC Corporation (Fig. 2, right). RoboStudio was developed as simulation software for PaPeRo, and it was equipped with the same controller used with PaPeRo. Therefore, both the on-screen and the robotic agents could express the same behaviours.

Figure 2.

On-screen agent “RoboStudio” by NEC Corporation (left; http://www.necst.co.jp/product/robot/index.html), robotic agent “PaPeRo” by NEC Corporation (right; http://www.nec.co.jp/products/robot/intro/index.html).

Figure 3.

Experimental Setting.

2.3. Participants

The participants were 20 Japanese university students (14 men and 6 women; 19-23 years old). Before the experiment, I ensured that they did not know about the PaPeRo robot and RoboStudio. They were randomly divided into the following two experimental groups.

  • Screen Group (10 participants): The on-screen agent appeared on a 17-inch flat display (the agent on the screen was about 15 cm tall) and talked to the participants. The agent’s voice was played through a loudspeaker placed beside the display.

  • Robotic Group (10 participants): The robotic agent (about 40 cm tall) talked to the participants.

Both the robotic agent and the computer display (on-screen agent) were placed in front of and to the right of the participants, and the distance between the participants and the agents was approximately 50 cm. The sound pressure of the on-screen and robotic agents’ voices at the participants’ head level was set at 50 dB (FAST, A-weighted). These agents’ voices were generated by the text-to-speech (TTS) function of RoboStudio. An overview of the experimental setting is depicted in Fig. 3, and pictures of both experimental groups are shown in Fig. 4.

Figure 4.

Experimental scene: participants in the Screen group (left) and in the Robotic group (right).

2.4. Procedure

First, the dummy purpose of the experiment was explained to the participants, and they were asked to play the picross game for about 20 minutes after receiving simple instructions on how to play it. The game was a web-based application, so the participants played it in a web browser on a laptop PC (Toshiba Dynabook CX1/212CE, 12.1-inch display).

The experimenter then gave the following instruction: “This experiment will be conducted by this agent. The agent will give you the starting and ending signals.” After these instructions, the experimenter exited the room, and the agent started talking to the participants: “Hello, my name is PaPeRo! Now, it is time to start the experiment. Please tell me when you are ready.” When the participants replied “ready” or “yes,” the agent said, “Please start playing the picross game,” and the experimental session started. The agent was located as described earlier so that the participants could not look at the agent and the picross game simultaneously.

One minute after starting the experiment, the agent said to them, “Umm…I’m getting bored... Would you play Shiritori (see Fig. 5 for the rules of this game) with me?” Shiritori is a Japanese word game in which you have to use the last syllable of the word spoken by your opponent as the first syllable of your next word. Most Japanese have a lot of experience playing this game, especially as children. If the participants acknowledged this invitation, i.e., said “OK” or “Yes,” the Shiritori game was started. If not, the agent repeated the invitation every minute until the session was terminated (20 minutes). After 20 minutes, the agent said, “20 minutes have passed. Please stop playing the game,” and the experiment was finished. Shiritori is an easy word game, so most participants could play it while continuing to focus on the picross game.
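
To make the chaining rule concrete, the following is a minimal sketch of the rule check in Python. It is purely illustrative (the actual game was mediated by a human wizard, as described below) and ignores refinements of the real game such as small-kana and voiced-kana variants.

```python
# Minimal sketch of the Shiritori chaining rule (illustrative only; the real
# game has refinements for small and voiced kana that are ignored here).

def valid_move(previous_word: str, next_word: str) -> bool:
    """The next word must begin with the final kana of the previous word."""
    return next_word[0] == previous_word[-1]

def losing_word(word: str) -> bool:
    """A player who says a word ending in the kana 'n' loses,
    because no Japanese word begins with that kana."""
    return word[-1] == "ん"

# Example chain: ringo (apple) -> gorira (gorilla) -> rappa (trumpet)
assert valid_move("りんご", "ごりら")
assert valid_move("ごりら", "らっぱ")
assert not losing_word("らっぱ")
assert losing_word("みかん")  # mikan ends in ん, so this move loses the game
```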

The agent’s behaviours (announcing the starting and ending signals and playing the Shiritori game) were remotely controlled by the experimenter in the next room in a Wizard-of-Oz (WOZ) manner.
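
As an illustration of such a WOZ setup, the sketch below shows a hypothetical control loop: the experimenter in the next room selects canned commands, and the agent utters them via TTS. The `send_to_agent` function is a placeholder of my own; the actual RoboStudio/PaPeRo command interface is not described in this chapter.

```python
# Hypothetical Wizard-of-Oz control loop. send_to_agent is a placeholder; the
# actual RoboStudio/PaPeRo command interface is not described in this chapter.

UTTERANCES = {
    "greet":  "Hello, my name is PaPeRo! Now, it is time to start the "
              "experiment. Please tell me when you are ready.",
    "start":  "Please start playing the picross game.",
    "invite": "Umm... I'm getting bored... Would you play Shiritori with me?",
    "end":    "20 minutes have passed. Please stop playing the game.",
}

def send_to_agent(utterance: str) -> None:
    """Placeholder for the command channel that triggers the agent's TTS."""
    print(f"[agent says] {utterance}")

if __name__ == "__main__":
    # The wizard types a command key; the agent speaks the mapped utterance.
    while (key := input("command (greet/start/invite/end/quit)> ").strip()) != "quit":
        if key in UTTERANCES:
            send_to_agent(UTTERANCES[key])
```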

Figure 5.

Rules of Shiritori, from Wikipedia (http://en.wikipedia.org/wiki/Shiritori).

Figure 6.

Rate of participants acknowledging or ignoring the agent’s invitation to play Shiritori.

Figure 7.

Duration of time participants spent looking at the picross game or the agent.

Figure 8.

Number of puzzles participants succeeded in solving.

2.5. Results

In this experiment, I assumed that the effects of the different agents on the participants’ impressions would be directly reflected in their behaviours. I therefore focused on the following behaviours: 1) whether the participants acknowledged the agent’s invitation and actually played the Shiritori game, 2) whether the participants looked at the agent or at the picross game during the session, and 3) how many puzzles the participants succeeded in solving.

  1. Whether the participants acknowledged the agent’s invitation and actually played the Shiritori game: In the robotic group, eight out of the 10 participants acknowledged the agent’s invitation and actually played the Shiritori game with the agent. However, in the screen group, only four out of the 10 participants did so (Fig. 6). Fisher’s exact test showed a marginally significant difference between these two experimental groups (p=0.067, p<.1 (+)).

  2. Where the participants looked (agent or picross game): In the robotic group, the participants’ average duration of looking at the robotic agent was 46.3 seconds; in the screen group, the average duration was 40.5 seconds (Fig. 7). These results revealed that most participants in both groups concentrated on the picross game during this 20-minute (1,200-second) session. An ANOVA showed no significant difference between the two groups on this measure (F(1,18)=0.17, n.s.; a sketch of how such a test can be run follows this list).

  3. How many puzzles the participants succeeded in solving: In the robotic group, the participants’ average number of puzzles solved was 2.8; in the screen group, it was 2.4 (Fig. 8). An ANOVA showed no significant difference between these two groups (F(1,18)=2.4, n.s.).
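
As a sketch of how these group comparisons can be run, the snippet below applies a one-way ANOVA with SciPy. The per-participant looking times are hypothetical stand-ins constructed around the reported group means, since the chapter reports only the means and F statistics; the resulting F therefore depends on the assumed spread and will not match the reported values.

```python
# One-way ANOVA sketch for the looking-duration comparison. The individual
# values below are hypothetical (only group means are reported in the chapter),
# so the printed F statistic will not match the chapter's F(1,18)=0.17.
from scipy.stats import f_oneway

robotic = [46.3 + d for d in (-20, -10, -5, 0, 0, 5, 5, 10, 10, 5)]   # assumed data
screen  = [40.5 + d for d in (-15, -10, -5, -5, 0, 0, 5, 5, 10, 15)]  # assumed data

f_stat, p_value = f_oneway(robotic, screen)
print(f"F(1,18) = {f_stat:.2f}, p = {p_value:.3f}")
```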

The results of this experiment are summarized as follows:

  • Screen group: The participants in the screen group showed the same achievement level on the picross game as the robotic group (Fig. 8). They did not look at the on-screen agent much during the experiment (Fig. 7), and they also did not acknowledge the invitation from the on-screen agent (Fig. 6).

  • Robotic group: The participants in the robotic group showed the same achievement level on the picross game as the screen group (Fig. 8). They also did not look at the robotic agent much (Fig. 7). However, most of them acknowledged the robotic agent’s invitation and actually played the Shiritori game (Fig. 6).


3. Summary of experiment 1

The results of this experiment showed that most participants acknowledged the robotic agent’s invitation to play the Shiritori game, while many neglected the on-screen agent’s invitation. The participants in both groups showed nearly the same attitude toward the picross game; that is, they did not look at the agent much but concentrated on the task, and they achieved nearly the same level on the picross game. The participants in the robotic group interacted with the robotic agent (playing the Shiritori game) without neglecting their given task (playing the picross game). Therefore, the robotic agent was also appropriate for interacting with users in an everyday interaction style, which is much closer to the interactions we encounter in our daily lives than a typical face-to-face interaction setting. These results are consistent with those of former studies focusing on face-to-face interaction, which argued that robotic agents are much more comfortable and believable interactive partners than on-screen agents.

Let me consider why the participants acknowledged the robotic agent’s invitation even though they were not really looking at it. I first reviewed Kidd and Breazeal’s (2004) investigation. They conducted an experiment comparing a physically present robot with a robot appearing on television as live TV output and found that the participants did not show different behaviours toward or impressions of these robots. They concluded that “it is not the presence of the robot that makes a difference, rather it is the fact that the robot is a real, physical thing, as opposed to a fictional animated character on screen, that leads people to respond differently.” Therefore, the participants’ beliefs or mental models about a “robot,” based on their expectations or stereotypes (such as “a robot would be nice to talk to”), would affect their attitudes toward interaction with a robotic agent. More specifically, these beliefs and mental models would lead the participants to assign a certain personality or character to the robotic agent and would then cause them to acknowledge the robotic agent’s invitation, even though they did not look at it much.

Based on the results of this experiment and the above discussion of Kidd & Breazeal’s argument, I would like to investigate the contributing factors that could make users react toward an on-screen agent as if they were reacting toward a robotic agent. Revealing such factors would enable on-screen agents to be utilized as interactive partners, especially when robotic agents cannot be used, e.g., in a mobile situation with PDAs or cell phones. If so, I could argue that “on-screen agents are also suitable as interactive agents.”

Specifically, I focused on the following two contributing factors: 1) whether users accepted an invitation from a robotic agent and 2) whether the on-screen agent was assigned an attractive personality or character for the users. I focused on the first factor because I assumed that participants who accepted an invitation from a robotic agent would also accept one from an on-screen agent with a similar appearance. I focused on the second factor based on Kidd & Breazeal’s (2004) argument about users’ beliefs and mental models; I assumed that assigning an attractive character to an on-screen agent would cause the users to construct beliefs and mental models about it and would lead them to behave as if they were behaving toward the robotic agent.

I then conducted a follow-up experiment to investigate the effects of these two factors on the participants’ behaviours, especially on whether the participants accepted or ignored the invitation of the on-screen agent.


4. Experiment 2: How to make users react toward an on-screen agent as if they are reacting toward a robotic agent

4.1. Setting

The setting of Experiment 2 was nearly the same as that of Experiment 1. However, the picross game was displayed on a 46-inch LCD instead of the 12.1-inch laptop display used in Experiment 1, to make the game more comfortable for the participants to play.

4.2. Participants

Forty Japanese undergraduate students participated (20-23 years old; 18 men and 22 women). These participants were randomly divided into the following four experimental groups. This experiment used a 2 (whether the users accepted the invitation of the robotic agent: with/without the robotic agent) x 2 (whether the on-screen agent was assigned an attractive character for the users: with/without the character) factorial design. Note that these participants had not participated in Experiment 1.

  • Group 1 (10 participants): This group was without the robotic agent and without the character setting. An on-screen agent appearing on a 17-inch flat display (the agent on the screen was about 15 cm tall) conducted the experiment for ten minutes.

  • Group 2 (10 participants): This group was also without the robotic agent but had the character setting. The same on-screen agent as in Group 1 conducted the experiment. However, just before the experiment, the participants were handed a memo about the on-screen agent. The memo stated that the agent had a very active character, like a child, and really liked talking with people.

  • Group 3 (10 participants): This group had the robotic agent but was without the character setting. A robotic agent (about 40 cm tall) conducted the experiment. After five minutes passed, the robotic agent made error sounds, and the experimenter immediately replaced the robotic agent with an on-screen agent. Then, the on-screen agent conducted the experiment for the remaining five minutes.

  • Group 4 (10 participants): This group had the robotic agent and the character setting. The experimental procedure was the same as that for Group 3. However, just before the experiment, the participants were handed Group 2’s memo, in this case describing the robotic agent.

4.3. Procedure

First, the dummy purpose of the experiment was explained to the participants, and they were asked to play the picross game for about 10 minutes after receiving simple instructions on how to play it. The game was a web-based application, so the participants played it in a web browser projected on a 46-inch LCD screen.

The experimenter then gave the following instruction: “This experiment will be conducted by this agent because the presence of a human experimenter would affect the results. The agent will give you the starting and ending signals.” The on-screen agent appeared on a 17-inch computer display for Groups 1 and 2, while the robotic agent was prepared for Groups 3 and 4. After these instructions were given, the participants in Groups 2 and 4 were handed the memo describing the agent’s character, and the experimenter exited the room. Then, the agent started talking to the participants: “Hello, my name is PaPeRo! Now, it is time to start the experiment. Please tell me when you are ready.” When the participants replied “ready” or “yes,” the agent said, “Please start playing the picross game,” and the experimental session started. The agent was placed in front of and to the left of the participants so that they could not look at the agent and the picross game simultaneously.

One minute after the experiment started, the agent said, “Umm…I’m getting bored…Would you play Shiritori with me?” If the participants accepted this invitation, i.e., said “OK” or “Yes,” the Shiritori game was started. If not, the agent repeated the invitation every two minutes until the session was terminated.

In the cases of Groups 3 and 4, after five minutes had passed, the robotic agent made error sounds and automatically shut down. The experimenter then immediately entered the room and said to the participants, “I’m really sorry about this problem… To tell you the truth, this robot has not been working very well in the last few days. I will arrange the experimental setting so that you can continue. Please wait a minute in the next room.” While the participants waited in the next room, where they could not see what the experimenter was doing, the experimenter hid the robotic agent and placed the 17-inch computer display for the on-screen agent in exactly the same position as in the setting for Groups 1 and 2. Afterward (about two minutes later), the participants were asked to return to the experimental room, and the experimenter said to them, “The emergency situation has been taken care of, so please continue the experiment for the remaining five minutes.” The experimenter did not mention that the robotic agent had been replaced with an on-screen agent. After the experimenter exited the room, the on-screen agent said, “Now, it is time to start the experiment. Please tell me when you are ready,” and the same experimental procedure was restarted. After 10 minutes in all four groups, the on-screen agent said, “The experiment is now finished. Please stop playing the game.” Figs. 9 and 10 show pictures taken during the actual experiment with the on-screen agent and the robotic agent conducting it, respectively. The experimental procedure for each group is depicted in Fig. 11, and an overview of the experimental setting is depicted in Fig. 12.

Figure 9.

Experiment with the on-screen agent (Groups 1 and 2, and the last five minutes in Groups 3 and 4).

Figure 10.

Experiment with the robotic agent (the first five minutes in Groups 3 and 4).

Figure 11.

Experimental procedure in each group. The participants’ behaviours during the part depicted by the white rectangle (i.e., when the on-screen agent was conducting the experiment) were analyzed.

Figure 12.

Experimental Setting.

4.4. Results

I focused on the following three types of participant behaviours to investigate how the two contributing factors make users react toward an on-screen agent as if they were reacting toward a robotic agent: 1) whether the participants accepted the on-screen agent’s invitation and actually played the Shiritori game, 2) how much time the participants spent looking at the agent or at the picross game during the experiment, and 3) how many puzzles the participants succeeded in solving. For these analyses, I focused on the participants’ behaviours while the on-screen agent was conducting the experiment; that is, the full 10 minutes in Groups 1 and 2, and the last five minutes in Groups 3 and 4.

Figure 13.

Number of participants who accepted the invitation of the on-screen agent in each group.

1) Whether the participants accepted the on-screen agent’s invitation and actually played the Shiritori game: First, I investigated how many participants accepted the on-screen agent’s invitation in each experimental group (Fig. 13). In Group 1 (without the robotic agent and without the assigned character), three out of the 10 participants accepted the agent’s invitation and actually played the Shiritori game. In Group 2 (without the robotic agent and with the assigned character), six out of the 10 participants accepted the invitation and played the Shiritori game. In Group 3 (with the robotic agent and without the assigned character), six participants accepted and played, and in Group 4 (with the robotic agent and with the assigned character), eight participants did so. Moreover, in Groups 3 and 4, all participants who had accepted the invitation of the robotic agent also accepted the invitation of the on-screen agent, while no participants who had neglected the invitation of the robotic agent accepted the invitation of the on-screen one.

Fisher’s exact probability test was used to elucidate the effects of the two contributing factors by comparing Group 1 with the other groups. The comparison of Group 1 with Group 2 showed no significant difference (one-sided test: p=.18>.1, n.s.), and the comparison of Group 1 with Group 3 also showed no significant difference (one-sided test: p=.18>.1, n.s.). However, the comparison of Group 1 with Group 4 showed a significant difference (one-sided test: p=.03<.05 (*)). The comparisons of Groups 2 and 3 with Group 4 showed no significant differences (one-sided test: p=.31>.1, n.s.).
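
These one-sided tests can be reproduced directly from the acceptance counts in Fig. 13; the following is a minimal check with SciPy, using the counts reported above (10 participants per group).

```python
# Reproducing the one-sided Fisher's exact tests from the acceptance counts
# in Fig. 13 (out of 10 participants per group).
from scipy.stats import fisher_exact

accepted = {"Group 1": 3, "Group 2": 6, "Group 3": 6, "Group 4": 8}
N = 10  # participants per group

def one_sided_p(group_a: str, group_b: str) -> float:
    """Tests whether group_a accepted the invitation more often than group_b."""
    table = [[accepted[group_a], N - accepted[group_a]],
             [accepted[group_b], N - accepted[group_b]]]
    _, p = fisher_exact(table, alternative="greater")
    return p

print(one_sided_p("Group 2", "Group 1"))  # ~0.18 (n.s.)
print(one_sided_p("Group 3", "Group 1"))  # ~0.18 (n.s.)
print(one_sided_p("Group 4", "Group 1"))  # ~0.03 (p < .05)
print(one_sided_p("Group 4", "Group 2"))  # ~0.31 (n.s.)
```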

Thus, this analysis clarified that the two contributing factors together made the participants react toward the on-screen agent as if they were reacting toward a robotic agent (Group 4), while either factor alone did not have such an effect (Groups 2 and 3).

Figure 14.

Duration rates of participants looking at the on-screen agent.

2) Amount of time the participants spent looking at the agent or at the picross game during the experiment: Next, I investigated the amount of time the participants looked at the on-screen agent. If the participants were not inclined to behave naturally toward the agent, they would look at it frequently. In Group 1, the participants looked at the on-screen agent for about 4.15 seconds on average during the 10-minute experiment; in Group 2, for about 7.31 seconds during the 10-minute experiment. In Group 3, they looked at it for about 1.39 seconds during the five-minute on-screen period, and in Group 4, for about 6.04 seconds over five minutes.

To elucidate the statistical differences between the four experimental groups, I used a duration rate, calculated by dividing the amount of time the participants looked at the on-screen agent by the total duration of the analyzed period (Fig. 14); the duration rate was 0.69% in Group 1, 1.21% in Group 2, 0.47% in Group 3, and 2.01% in Group 4. A one-way ANOVA (between-participants factor: experimental group) on the duration rates showed no significant differences between these four groups (F(3,36)=0.57, n.s.).
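
These duration rates follow directly from the looking times reported above; a quick check is given below (the small discrepancies from Fig. 14 reflect rounding of the reported seconds).

```python
# Duration rate = time spent looking at the on-screen agent / analyzed period.
looking = {"Group 1": (4.15, 600), "Group 2": (7.31, 600),
           "Group 3": (1.39, 300), "Group 4": (6.04, 300)}  # (seconds, period in s)

for group, (seconds, period) in looking.items():
    print(f"{group}: {100 * seconds / period:.2f}%")
# -> 0.69%, 1.22%, 0.46%, 2.01% (Fig. 14 reports 0.69%, 1.21%, 0.47%, 2.01%)
```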

Therefore, the results of this analysis showed that the two contributing factors did not affect the participants’ behaviours regarding how much time they spent looking at the on-screen agent or at the picross game. This means that these participants focused on playing the picross game.

3) How many puzzles the participants succeeded in solving: Finally, I investigated how many puzzles the participants solved in the picross game. If the participants could not behave naturally toward the agent, they would look at it frequently, and their performance on the dummy task would suffer. In Group 1, the participants solved on average 2.6 puzzles during the 10-minute experiment; in Group 2, they solved 2.6 puzzles during the 10-minute experiment. In Group 3, they solved 1.1 puzzles during the five-minute on-screen period, and in Group 4, they solved 1.5 puzzles over five minutes.

To elucidate the statistical differences between the four experimental groups, I used the solving rate per five minutes for each group (Fig. 15); the solving rate was 1.3 [puzzles/5 min] in Group 1, 1.3 in Group 2, 1.1 in Group 3, and 1.5 in Group 4. A one-way ANOVA (between-participants factor: experimental group) on the solving rates showed no significant differences between these four groups (F(3,36)=0.12, n.s.).
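
Likewise, the solving rates are simply the reported puzzle counts normalized to a five-minute window:

```python
# Solving rate = puzzles solved, normalized to a five-minute window.
solved = {"Group 1": (2.6, 10), "Group 2": (2.6, 10),
          "Group 3": (1.1, 5), "Group 4": (1.5, 5)}  # (puzzles, period in minutes)

for group, (puzzles, minutes) in solved.items():
    print(f"{group}: {5 * puzzles / minutes:.1f} puzzles / 5 min")
# -> 1.3, 1.3, 1.1, 1.5 (matching Fig. 15)
```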

Therefore, the results of this analysis showed that the two contributing factors did not affect the participants’ behaviours regarding the number of puzzles solved. This also means that the participants focused on playing the picross game.

Figure 15.

Numbers of puzzles participants solved.

The results of this experiment can be summarized as follows.

  • Eight participants in Group 4, six participants each in Groups 2 and 3, and three participants in Group 1 accepted the on-screen agent’s invitation and played the Shiritori game. An exact probability test showed a significant difference in the number of participants accepting the invitation only between Groups 1 and 4.

  • No significant difference between the four groups was evident in the amount of time spent looking at the on-screen agent or at the picross game. The maximum duration rate was 2.01% (Group 4), so most participants in all groups concentrated on the picross game during the experiment regardless of whether they played the Shiritori game. No significant differences between the four groups were evident in the number of solved puzzles, either.

These results indicate that the two contributing factors (whether the users accepted the invitation from the robotic agent and whether the on-screen agent was assigned an attractive character for the users) together played a significant role in making the participants react toward the on-screen agent as if they were reacting toward the robotic agent. However, either factor alone was not enough to play such a role. Moreover, these factors did not affect the participants’ performance or behaviours on the dummy task.


5. Discussion and conclusions

I investigated the contributing factors that could make users react toward an on-screen agent as if they were reacting toward a robotic agent. The factors I focused on were whether the users accepted the invitation of the robotic agent and whether the on-screen agent was assigned an attractive character for the users. The results showed that the participants reacted toward the on-screen agent as if they were reacting toward the robotic one only when they had first accepted the invitation of a robotic agent and the agent was assigned an attractive character; that is, these two factors together played a significant role in making the participants react toward the on-screen agent as if they were reacting toward the robotic agent.

The results of this experiment demonstrated that one of the two factors alone was not enough to make the participants accept the invitation; both factors were required. This means that the robotic agent is still required to create an intimate interaction with on-screen agents and that assigning an attractive character to an on-screen agent is not enough by itself. To overcome this issue, I am planning a follow-up experiment to investigate the effects of media size on the participants’ behaviours toward an on-screen agent, e.g., a comparison of an on-screen agent on a 46-inch LCD with an agent on a 5-inch PDA display. The reason is that, in Experiment 2, the on-screen agent’s height was only about 15 cm while the robotic one’s was about 38 cm. Goldstein et al. (2002) previously reported that “people are not polite towards small computers,” so I expect that this follow-up experiment will clarify how to make users react toward on-screen agents as if they were reacting toward robotic agents without utilizing a robotic agent.

Although the results of this experiment suggested that a robotic agent is required to make users behave toward an on-screen agent as if they were reacting toward a robotic agent, introducing such robotic agents into homes or offices is somewhat difficult due to their higher costs and uncertain safety. Therefore, users could first have the experience of interacting with a robotic agent at special events, such as robot exhibitions, then install an on-screen character on their own media terminals, such as mobile phones or personal computers, and continue to interact with this on-screen agent in their daily lives. This methodology could easily be applied to various situations where on-screen agents are required to assist users and robotic agents cannot be used, e.g., mobile situations with PDAs, cell phones, or car navigation systems.


Acknowledgments

Experiment 1 was mainly conducted by Ms. Yukari Abe, Future University-Hakodate, Japan, and Experiment 2 was mainly conducted by Ms. Nozomi Kuki, Shinshu University, Japan, as their undergraduate theses. The detailed results of each experiment were published in Komatsu & Abe (2008) and Komatsu & Kuki (2009a, 2009b), respectively. This work was supported by KAKENHI 1700118 (Grant-in-Aid for Young Scientists (B)) and Special Coordination Funds for Promoting Science and Technology from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.

References

  1. Cassell, J., Stocky, T., Bickmore, T., Gao, Y., Nakano, Y., Ryokai, K., Tversky, D., Vaucelle, C. & Vilhjalmsson, H. (2002). MACK: Media lab Autonomous Conversational Kiosk. Proceedings of Imagina02.
  2. Goldstein, M., Alsio, G. & Werdenhoff, J. (2002). The Media Equation Does Not Always Apply: People are not Polite Towards Small Computers. Personal and Ubiquitous Computing, 6, 87-96.
  3. Gravot, F., Haneda, A., Okada, K. & Inaba, M. (2006). Cooking for a humanoid robot, a task that needs symbolic and geometric reasoning. Proceedings of the 2006 IEEE International Conference on Robotics and Automation, 462-467.
  4. Imai, M., Ono, T. & Ishiguro, H. (2003). Robovie: Communication Technologies for a Social Robot. International Journal of Artificial Life and Robotics, 6, 73-77.
  5. Kidd, C. & Breazeal, C. (2004). Effect of a robot on user perceptions. Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3559-3564.
  6. Komatsu, T. & Abe, Y. (2008). Comparison an on-screen agent with a robotic agent in non-face-to-face interactions. Proceedings of the 8th International Conference on Intelligent Virtual Agents (IVA2008), 498-504.
  7. Komatsu, T. & Kuki, N. (2009a). Can Users React Toward an On-screen Agent as if They are Reacting Toward a Robotic Agent? Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI2009), 217-218.
  8. Komatsu, T. & Kuki, N. (2009b). Investigating the Contributing Factors to Make Users React Toward an On-screen Agent as if They are Reacting Toward a Robotic Agent. Proceedings of the 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN2009), to appear.
  9. Powers, A., Kiesler, S., Fussell, S. & Torrey, C. (2007). Comparing a computer agent with a humanoid robot. Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction, 145-152.
  10. Prendinger, H. & Ishizuka, M. (2004). Life-Like Characters. Springer.
  11. Shinozawa, K., Naya, F., Yamato, J. & Kogure, K. (2004). Differences in effects of robot and screen agent recommendations on human decision-making. International Journal of Human-Computer Studies, 62, 267-279.
  12. Wainer, J., Feil-Seifer, D. J., Sell, D. A. & Mataric, M. J. (2006). Embodiment and Human-Robot Interaction: A Task-Based Perspective. Proceedings of the 16th IEEE International Symposium on Robot and Human Interactive Communication, 872-877.

