Robot Arm-Child Interactions: A Novel Application Using Bio-Inspired Motion Control

Written By

Tanya N. Beran and Alejandro Ramirez-Serrano

Submitted: 14 October 2010 Published: 09 June 2011

DOI: 10.5772/16704

From the Edited Volume

Robot Arms

Edited by Satoru Goto


1. Introduction

Robot arms were originally designed in the 1960s for use in a wide variety of industrial and automation tasks such as fastening (e.g., welding and riveting), painting, grinding, assembly, palletizing, and object manipulation. In these tasks humans were not required to directly interact or cooperate with robot arms in any way. Robots, thus, did not require sophisticated means to perceive their environment as they interacted within it. As a result, machine-type motions (e.g., fast, abrupt, rigid) were suitable, with little consideration given to how these motions affect the environment or the users. The application fields of robot arms now extend well beyond their traditional industrial use. These fields include physical interactions with humans (e.g., robot toys) and even emotional support (e.g., medical and elderly services).

In this chapter we begin by presenting a novel motion control approach to robotic design that was inspired by studies of the animal world. This approach combines the robot's manipulability aspects with its motion (e.g., in the case of mobile robots such as humanoids or traditional mobile manipulators) to enable robots to physically interact with their users while adapting to changing conditions triggered by the user or the environment. These theoretical developments are then tested in robot-child interaction activities, which are the main focus of this chapter. Specifically, children's relationships (e.g., friendship) with a robotic arm are studied. The chapter concludes with speculation about the future use and application of robot arms, examining the need for improved human-robot interaction in social settings, including the physical and emotional interactions elicited by human and robot motions.


2. Bio-inspired control for robot arms: simple and effective

2.1. Background: human robot interactive control

Many different approaches to human-robot interaction have been developed within the last decade. An intelligent fusion scheme for human operator commands and an autonomous planner in a telerobotic system, based on event-based planning, was introduced in [Chuanfan, 1995]. This scheme integrates the human operator's control commands with action planning and control for autonomous operation. Basically, a human operator passes his/her commands via the telerobotic system to the robot, which, in turn, executes the desired tasks. In many cases both an extender and a material handling system are required during the implementation of tasks. To achieve proper control, force sensors have been used to measure the forces and moments provided by the human operator [e.g., Kim, 1998]. The sensed forces are then interpreted as the desired motion (translational and rotational) while the original compliant motion for the robot remains effective. To improve on previous works, video and voice messages have been employed for information sharing during human-robot cooperation [e.g., Wakita, 1998]. The video projector projects images of the robot's messages onto an appropriate surface, while the voice messages convey event information from the robot to the human. Fukuda et al. proposed a human-assisting manipulator teleoperated by electromyography [Fukuda, 2003]. The works described above illustrate only a few of the many different applications in the field of human-robot interaction. The control mechanism presented herein allows robots to cooperate with humans where the humans exert practically no effort during the cooperation task (i.e., minimal effort during command actions). Moreover, in contrast to previous work, where human-robot cooperation takes place in a well-structured engineered environment, the proposed mechanism allows cooperation in outdoor complex/rough terrains.

Human-robot arm manipulator coordination for load sharing

Several researchers have studied the load sharing problem in the dual-manipulator coordination paradigm [e.g., Kim, 1991]. Unfortunately, these results cannot be applied in the scope of human-arm-manipulator coordination. The reason is that in dual-manipulator coordination the motions of the manipulators are assumed to be known, whereas in human-arm-manipulator coordination the motion of the object may be unknown to the manipulator. A number of researchers have explored the coordination problem between a human arm and a robot manipulator using compliant motion, predictive control, and reflexive motion control [Al-Jarrah, 1997; Al-Jarrah and Zheng, 1997; Iqbal, 1999]. In such scenarios the human arm, by virtue of its intelligence, is assumed to lead the task while the manipulator is required to comply with the motion of the arm and support the object load. The intelligence of the arm helps perform complex functions such as task planning and obstacle avoidance, while the manipulator only performs the load sharing function. By coordinating the motions of the robotic arm with the user's arm, the uncertainty due to the environment can be reduced, while load sharing can help reduce the physical strain on the human.

Compliant control

The basic ability required for a robot to cooperate with a human is to respond to the human's intentions. Compliant motion control has been used to achieve both load sharing and trajectory tracking, where the robot's motion along a specific direction is called compliant motion. This simple but effective technique can be used to guide the robot as it attempts to eliminate the forces sensed (i.e., precise human-robot interaction). However, diverse problems might occur that require different control approaches.

Predictive control

The human-robot interaction problem has been addressed in the framework of model-based predictive control in numerous papers [e.g., Iqbal, 1999]. First, the transfer function from the manipulator position command to the wrist sensor's force output is defined. Then, the desired set point for the manipulator force is set equal to the gravitational force. Numerous results reported in the literature indicate that predictive control allows the manipulator to effectively take over the object load, so that the human's forces (effort) stay close to zero. Moreover, manipulators have been shown to be highly responsive to the human's movement, and a relatively small arm force can effectively initiate the manipulation task. However, difficulties still remain when sudden large forces are exerted on the robot to change the motion of the shared object (load), as the robot arm then acts as another automated load on the human.

Reflexive motion control

Al-Jarrah [1997] proposed reflexive motion control for solving the loading problem, and an extended reflexive control was shown to improve the speed of the manipulator in response to the motion of the human. The results show that the controller anticipated the movements of the human and applied the required corrections in advance. Reflexive control, thus, has been shown to assist the robot in comprehending the intentions of the human while they share a common load. Reflexive motion control is inspired by biological systems; however, it assumes that the human and the manipulator are both always in contact with an object. That is, there is an object that represents the only communication channel between the robot and the human. This is not always possible. Thus, mechanisms that allow human-robot cooperation without direct contact are needed.

In an attempt to enhance pure human-robot arm cooperation, human-mobile manipulator cooperation applications have been proposed [e.g., Jae, 2002; Yamanaka, 2002; Hirata, 2005; Hirata, 2007]. Here the workspace of the cooperation is increased at the expense of the added complexity introduced by the navigation aspects that must be considered. Accordingly, humans cooperate with autonomous mobile manipulators through intention recognition [e.g., Fernandez, 2001]. Herein, mobile manipulators refer to ground vehicles with robot arms (Fig. 1a), humanoid robots, and aerial vehicles equipped with grasping devices (Fig. 1b). In contrast to human-robot arm cooperation, the cooperation problem grows because the mobile manipulator is not only required to comply with the human's intentions but must simultaneously perceive the environment, avoid obstacles, coordinate the motion between the vehicle and the manipulator, and cope with terrain/environment irregularities and uncertainties, all while making cooperation decisions in real time, not only between human and robot but also between the mobile base and the robot arm.

This approach has been designated as active cooperation, and diverse institutions are conducting research on it. Some work extends traditional kinematic control schemes to master-slave mechanisms where the master role of the task is assigned to the actor (i.e., the human) having better perception capabilities. In this way, the mobile manipulator is not only required to comply with the force exerted by the human driving the task, but also contributes its own motion and effort. The robot must respond to the master's intention in order to cooperate actively in the task execution. The contribution of this approach is that the recognition process is applied to the frequency spectrum of the force-torque signal measured at the robot's gripper. Previous work on intention recognition is mostly based on monitoring the human's motion [Yamada, 1999] and has neglected the selection of the optimal robot motion that would create a true human-robot interaction, reducing the robot's purely subservient role and promoting human-robot friendship. Thus, robots will be required not only to help and collaborate, but to do so in a friendly and caring way. Accordingly, the following section presents a simple yet effective robot control approach to facilitate such human-robot interaction.

Figure 1.

Schematic diagrams of: a) Mobile manipulator, and b) Aerial robot with robotic arm.

2.2. Simple yet effective approach for friendly human-robot interaction

The objective of this section is to briefly present, without detailed mathematical analysis, a simple yet effective human-robot cooperation control mechanism capable of achieving the following two objectives: i) cooperation between a human and a robot arm in 3D space, and ii) cooperation between a human and a mobile manipulator moving on rough terrain. Here the focus is placed on the former, as it is directly related to the experiments discussed in Section 3.

Many solutions have been developed for human-robot interaction; however, current techniques work primarily when cooperation occurs in simple engineered environments, which prevents robots from working in cooperation with humans in real human settings (e.g., playgrounds). Although the control methodology presented in this section can be used with a number of mobile manipulators (e.g., ground and aerial) cooperating with humans, herein we focus on the cooperation between a human and a robot arm in 3D space. This application requires a fuzzy logic force-velocity feedback control to deal with unknown nonlinear terms that must be resolved during the cooperation. The fuzzy logic force control and the robot's manipulability are combined and applied in the control algorithm. The goal of using these combined techniques is to ensure that the design of the control system is stable, reliable, and applicable in a wide range of human cooperation areas. Herein, we especially consider those areas and settings where the complexities that humans and their environments impose on the system (robot arm) have a significant impact.

When interaction occurs, the dynamic coupling between the end-effector (i.e., robot arm) and the environment becomes important. In a motion and force control scenario, interaction affects the controlled variables, introducing errors upon which the controller must act. Even though it is usually possible to obtain a reasonably accurate dynamic model of the manipulator, the main difficulties arise from the dynamic coupling with the environment and, similarly, with the human. The latter is, in general, impossible to model due to its time variation. Under such conditions an otherwise stable manipulator system can be destabilized by the environment/human coupling. Although a number of robot interaction control approaches have been developed in the last three decades, compliant motion control can be singled out as one that performs well under the problems described above. This is because compliant motion control encompasses both indirect and direct force control. The main difference between these two approaches is that the former achieves force control via motion control without an explicit force feedback loop, while the latter can regulate the contact (cooperation) force to a desired value through an explicit force feedback control loop. Indirect force control includes compliance (or stiffness) and impedance control, which regulate the relation between position and force (related to the notion of impedance or admittance). The manipulator under impedance control is described by an equivalent mass-spring-damper system with the contact force as input. With a force sensor available, the force signal can be used in the control law to achieve a linear and decoupled impedance. Impedance control aims at realizing a suitable relation between the forces and the motion at the point of interaction between the robot and the environment; this relation describes the robot's velocity as a result of the imposed force(s). The actual motion and force are then the result of the imposed impedance, the reference signals, and the environment admittance.
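As an illustration of the mass-spring-damper relation just described, the short Python sketch below treats a single Cartesian axis of the end-effector as a virtual impedance driven by the measured contact force. The parameter values, the force profile, and the function name are illustrative assumptions, not the tuning used in this work.

```python
# Minimal 1-D impedance (admittance-style) sketch: the end-effector behaves as a
# virtual mass-spring-damper, M*a + D*v + K*x = F_ext, along one Cartesian axis.
# M, D, K and the force profile are illustrative values only.

def simulate_impedance(forces, dt=0.01, M=2.0, D=8.0, K=0.0):
    """Integrate the virtual dynamics and return the commanded positions."""
    x, v = 0.0, 0.0           # virtual displacement and velocity of the end-effector
    trajectory = []
    for f_ext in forces:      # measured interaction force at each control step [N]
        a = (f_ext - D * v - K * x) / M   # acceleration demanded by the impedance law
        v += a * dt
        x += v * dt
        trajectory.append(x)  # position command sent to the low-level controller
    return trajectory

# Example: a human pushes with 5 N for half a second, then releases.
forces = [5.0] * 50 + [0.0] * 50
print(simulate_impedance(forces)[-1])   # end-effector yields, then settles (K = 0)
```

With K = 0 the robot yields freely to the applied force (pure admittance), while a nonzero K would pull the end-effector back towards its reference position once the human releases it.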

A number of researchers have found that impedance control is superior to explicit force control methods (including hybrid control). However, impedance control sacrifices accurate force tracking, which is better achieved by explicit force control. It has also been shown that some particular formulations of hybrid control appear as special cases of impedance control; hence, impedance control is perceived as the appropriate method for further investigation related to human-robot arm cooperation. Hybrid motion/force control is suitable if a detailed model of the environment (e.g., its geometry) is available. As a result, hybrid motion/force control has been a widely adopted strategy, aimed at explicit position control in the unconstrained task direction and force control in the constrained task direction. However, a number of problems remain to be resolved due to the explicit force control's dependence on the geometry.

Control architecture of human robot arm cooperation

To address the problems found in current human-robot cooperation mechanisms, a new control approach is described herein. The approach uses commonly known techniques and combines them to maximize their advantages while reducing their deficiencies. Figure 2 shows the proposed human-mobile robot cooperation architecture, which is used in a simplified version for the human-robot arm cooperation described in Section 3.

In this architecture the human interacting with the robot arm provides the external forces and moments that the robot must follow. For this, the human and the robot arm are considered a coupled system carrying a physical or virtual object in cooperation. When a virtual object is considered, virtual forces are used to represent the desired trajectory and velocities that guide the robot in its motion. In this control method the human (or virtual force) is considered the master while the robot takes the role of the slave. To achieve cooperation, the force values, which can be measured via a force/torque (F/T) sensor, must be initialized before starting the cooperation. Subsequently, when the cooperation task starts, the measured forces will, in general, differ from the initialized values, and the robot will attempt to reduce these differences to zero. According to the force changes, the robot determines its motion (trajectory and velocity) to compensate for the changes in F/T values. Thus, the objective of the control approach is to eliminate (minimize) the human effort required to accomplish the task. When virtual forces are used instead of direct human contact with the robot, the need to re-compute the virtual forces is eliminated.
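A minimal sketch of this cooperation loop is given below, assuming a simple proportional (admittance-style) mapping from the F/T error to an end-effector velocity command. The functions `read_ft_sensor` and `send_velocity_command` are hypothetical placeholders for whatever API the robot exposes, and the gain values are illustrative.

```python
import numpy as np

# Sketch of the cooperation loop described above: the wrench measured at start-up is
# stored as a baseline, and any deviation from it is interpreted as the human's (or
# virtual) guidance and mapped to an end-effector velocity command.
# read_ft_sensor() and send_velocity_command() are hypothetical placeholders.

GAIN = np.diag([0.02] * 3 + [0.05] * 3)   # illustrative gains (m/s per N, rad/s per N*m)

def cooperation_loop(read_ft_sensor, send_velocity_command, steps=1000):
    baseline = read_ft_sensor()               # initialize the F/T values before the task
    for _ in range(steps):
        wrench = read_ft_sensor()             # 6-vector: [Fx, Fy, Fz, Tx, Ty, Tz]
        error = wrench - baseline             # deviation caused by the human / virtual force
        twist = GAIN @ error                  # move so as to drive the deviation to zero
        send_velocity_command(twist)          # [vx, vy, vz, wx, wy, wz] for the end-effector
```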

Figure 2.

Flow chart of the human-mobile robot cooperation.

Motion decomposition of the end-effector

The manipulability (w) of the robot arm captures the relation between the arm's singular configurations and the gripper's end point. Here, the manipulability function of the robot arm (Fig. 2) is used to decompose the end-effector's desired motion based on the value of w. The maximum w value of the arm must be known before it can be used. If the manipulability is small, the end point of the robot's gripper is close to a singular configuration of the manipulator; that is, the capability of the robot arm to react effectively to the task while cooperating is reduced. On the other hand, if the value of w is large, the end point of the robot is far from its singular configurations and the manipulator will find it easier to perform cooperating actions.
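The chapter does not give a formula for w; the sketch below assumes the standard Yoshikawa measure, w = sqrt(det(J J^T)), evaluated for a planar two-link arm with illustrative link lengths so that the near-singular case is easy to see.

```python
import numpy as np

# Yoshikawa's manipulability measure w = sqrt(det(J J^T)), shown for a planar
# two-link arm used here as a stand-in for the 5-DOF arm in the experiments.
# Link lengths and joint angles are illustrative.

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    """Position Jacobian of a planar two-link arm."""
    j11 = -l1 * np.sin(q1) - l2 * np.sin(q1 + q2)
    j12 = -l2 * np.sin(q1 + q2)
    j21 =  l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    j22 =  l2 * np.cos(q1 + q2)
    return np.array([[j11, j12], [j21, j22]])

def manipulability(J):
    return np.sqrt(np.linalg.det(J @ J.T))

print(manipulability(jacobian_2link(0.4, 1.2)))    # large w: far from a singularity
print(manipulability(jacobian_2link(0.4, 0.01)))   # w near 0: outstretched (singular) pose
```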

Thus the goal is to maintain the manipulability of the arm (and the mobility of the vehicle if working with a mobile manipulator) as large as possible, thus allowing the arm (and the vehicle when used) to effectively react to the unknown conditions of the environment and the cooperation tasks simultaneously. The fuzzy logic controller in Figure 2 is important in this case as the fuzzy rules can easily be tuned and used to distribute the robot arm’s motion based on the manipulability value and the geometry of the environment (e.g., as the robot arm overcomes obstacles).
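A minimal sketch of how such a rule base might split the commanded motion between the arm and the mobile base is shown below. It assumes a single triangular membership function for "low manipulability"; the membership shape and thresholds are assumptions for illustration, not the rules used in the experiments.

```python
# Sketch of distributing a desired end-effector motion between the arm and the mobile
# base as a function of manipulability w.  Thresholds are illustrative assumptions.

def membership_low(w, w_min=0.0, w_max=0.05):
    """Degree to which manipulability is 'low' (near a singularity), in [0, 1]."""
    if w <= w_min:
        return 1.0
    if w >= w_max:
        return 0.0
    return (w_max - w) / (w_max - w_min)

def distribute_motion(desired_velocity, w):
    """Low w -> rely on the base; high w -> let the arm carry the motion."""
    low = membership_low(w)
    base_share = low * desired_velocity
    arm_share = (1.0 - low) * desired_velocity
    return arm_share, base_share

print(distribute_motion(0.2, w=0.01))   # near-singular: most motion goes to the base
print(distribute_motion(0.2, w=0.2))    # well-conditioned: the arm takes the motion
```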

Control architecture of human-mobile manipulator cooperation

To conclude this section, the cooperation between a human and a mobile manipulator is described for completeness. The motion of a mobile base is subject to holonomic or nonholonomic kinematic constraints, which makes the control of mobile manipulators very challenging, especially when robots work in non-engineered environments. To achieve cooperation between the human and a mobile manipulator, a set of equations is required to represent the changes in the forces and torques on the robot's arm caused by the motion of the mobile manipulator on rough terrain. These equations can take different forms depending on the type of robot system used (e.g., its sensors); however, all forces and torques should be functions of the roll, pitch, and yaw angles of the vehicle as it moves. These formulations indicate what portion of the actual sensed force must be considered for effective cooperation (i.e., human intention) and which portion is to be neglected (i.e., reaction forces due to the terrain or disturbances encountered by the robot).
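As an example of the attitude dependence described above, the following sketch removes the gravity wrench of a known payload from the raw force reading by rotating the gravity vector into the sensor frame using the vehicle's roll, pitch, and yaw. The payload mass, the ZYX Euler convention, and the function names are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of separating the human's intention from attitude-induced loads:
# the gravity force of a known payload is expressed in the tilted sensor frame via
# the vehicle's roll/pitch/yaw and subtracted from the raw F/T reading.

def rotation_zyx(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx                      # body-to-world rotation

def human_intention_force(raw_force, roll, pitch, yaw, payload_mass=0.5):
    """Remove the attitude-dependent gravity component from the sensed force."""
    g_world = np.array([0.0, 0.0, -9.81 * payload_mass])
    R = rotation_zyx(roll, pitch, yaw)
    g_sensor = R.T @ g_world                 # gravity expressed in the tilted sensor frame
    return raw_force - g_sensor              # remainder is attributed to the human

print(human_intention_force(np.array([0.3, 0.0, -4.6]), roll=0.1, pitch=0.2, yaw=0.0))
```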

The control system of the manipulator for human-robot cooperation/interaction was designed considering the operational force applied by the human (operator) and the contact force between the manipulator and the mobile robot. The interaction force can be measured by an F/T sensor located between the final link of the manipulator and the end-effector (i.e., at the wrist of the manipulator). The operational force applied by the human operator denotes the desired force with which the end-effector should move while compensating for the changes in the forces. The final motion of the manipulator is determined by the desired motion produced by the human force controller. To allow the arm to be more reactive to unknown changes (due to the human and the environment), the manipulability of the arm must be continuously computed. As the arm approaches the limits of its workspace, the motion of the mobile manipulator relies more on the mobile base than on the arm. In this way, the arm is able to reposition itself in a state where it can move reactively. In the experiments described in the next section the mobile base was removed; this facilitated the tests while simultaneously enhancing the cooperation.

The above control mechanism (Fig. 2) not only enhances human-robot cooperation but also enhances their interaction. This is due to the fact that the robot reacts not only to the human but also to the environmental conditions. This control mechanism was implemented in the studies presented in the following section.


3. Children’s relationships with robots

We designed a series of experiments to explore children's cognitive, affective, and behavioral responses towards a robot arm performing a controlled task. The robot is controlled using a virtual force representing a hypothetical human-robot interaction set a priori. The goal of using such a control architecture was to enable the robot to appear dexterous and flexible while operating with smooth yet firm, biologically inspired motions. The objective was to enhance and facilitate human-robot cooperation/interaction with children.

3.1. Series of experiments

Experimental setup

A robot arm was presented as an exhibit at a large city Science Centre and was used in all the experimental studies. The exhibit was enclosed with a curtain within a 20 by 7 foot space (including the computer area). The robot arm was situated on a platform, with a chair placed 0.56 meters from its 3D workspace to ensure safety. Behind a short wall near the robot arm were one laptop used to run the commands to the robotic arm and a second laptop connected to a camera positioned towards the child, used to observe the children's helping and general behaviors.

All three studies employed a common method. A researcher randomly selected visitors and invited them to the exhibit. The study was explained, and consent was obtained. Each child was accompanied behind the curtain where the robot arm was set up, with parents waiting nearby. Upon entering the enclosed space, the child was seated in front of the robot arm. Once the researcher left, the child observed the robot arm conduct a block stacking task (using the bio-inspired motion control mechanisms described in Section 2). After stacking five blocks, it dropped the last block, as programmed.

Design and characteristics of the employed robot arm

The robot arm used in these experiments was a small industrial electric robot arm with 5 degrees of freedom, in which the pre-programmed bio-inspired control mechanisms were implemented. To aesthetically enhance the bio-inspired motions of the robot, the arm was "dressed" in wood, corrugated cardboard, craft foam, and metal to hide its wires and metal casing. It was given silver buttons for eyes, wooden cut-outs for ears, and the gripper served as the mouth. The face was mounted at the end of the arm, creating the appearance of the arm as a neck. Gender-neutral colors (yellow, black, and white) were used so that the robot would not suggest a specific gender. Overall, it was decorated to appear pleasant, without creating a likeness of an animal, person, or any familiar character, yet having smooth, natural-looking motions.

In addition to these physical characteristics, its behaviour was friendly and familiar to children. That is, it was programmed to pick up and stack small wooden blocks. Most children own and have played with blocks, and have created towers just as the robot arm did. This familiarity may have made the robot arm appear endearing and friendly to the children.

The third aspect of the scenario that was appealing to the children was that the robot was programmed to exhibit several social behaviours. Its face was in line with the child's face to give the appearance that it was looking at the child. Also, as it picked up each block with its gripper (decorated as the mouth), it raised its head to appear to be looking at the child before it positioned the block in the stack. Such movement was executed by the robot by following a virtual pulling force simulating how a human would guide another person when collaborating in moving objects. Then, as it lifted the third block, the mouth opened slightly to drop the block and then opened wider as if to express surprise at dropping it. It then looked at the child, and then turned towards the platform. In a sweeping motion it looked back and forth across the surface to find the block. After several seconds it looked up at the child again, as if to ask for help and express its inability to find the block.

Figure 3.

Five degree of freedom robot arm on platform with blocks.

Measures

The child's reactions to the robot arm were observed and recorded. Then the researcher returned to the child to conduct a semi-structured interview regarding perceptions of the robot arm. Between 60 and 184 boys and girls, aged 5 to 16 years (M = 8.18 years), participated in each study. We administered 15 open-ended questions: three asked for general feedback about the arm's appearance, six referred to the robot's animistic characteristics, and six asked about friendship. These data formed the basis of three separate areas of study. First, we explored whether children would offer assistance to a robot arm in a block stacking task. Second, we examined children's perceptions of whether the arm was capable of various thoughts, feelings, and behaviours. Finally, the children's impressions about friendship with the robot arm were investigated.

3.2. Background

Only a generation ago, children spent much of their leisure time playing outdoors. These days, one of the favourite leisure activities for children is using some form of advanced technological device (York, Vandercook, & Stave, 1990). Indeed, children spend 2-4 hours each day engaged in these forms of play (Media Awareness Network, 2005). Robotics is a rapidly advancing field of technology, and mass-produced robots will likely become as popular as the devices children enjoy today. With robotic toys such as Sony's AIBO on the market, and robots being developed with more advanced and sensitive responding capabilities, it is crucial to ask how children regard these devices. Would children act towards robots in a similar way as they do with humans? Would children prefer to play with a robot rather than with another child? Would they develop a bond with a robot? Would they think it was alive? Given that humans are likely to become more reliant upon robots in many aspects of daily life such as manufacturing, health care, and leisure, we must explore their psycho-social impact. The remainder of this chapter takes a glimpse at this potential impact on children by determining their reactions to a robot arm. Specifically, this section will examine whether children would offer assistance to a robot, perceive a robot as having humanistic qualities, and consider having a robot as a friend.

Study 1: Assistance to a Robot Arm

Helping, or prosocial, behaviours are actions intended to help or benefit another individual or group of individuals (Eisenberg & Mussen, 1989; Penner, Dovidio, Pilavin, & Schroeder, 2005). With no previous research to guide us, we tested several conditions in which we believed children would offer assistance (see Beran et al., 2011). The condition reported here elicited the most helping behaviors.

Once the child was seated in front of the robot arm, the researcher stated the following:

  • Are you enjoying the science centre? What’s your favorite part?

  • This is my robot (researcher touches platform near robot arm). What do you think?

  • My robot stacks blocks (researcher runs fingers along blocks).

  • I’ll be right back.

The researcher then exited and observed the child's behaviors on the laptop. A similar number of children, who did not hear this introduction, formed the comparison group. As soon as children in each group were alone with the robot arm, it began stacking blocks. A significantly larger number of children in the introduction group (n = 17, 53.1%) than in the comparison group (n = 9, 28.1%) helped the robot stack the blocks, χ²(1) = 4.15, p = 0.04. Thus, children are more likely to offer assistance to a robot when they hear a friendly introduction than when they receive no introduction. We interpret these results to suggest that the adult's positive statements about the robot modeled positive rapport regarding the robot arm, which may have created an expectation for the child to have a positive exchange with it. Having access to no other information about the robot, children may have relied on this cue to gauge how to act and feel in this novel experience. Interestingly, at the end of the experiment, the researcher noted anecdotally that many children were excited to share their experience with their parents, asked the parents to visit the robot, and explained that they felt proud to have helped the robot stack blocks. Other children told their parents that they did not help the robot because they believed that it was capable of finding the block itself. Overall, we speculate that the adult's display of positive regard towards the robot impacted children's offers of assistance towards it.
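For readers who wish to check the reported statistic, the snippet below reproduces it from the cell counts implied by the text (17 of 32 children helped after the introduction versus 9 of 32 without it), using SciPy's chi-square test of independence without continuity correction.

```python
from scipy.stats import chi2_contingency

# Contingency table implied by the reported group sizes and helping counts.
table = [[17, 32 - 17],   # introduction group: helped / did not help
         [9, 32 - 9]]     # comparison group:   helped / did not help

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 3), dof)   # approximately 4.15, 0.042, 1
```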

Study 2: Animistic impressions of a Robot Arm

Animism as a typical developmental stage in children has been studied for over 50 years, pioneered by Piaget (1930; 1951). It refers to the belief that inanimate objects are living. This belief, according to Piaget, occurs in children up to about 12 years of age. The disappearance of this belief system by this age has been supported by some studies (Bullock, 1985; Inagaki and Sugiyama, 1988) but not others (Golinkoff et al., 1984; Gelman and Gottfried, 1983). Nevertheless, the study of animism is relevant in exploring how children perceive an autonomous robot arm.

Animism can be divided and studied within several domains. These may include cognitive (thoughts), affective (feelings), and behavioural (actions) beliefs, known as schemata. In other words, people possess schemata, or awareness, that human beings have abilities for thinking, feeling, and acting. More specifically, thinking abilities may include memory and knowledge; feeling abilities include pleasant and unpleasant emotions; and behaviour abilities can refer to physical abilities and actions. Melson et al. (2009) provide some initial insights into several of these types of beliefs that children hold towards a robotic pet (Sony's AIBO). Also, Melson et al. (2005) found that many children believed that such a robot was capable of feelings of embarrassment and happiness, as well as recognition. Additional evidence of animism towards a robot was obtained by Bumby and Dautenhahn (1999), who reported that children may attribute human characteristics to robots in the stories they create.

A recent study presents surprising insights about animism. A team of researchers from the University of Washington's Institute for Learning and Brain Sciences [I-LABS, 2010] found that "babies can be tricked into believing robots are sentient". The researchers used a remote-controlled robot in a skit in which it acted in a friendly manner towards its human (i.e., adult) counterpart. When the baby was left alone with the robot, in 13 out of 16 cases the baby followed the robot's gaze, leading the researchers to conclude that the baby believed it was sentient. We extend these insightful findings of animism to children's cognitive, affective, and behavioural beliefs about a robot arm in the present study.

Responses to questions about the arm's appearance and animistic qualities were coded for this study. Two raters coded the data to determine reliability, with Cohen's kappa values ranging from 0.87 to 0.98 (M = 0.96), indicating very good inter-rater agreement. The majority of children identified the robot as male, and less than a quarter of the children identified it as female. One child stated the robot was neither, and about 10% did not know. The child's sex was not related to their response. About a third of the children assigned human names to the robot, such as 'Charlie'. About a third gave names that refer to machines, such as 'The Block Stacker'. A pet name, such as 'Spud', or a combined human-machine name, such as 'Mr. Robot', was rarely assigned. When asked about their general impressions of the robot, a large majority gave a positive description, such as cool/awesome, good/neat, nice, likeable, interesting, smart, realistic, super, fascinating, and funny. Two children reported that the robot had a frightening appearance, and three children thought it looked like a dog. Another 17 did not provide a valid response.
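As a side note on the agreement statistic, Cohen's kappa can be computed directly from two raters' code lists, for example with scikit-learn; the labels below are invented for demonstration and are not the study's actual codes.

```python
from sklearn.metrics import cohen_kappa_score

# Toy illustration of the inter-rater agreement statistic used above.
rater_1 = ["yes", "yes", "no", "yes", "dont_know", "no", "yes", "no", "yes", "yes"]
rater_2 = ["yes", "yes", "no", "yes", "dont_know", "no", "yes", "yes", "yes", "yes"]

print(cohen_kappa_score(rater_1, rater_2))   # 1.0 would indicate perfect agreement
```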

Regarding its cognitive characteristics, more than half of the children stated the robot had recognition memory because it could see their face, hair, and clothes, and because it was smart and had a brain (see Table 1). Other children provided a mechanical reason, stating that it had a memory chip, camera, or sensors, or may have been programmed. Over a third of the children stated the robot could not remember them, for various reasons shown in the table. Children's perceptions about the robot's cognitive abilities in regard to knowledge are also shown in Table 1. About half of the children thought the robot did not have this capability, for reasons such as not having a brain or not having interacted with them. Almost a third indicated that they believed the robot did know their feelings, for various reasons such as being able to see the child and being programmed with this ability.

Regarding affective characteristics, the majority of children thought that the robot liked them, as shown in Table 2. A few children believed that the robot did not like them. Similarly, the majority of children reported that they thought the robot would feel left out if they played with a friend. Over a quarter of the children stated the robot would not feel left out, but provided explanations that would seemingly protect the robot from harm.

Robot can remember you              | Robot knows your feelings
Yes: 97 (52.7%)                     | Yes: 54 (29.3%)
- Can see me: 37                    | - Can see me: 18
- Has memory chip, sensors: 15      | - Has memory chip, sensors: 5
- Smart, has brain: 3               | - Smart, has brain: 3
- If has a brain: 6                 | - Do not know why: 17
- If short duration: 5              | - Not coded: 11
- If programmed: 1                  |
- Do not know why: 24               |
- Not coded: 6                      |
No: 68 (37.0%)                      | No: 103 (56.0%)
- No brain, eyes, or memory: 30     | - No brain, eyes, or memory: 37
- Too many people to remember: 14   | - No interaction with me: 19
- Robot does not like me: 3         | - If not programmed: 8
- If no brain: 3                    | - Do not know why: 31
- If long duration: 2               | - Not coded: 8
- If not programmed: 2              |
- Do not know why: 11               |
- Not coded: 3                      |
Do not know: 19 (10.3%)             | Do not know: 27 (14.7%)

Table 1.

Number and percentage of children reporting cognitive features of robot (N = 184).

Robot likes you                      | Robot feels left out
Yes: 118 (64.0%)                     | Yes: 127 (69.0%)
- Looks/smiles at me, friendly: 38   | - No one to play with: 62
- I was nice/did something nice: 20  | - Hurt feelings: 36
- Did not hurt me: 13                | - I would include robot: 9
- It had positive intentions: 9      | - Not fair: 2
- Do not know why: 33                | - Do not know why: 11
- Not coded: 5                       | - Not coded: 7
No: 16 (8.7%)                        | No: 53 (28.8%)
- Ignored me/didn't let me help: 10  | - No thoughts/feelings: 29
- No thoughts/feelings: 4            | - Would include robot: 16
- Do not know why: 2                 | - Does not understand: 3
- Not coded: 0                       | - Do not know why: 5
                                     | - Not coded: 0
Do not know: 50 (27.3%)              | Do not know: 4 (2.2%)

Table 2.

Number and percentage of children reporting affective features of robot (N = 184).

In regard to its behavioral characteristics (Table 3), more than a third of the children stated the robot was able to see the blocks, with just over half of the children indicating that the robot could not see the blocks. A higher endorsement of the robot's ability to act is evident in the table. That is, a large majority stated the robot could play with them, and even provided a variety of ideas for play. Examples include block building and Lego®, catch with a ball, running games, and puzzles.

Robot sees blocks          | Robot plays with you*
Yes: 77 (41.8%)            | Yes: 154 (83.7%)
- Has eyes: 32             | - Construction: 103
- Stacking: 20             | - Ball game: 26
- Sensors, camera: 13      | - Running game: 12
- Trained: 5               | - Board game: 12
- Other: 0                 | - Other: 17
- Do not know why: 7       | - Do not know why: 5
- Not coded: 0             | - Not coded: 5
No: 94 (51.1%)             | No: 25 (13.6%)
- Eyes not real: 49        | - Physical limitation: 11
- Sensors, camera: 19      | - Other: 4
- Missed a block: 19       | - Do not know why: 6
- Guessed: 1               | - Not coded: 4
- Do not know why: 5       |
- Not coded: 1             |
Do not know: 13 (7.1%)     | Do not know: 5 (2.7%)

Table 3.

Number and percentage of children reporting behavioral features of robot (N = 184). Note: *Many children provided more than one response.

To further determine whether children considered the robot to be animate or inanimate, we analyzed the pronouns children used when talking about the robot arm. Almost a quarter of the children used the pronoun “it” in reference to the robot, another quarter stated “he”, and half used both.

In summary, children seemed to adopt many animistic beliefs about the robot. Half thought that it would remember them, and almost a third thought it knew how they were feeling. Affective characteristics were highly endorsed. More than half thought that the robot liked them and that it would feel rejected if not played with. In their behavioral descriptions, more than a third thought it could see the blocks, and more than half thought the robot could play with them. It is evident that children assigned many animistic abilities to the robot, but were more likely to ascribe affective than cognitive or behavioral ones. There was additional evidence of human qualities according to the names children gave it, their descriptions of it, and the pronouns they used to reference it in their responses. These animistic responses, moreover, were more apparent in younger than older children.

Although some responses suggest that children believed the robot held human characteristics because of programming and machine design, the majority of statements referred to human anatomy (e.g., eyes, facial features, and brain), emotions, and intentions. We explain these findings in four ways. First, the robot arm presented many social cues. That is, the eyes were at the same level as the children's, giving the impression of 'looking' at the child, and it returned to this position many times while scanning for the block. Children may have interpreted this movement as an expression of interest and closeness, which is one of the reactions to frequent eye contact among people (Kleinke, 1986; Marsh, 1988). Second, children may have projected their own feelings, thoughts, and experiences onto the robot arm, which Turkle (1995) has reported may occur with robots. This was particularly evident in the surprising finding that so many children believed that the robot would feel rejected and lonely if not included in play, and that the arm could engage in forms of play that it clearly could not (e.g., running). Third, children may have lacked knowledge of terms and principles to explain the robot's actions, thereby relying on terms that express human qualities such as 'remembering', 'knowing', and 'liking'. Fourth, because the arm moved autonomously, children may have developed the impression that it has intentions and goals, as is a typical reaction to any independently moving object (Gelman, 1990; Gelman and Gottfried, 1996; Poulin-Dubois and Shultz, 1990).

Study 3: Children’s impressions of friendship towards a Robot Arm

Friendships are undoubtedly important for childhood development, and, as such, set the stage for the development of communication skills, emotional regulation, and emotional understanding (Salkind, 2008). In this study, and given the animistic responses obtained in the previous study, we set out to determine the extent to which children would hold a sense of positive affiliation, social support, shared activities, and communication towards a robot; all of which exemplify friendship. In addition, we questioned whether children would share a secret with a robot, as this behavior may also signify friendship (Finkenauer, Engels & Meeus, 2002).

Robot can cheer you up           | Robot can be your friend
Yes: 145 (78.8%)                 | Yes: 158 (85.9%)
- Perform action for me: 61      | - Conditional: 31
- Perform action with me: 12     | - Being or doing things together: 30
- Cheerful appearance: 20        | - Helpful: 17
- Connects with me: 20           | - Knows me: 12
- Help me: 7                     | - Kind: 11
- Do not know why: 17            | - Friendly: 6
- Not coded: 8                   | - Likeable: 7
No: 27 (14.7%)                   | - Friend to robot: 4
- Limited abilities: 16          | - Do not know why: 28
- Does not like me: 1            | - Not coded: 12
- Do not know why: 8             | No: 19 (10.3%)
- Not coded: 2                   | - Limited mobility: 3
                                 | - Limited communication: 2
                                 | - No familiarity: 3
                                 | - No brain, feelings: 4
                                 | - Do not know why: 4
                                 | - Not coded: 3
Do not know: 12 (6.5%)           | Do not know: 7 (3.8%)

Table 4.

Number and percentage of children reporting positive affiliation (N = 184).


As shown in Table 4, more than three quarters of the children stated that the robot could improve their mood, with reasons varying from its actions to its appearance. Moreover, more than three quarters stated the robot could be their friend. Many reasons were given for this possibility. They included enjoying activities together, helping each other, kindness, likeability, and shared understanding.

According to Table 5, the majority of children stated they would talk to the robot, and many stated they would share secrets with it. Most children had difficulty explaining the reasons for their answers. Rather, they provided answers that described what they would talk about, such as what to play together. Interestingly, many children stated that they liked the robot and wanted to spend time becoming acquainted. This desire for a greater connection to the robot is also exemplified in their responses to sharing secrets. More than a third of the children stated they would tell the robot a secret. Some children (n = 24) stated that they thought it was wrong to tell secrets, suggesting that of those children who would generally tell secrets (n = 160), half of them (n = 84, 52.5%) would tell a robot. The most frequent reason given was that they believed the robot would not share it, seemingly because the robot arm could not speak. Many of them also stated, however, that they considered the robot arm to be friendly.

Talk to robot                    | Tell robot secrets
Yes: 124 (67.4%)                 | Yes: 84 (45.7%)
- I like the robot: 16           | - Robot will keep secret: 30
- To get to know each other: 6   | - Friendship with robot: 13
- Robot has mouth: 6             | - Positive response to secret: 7
- If robot could talk: 22        | - Other: 4
- Gave examples: 30              |
- Do not know why: 37            | - Do not know why: 22
- Not coded: 7                   | - Not coded: 8
No*: 53 (28.8%)                  | No: 92 (50.0%)
- Robot cannot talk: 20          | - Secrets are wrong: 24
- Robot cannot hear: 6           | - Robot has limitations: 18
- Not human: 5                   | - Robot not trustworthy: 24
- Looks unfriendly: 9            | - Robot is not alive: 9
- Do not know why: 11            | - Do not know why: 12
- Not coded: 4                   | - Not coded: 5
Do not know: 7 (3.8%)            | Do not know: 8 (4.3%)

Table 5.

Number and percentage of children reporting communication (N = 184). Note: *Some children provided more than one reason.

The majority of children responded affirmatively to questions about affiliation, receiving support, communicating, and sharing secrets, which typically characterize friendship. Regarding affiliation, almost two thirds of the children thought the robot liked them, and many explained that it was because the robot appeared friendly. Children also attributed positive intentions to the robot, likely because it was moving independently and engaging in a child friendly task. More than three quarters of the children did believe that the robot could offer them support. The action of stacking blocks was often explained as a means of providing this support, perhaps to distract and entertain the child. A large majority of children stated that they would play with the robot in a variety of games. It is not surprising that many of them suggested building with blocks, considering that they had just observed this activity. Finally, about two thirds of the children stated they would talk to the robot and more than a third stated that they would share secrets. Again, these results suggest that children are willing to develop a bond with the robot.

Many children in our study stated that they would not engage in these friendship behaviors with a robot and explained that the robot did not have the capabilities to do so. Reasons for these different perceptions of the robot have not been explored in the research but may plausibly include variation in children's knowledge of the mechanics of robots. In addition, a considerable proportion of children did not or could not provide an answer to the questions about friendship. It is possible that children were unable to differentiate human from robot characteristics, lacked sufficient understanding about the mechanics of robots, or were generally confused about the robot's abilities. Our use of terms in the interview, such as whether the robot would 'feel left out', describes human characteristics and may have misled children into responding positively. Clearly, the results raise many questions for research, not the least of which is whether children actually do develop a friendship with a robot. Over time, and as a result of interactions with robots, children may develop a new system or schema of understanding, and a subsequent vocabulary to articulate their sense of friendship with a robot, that is likely distinct from their friendships with children.


4. Implications for robot design

The fact that so many children ascribed life characteristics to the robot suggests that they have high expectations of robots and are willing to invite them into their world. This presents a challenge to robot designers to match these expectations, if the purpose of the robot is to garner and maintain interest from children. Children may be primed for these interactions. In fact, children may become frustrated when a robot does not respond to their initiations and may actually persevere at eliciting a response (Weiss et al., 2009). Therefore, the robot may not need to be programmed to respond in an identical fashion to each specific initiation, as humans certainly do not, and this variability may actually increase the child's engagement with a robot. This principle is well known as variable-ratio reinforcement in behaviourist learning theory (Skinner, 1969). Of course, children may become discouraged if the robot's response is erratic. Instead, we propose that a reliable, but not perfectly predictable, response to the child's behaviours will lead to the longest and most interesting interactions.

In addition, our studies suggest that children can develop a collaborative relationship with a robot when playing a game together. This gives some suggestion of the nature of the relationship children may enjoy with a robot: one that allows give and take (Xin & Sharlin, 2007). This may enhance a child’s sense of altruism and, hence, increase engagement with it. It is, thus, recommended that developers of such robots consider designing them to not only offer help, but be able to receive it.

4.1. Limitations

The studies and tests reported in this chapter have certain limitations that must be considered when interpreting the results. The tests and associated observations made during the study can be reproduced using a variety of robot arms, and even mobile manipulators, from which more detailed child-robot interaction studies can be conducted. Although the bio-inspired control mechanism used in this study worked well, tests using such control approaches should be performed on other robot types, including humanoids and mobile robots. Such a control architecture should also be tested in physical child-robot interaction to determine its suitability for enabling seamless active engagement between children (humans) and robots.

Robot arms have indeed changed from their original industrial and automotive applications of the 1960s. Our studies show that children are ready to accept them as social objects: sharing personal information with them, offering them support and assistance, and regarding them as human in various ways. In the near future, we expect that humans will not only frequently and directly interact with and rely on robot arms and robots of diverse types for daily activities, but perhaps also come to treat and regard them as human. Our studies cannot begin to address the numerous complex questions about the nature of the interactions people will have with robots. We offer a glimpse, however, of children's willingness to engage in them. Overall, the results are rather surprising given that the robot arm did not speak, performed only one task, and did not initiate physical interaction with the child. Are children merely responding to the robot arm as if it were a fancy puppet, projecting their imagination onto it in their responses? Perhaps, but regardless of the explanation, the children in these studies demonstrated overwhelmingly their predisposition towards active engagement with a robot arm driven by bio-inspired motion control.


Acknowledgments

We give special thanks to the TELUS World of Science - Calgary for collaborating with us. This research would not have been possible without their support.

References

  1. Abelson, R. P. (1981). Psychological status of a script concept. American Psychologist, 36, 715-729.
  2. Al-Jarrah, O. M., & Zheng, Y. F. (1997). Arm-manipulator coordination for load sharing using reflexive motion control. Proceedings of the IEEE International Conference on Robotics and Automation.
  3. Al-Jarrah, O. M., & Zheng, Y. F. (1997). Arm-manipulator coordination for load sharing using variable compliance control. Proceedings of the IEEE International Conference on Robotics and Automation.
  4. Baron, R., Earhard, B., & Ozier, M. (1995). Psychology: Canadian Edition. Allyn and Bacon, Scarborough.
  5. Batki, A., Baron-Cohen, S., Wheelwright, S., Connellan, J., & Ahluwalia, J. (2000). Is there an innate gaze module? Evidence from human neonates. Infant Behavior and Development, 23, 223-229.
  6. Beran, T. N., Ramirez-Serrano, A., Kuzyk, R., Nugent, S., & Fior, M. (2011). Would children help a robot in need? International Journal of Social Robotics, 3(1), 83-92.
  7. BNET (2008). Robot ideas to make life easier. Retrieved from http://findarticles.com/p/articles/mi_qn4176/is_20080928/ai_n28115686/
  8. Bullock, M. (1985). Animism in childhood thinking: A new look at an old question. Developmental Psychology, 21, 217-225.
  9. Bumby, K. E., & Dautenhahn, K. (1999). Investigating children's attitudes towards robots: A case study. Proceedings of the Third Cognitive Technology Conference, San Francisco, CA.
  10. Chuanfan, G., Tzyh-Jong, T., Ning, X., & Bejczy, A. K. (1995). Fusion of human and machine intelligence for telerobotic systems. Proceedings of the IEEE International Conference on Robotics and Automation.
  11. Deutsch, J. M. (1943). The development of children's concepts of causal relations. In Barker, R., Kounin, J., & Wright, H. (Eds.), Child behavior and development. McGraw-Hill, New York, 129-145.
  12. Eisenberg, N., & Mussen, P. H. (1989). The roots of prosocial behavior in children. Cambridge University Press, Cambridge.
  13. Farroni, T., Csibra, G., Simion, F., & Johnson, M. H. (2002). Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences, 99, 9602-9605.
  14. Fernandez, V., Balaguer, C., Blanco, D., & Salichs, M. A. (2001). Active human-mobile manipulator cooperation through intention recognition. IEEE International Conference on Robotics and Automation.
  15. Fukuda, O., Tsuji, T., Kaneko, M., & Otsuka, A. (2003). A human-assisting manipulator teleoperated by EMG signals and arm motions. IEEE Transactions on Robotics and Automation, 19(2), 210-222.
  16. Gelman, R., Spelke, E. S., & Meck, E. (1983). What preschoolers know about animate and inanimate objects. In Rogers, D., & Sloboda, J. (Eds.), The acquisition of symbolic skills. Plenum, New York, 297-327.
  17. Gelman, R. (1990). First principles organize attention to and learning about relevant data: Number and the animate-inanimate distinction. Cognitive Science, 14, 79-106.
  18. Gelman, S. A., & Gottfried, G. M. (1996). Children's causal explanations of animate and inanimate motion. Child Development, 67, 1970-1987.
  19. Golinkoff, R. M., Harding, C., Carlson, V., & Sexton, M. E. (1984). The infant's perception of causal events: The distinction between animate and inanimate objects. In Lipsitt, L. P., & Rovee-Collier, C. (Eds.), Advances in infancy research, Vol. 3. Ablex, Norwood, 145-151.
  20. Heerink, M., Kröse, B. J. A., Evers, V., & Wielinga, B. (2008). The influence of perceived adaptiveness of a social agent on acceptance by elderly users. Proceedings of the 6th International Conference of the International Society for Gerontechnology, 57-61.
  21. Hirata, Y., Kume, Y., Wang, Z.-D., & Kosuge, K. (2005). Handling of a single object by multiple mobile manipulators in cooperation with a human based on virtual 3-D caster dynamics. JSME International Journal, Series C: Mechanical Systems, Machine Elements and Manufacturing, 48(4), 613-619.
  22. Hirata, Y., Matsuda, Y., & Kosuge, K. (2007). Handling of an object in 3-D space by multiple mobile manipulators based on intentional force/moment applied by human. IEEE/ASME International Conference on Advanced Intelligent Mechatronics.
  23. Holland, V. M., & Rohrman, N. L. (1979). Distribution of the feature [+animate] in the lexicon of the child. Journal of Psycholinguistic Research, 8, 367-378.
  24. Huang, I., & Lee, H. W. (1945). Experimental analysis of child animism. Journal of Genetic Psychology, 66, 69-74.
  25. Hymel, S., Vaillancourt, T., McDougall, P., & Renshaw, P. D. (2002). Peer acceptance and rejection in childhood. In Smith, P. K., & Hart, C. H. (Eds.), Blackwell handbook of childhood social development. Blackwell Publishers, Oxford, 265-284.
  26. Iqbal, K., & Zheng, Y. F. (1999). Arm-manipulator coordination for load sharing using predictive control. Proceedings of the IEEE International Conference on Robotics and Automation.
  27. Inagaki, K., & Sugiyama, K. (1988). Attributing human characteristics: Developmental changes in over- and underattribution. Cognitive Development, 3, 55-70.
  28. I-LABS (2010). Institute for Learning & Brain Sciences. http://ilabs.washington.edu/research/research_themes.html
  29. Itakura, S., Ishida, H., Kanda, T., Shimada, Y., Ishiguro, H., & Lee, K. (2008). How to build an intentional android: Infants' imitation of a robot's goal-directed actions. Infancy, 13, 519-532.
  30. Jae, H. C. (2002). Interactive force control of an operator-mobile manipulator coordination system. Journal of Robotic Systems, 19(4), 189-198.
  31. Jipson, J., & Gelman, S. (2007). Robots and rodents: Children's inferences about living and nonliving kinds. Child Development, 78, 1675-1688.
  32. Kahn, P. H., Jr., Friedman, B., Perez-Granados, D. R., & Freier, N. G. (2006). Robotic pets in the lives of preschool children. Interaction Studies: Social Behavior and Communication in Biological and Artificial Systems, 7, 405-436.
  33. Keil, F. (1979). Semantic and conceptual development: An ontological perspective. Harvard University Press, Cambridge.
  34. Kim, K. I., & Zheng, Y. F. (1991). Unknown load distribution of two industrial robots. Proceedings of the IEEE International Conference on Robotics and Automation.
  35. Kim, K. I., & Zheng, Y. F. (1998). Human-robot coordination with rotational motion. Proceedings of the IEEE International Conference on Robotics and Automation.
  36. Kleinke, C. L. (1986). Gaze and eye contact: A research review. Psychological Bulletin, 100, 78-100.
  37. Liu, C., Conn, K., Sarkar, N., & Stone, W. (2008). Online affect detection and robot behavior adaptation for intervention of children with autism. IEEE Transactions on Robotics, 24, 883-896.
  38. Loring-Meier, S., & Halpern, D. F. (1999). Sex differences in visuospatial working memory: Components of cognitive processing. Psychonomic Bulletin and Review, 6, 464-471.
  39. Marett, R. R. (1914). The threshold of religion. Methuen and Co, London.
  40. Marsh, P. E. (1988). Eye to eye: How people interact. Salem House, Topsfield.
  41. Melson, G. F., Kahn, P. H., Beck, A. M., Friedman, B., Roberts, T., & Garrett, E. (2005). Robots as dogs? Children's interactions with the robotic dog AIBO and a live Australian shepherd. CHI 2005, ACM Press, 1649-1652.
  42. Melson, G. F., Kahn, P. H., Jr., Beck, A., Friedman, B., Roberts, T., Garrett, E., & Gill, B. T. (2009). Children's behavior toward and understanding of robotic and living dogs. Journal of Applied Developmental Psychology, 30, 92-102.
  43. Michotte, A. (1963). The perception of causality. Basic Books, New York.
  44. Muran, J. C. (1991). A reformulation of the ABC model in cognitive psychotherapies: Implications for assessment and treatment. Clinical Psychology Review, 11, 399-418.
  45. Nyborg, H. (1983). Spatial ability in men and women: Review and new theory. Advances in Behavior Research and Therapy, 5, 89-140.
  46. Oakes, M. E. (1947). Children's explanations of natural phenomena. Bureau of Publications, Teachers College, New York.
  47. Parker, J. G., & Asher, S. R. (1987). Peer relations and later personal adjustment: Are low-accepted children at risk? Psychological Bulletin, 102, 357-389.
  48. Parker, J. G., Rubin, K. H., Price, J. M., & DeRosier, M. E. (1995). Peer relationships, child development, and adjustment: A developmental psychopathology perspective. In Cicchetti, D., & Cohen, D. J. (Eds.), Developmental psychopathology: Risk, disorder, and adaptation, Vol. 2. Wiley, New York, 96-161.
  49. Pellegrini, A. D., & Smith, P. K. (1998). The development of play during childhood: Forms and possible functions. Child Psychology and Psychiatry Review, 3, 51-57.
  50. Penner, L. A., Dovidio, J. F., Pilavin, J. A., & Schroeder, D. A. (2005). Prosocial behavior: Multilevel perspectives. Annual Review of Psychology, 56, 365-392.
  51. Piaget, J. (1929). The child's conception of the world. Harcourt Brace, New York.
  52. Piaget, J. (1930). The child's conception of physical causality. Routledge and Kegan Paul, London.
  53. Piaget, J. (1951). The child's conception of the world. Routledge and Kegan Paul, London.
  54. Pitsch, K., & Koch, B. (2010). How infants perceive the toy robot Pleo: An exploratory case study on infant-robot-interaction. Second International Symposium on New Frontiers in Human-Robot-Interaction, 80-87.
  55. Polanyi, M. (1968). Logic and psychology. American Psychologist, 23, 27-43.
  56. Poulin-Dubois, D., & Schultz, T. F. (1990). The infant's concept of agency: The distinction between social and non-social objects. Journal of Genetic Psychology, 151, 77-90.
  57. Richards, L. (2005). Handling qualitative data: A practical guide. Sage, Thousand Oaks.
  58. Saarni, C. (1999). The development of emotional competence. Guilford, New York.
  59. Safran, J. D. (1990). Towards a refinement of cognitive therapy in light of interpersonal therapy: I. Theory. Clinical Psychology Review, 10, 87-105.
  60. Scaife, M., & van Duuren, M. (1996). "Because a robot's brain hasn't got a brain, it just controls itself": Children's attributions of brain related behaviour to intelligent artifacts. European Journal of Psychology of Education, 11, 365-376.
  61. Shonkoff, J. P., & Phillips, D. A. (Eds.) (2000). From neurons to neighborhoods: The science of early childhood development. Committee on Integrating the Science of Early Childhood Development. National Academy Press, Washington, DC.
  62. Siegel, L. (1993). Amazing new discovery: Piaget was wrong. Canadian Psychology, 34, 239-245.
  63. Simmons, A. J., & Goss, A. E. (1957). Animistic responses as a function of sentence contexts and instructions. Journal of Genetic Psychology, 91, 181-189.
  64. Skinner, B. F. (1969). Contingencies of reinforcement: A theoretical analysis. Prentice-Hall, Englewood Cliffs, NJ.
  65. Sodian, B., & Bullock, M. (2008). Scientific reasoning: Where are we now? Cognitive Development, 23, 431-434.
  66. Syrdal, D. S., Koay, K. L., Walters, M. L., & Dautenhahn, K. (2009). The boy-robot should bark! Children's impressions of agent migration into diverse embodiments. Proceedings of the New Frontiers in HRI Symposium.
  67. Tamis-LeMonda, C., Shannon, J. D., Cabrera, N. J., & Lamb, M. E. (2004). Fathers and mothers at play with their 2- and 3-year-olds: Contributions to language and cognitive development. Child Development, 75, 1806-1820.
  68. Nomura, T., Kanda, T., & Suzuki, T. (2006). Experimental investigation into influence of negative attitudes toward robots on human-robot interaction. AI & Society, 20(2), March 2006.
  69. Thomas, R. M. (2005). Comparing theories of child development (6th ed.). Thomson, Belmont.
  70. Tremoulet, P. D., & Feldman, J. (2000). Perception of animacy from the motion of a single object. Perception, 29, 943-951.
  71. Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. Simon and Schuster, New York.
  72. Tylor, E. B. (1871). Religion in primitive culture. Primitive culture, Vol. 2. John Murray, London. (Reprinted 1958, New York: Harper Torchbooks).
  73. Wakita, Y., Hirai, S., Hori, T., Takada, R., & Kakikura, M. (1998). Realization of safety in a coexistent robotic system by information sharing. Proceedings of the IEEE International Conference on Robotics and Automation.
  74. Weiss, A., Wurhofer, D., & Tscheligi, M. (2009). "I love this dog": Children's emotional attachment to the robotic dog AIBO. International Journal of Social Robotics, 1, 243-248.
  75. Weiss, A., Wurhofer, D., Bernhaupt, R., Beck, E., & Tscheligi, M. (2008). This is a flying shopping trolley: A case study of participatory design with children in a shopping context. Proceedings of the Tenth Anniversary Conference on Participatory Design, 254-257.
  76. Xin, M., & Sharlin, E. (2007). Playing games with robots: A method for evaluating human-robot interaction. In Sarkar, N. (Ed.), Human-robot interaction. Itech Education and Publishing, Vienna, Austria, 469-480.
  77. Yamada, Y., Umetani, Y., Daitoh, H., & Sakai, T. (1999). Construction of a human/robot coexistence system based on a model of human will-intention and desire. Proceedings of the IEEE International Conference on Robotics and Automation.
  78. Yamanaka, E., Murakami, T., & Ohnishi, K. (2002). Cooperative motion control by human and mobile manipulator. 7th International Workshop on Advanced Motion Control.
  79. York, J., Vandercook, T., & Stave, K. (1990). Determining favorite recreation/leisure activities. Teaching Exceptional Children, 22(4), 10-13.
