Robot arms were originally designed in the 1960s for a wide variety of industrial and automation tasks such as fastening (e.g., welding and riveting), painting, grinding, assembly, palletizing, and object manipulation. In these tasks humans were not required to directly interact or cooperate with robot arms in any way. Robots, thus, did not require sophisticated means to perceive the environment they acted in. As a result, machine-type motions (e.g., fast, abrupt, rigid) were suitable, with little consideration of how these motions affect the environment or the users. The application fields of robot arms now extend well beyond their traditional industrial use. These fields include physical interaction with humans (e.g., robot toys) and even emotional support (e.g., medical and elderly services).
In this chapter we begin by presenting a novel motion control approach to robotic design that was inspired by studies of the animal world. This approach combines the robot’s manipulability aspects with its motion (e.g., in the case of mobile robots such as humanoids or traditional mobile manipulators) to enable robots to physically interact with their users while adapting to changing conditions triggered by the user or the environment. These theoretical developments are then tested in robot-child interaction activities, which are the main focus of this chapter. Specifically, children’s relationships (e.g., friendship) with a robotic arm are studied. The chapter concludes with speculation about future uses and applications of robot arms, examining the need for improved human-robot interaction in social settings, including the physical and emotional interaction produced by human and robot motions.
2. Bio-inspired control for robot arms: simple and effective
2.1. Background: human robot interactive control
Many different branches of human-robot interaction have been developed within the last decade. The intelligent fusion scheme for human operator commands and an autonomous planner in a telerobotic system is based on the event-based planning introduced in [Chuanfan, 1995]. This scheme integrates a human operator's control commands with action planning and control for autonomous operation. Basically, a human operator passes his/her commands via the telerobotic system to the robot, which, in turn, executes the desired tasks. In many cases both an extender and a material handling system are required during the implementation of tasks. To achieve proper control, force sensors have
been used to measure the forces and moments provided by the human operator [e.g., Kim, 1998]. The sensed forces are then interpreted as the desired motion (translational and rotational) while the original compliant motion of the robot remains effective. To improve on previous work, video and voice messages have been employed [e.g., Wikita, 1998] for information sharing during human-robot cooperation. The function of the video projector is to project images of the messages from the robot onto an appropriate place. The voice messages share event information from the robot with the human. Fukuda et al. proposed a human-assisting manipulator teleoperated by electromyography [Fukuda, 2003]. The works described above exemplify the many different applications in the field of human-robot interaction. The control mechanism presented herein allows robots to cooperate with humans while the humans employ practically no effort during the cooperation task (i.e., minimal effort during command actions). Moreover, in contrast to previous work, where human-robot cooperation takes place in a well-structured, engineered environment, the proposed mechanism allows cooperation in complex/rough outdoor terrains.
Human-robot arm manipulator coordination for load sharing
Several researchers have studied the load sharing problem in the dual-manipulator coordination paradigm [e.g., Kim, 1991]. Unfortunately, these results cannot be applied in the scope of human-arm-manipulator coordination. The reason is that in dual-manipulator coordination, the motions of the manipulators are assumed to be known. However, in human-arm-manipulator coordination, the motion of the object may be unknown to the manipulator. A number of researchers have explored the coordination problem between a human arm and a robot manipulator using
The basic ability a robot needs to cooperate with a human is to respond to the human’s intentions. Compliant motion control has been used to achieve both load sharing and trajectory tracking, where the robot’s motion along a specific direction is called compliant motion. This simple but effective technique can be used to guide the robot as it attempts to eliminate the forces it senses (i.e., precise human-robot interaction). However, diverse problems might occur that require different control approaches.
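As a minimal sketch of this idea (the gain and velocity limit below are illustrative choices, not values from the chapter), compliant motion control can be expressed as a Cartesian velocity command proportional to the force error measured at the wrist:

```python
import numpy as np

def compliant_velocity(f_sensed, f_desired, gain=0.02, v_max=0.1):
    """Map the force error at the wrist sensor to a Cartesian velocity
    command that moves the end-effector so as to drive the sensed
    force toward the desired value (i.e., "give way" to the human)."""
    error = np.asarray(f_sensed, dtype=float) - np.asarray(f_desired, dtype=float)
    v = gain * error
    # Saturate for safety: never exceed v_max along any axis.
    return np.clip(v, -v_max, v_max)

# A 5 N push along +x yields a velocity command along +x,
# so the arm yields in the direction the human pushes.
print(compliant_velocity([5.0, 0.0, 0.0], [0.0, 0.0, 0.0]))
```

With zero force error the command is zero, so the arm holds its pose until the human acts.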
The problem has been addressed in the framework of model-based predictive control for human-robot interaction in numerous papers [e.g., Iqbal, 1999]. First, the transfer function from the manipulator position command to the wrist sensor's force output is defined. Then, the desired set point for the manipulator force is set equal to the gravitational force. Numerous results reported in the literature indicate that predictive control allows the manipulator to effectively take over the object load, so that the human's forces (effort) stay close to zero. Moreover, manipulators have been shown to be highly responsive to the human's movement, and a relatively small arm force can effectively initiate the manipulation task. However, difficulties remain when sudden large forces are exerted on the robot to change the motion of the shared object (load), as the robot arm then acts as another automated load on the human.
Al-Jarrah proposed reflexive motion control for solving the loading problem, and an extended reflexive control was shown to improve the speed of the manipulator in response to the motion of the human. The results show that the controller anticipated the movements of the human and applied the required corrections in advance. Reflexive control, thus, has been shown to help the robot comprehend the intentions of the human while they share a common load. Reflexive motion control is inspired by biological systems; however, it assumes that the human and the manipulator are both always in contact with an object. That is, there is an object that represents the only communication channel between the robot and the human. This is not always possible. Thus, mechanisms that allow human-robot cooperation without direct contact are needed.
In an attempt to enhance pure human-robot arm cooperation, human-mobile manipulator cooperation applications have been proposed [e.g., Jae, 2002; Yamanaka, 2002; Hirata, 2005; Hirata, 2007]. Here the workspace of the cooperation is increased at the expense of the added complexity introduced by the navigation aspects that must be considered. Accordingly, humans cooperate with autonomous mobile manipulators through intention recognition [e.g., Fernandez, 2001]. Herein mobile manipulators refer to ground vehicles with robot arms (Fig. 1a), humanoid robots, and aerial vehicles with grasping devices (Fig. 1b). In contrast to human-robot arm cooperation, the cooperation problem here is harder: the mobile manipulator is not only required to comply with the human's intentions but must simultaneously perceive the environment, avoid obstacles, coordinate the motion between the vehicle and the manipulator, and cope with terrain/environment irregularities and uncertainties, all while making cooperation decisions in real time, not only between human and robot but also between the mobile base and the robot arm. This approach has been designated active cooperation, and diverse institutions are running research studies on it. Some work extends traditional basic kinematic control schemes to master-slave mechanisms, where the master role of the task is assigned to the actor (i.e., the human) having the better perception capabilities. In this way, the mobile manipulator is not only required to comply with the force exerted by the human driving the task, but also contributes its own motion and effort. The robot must respond to the master's intention to cooperate actively in the task execution. The contribution of this approach is that the recognition process is applied to the frequency spectrum of the force-torque signal measured at the robot's gripper.
Previous works on intention recognition are mostly based on monitoring the human’s motion [Yamada, 1999] and have neglected the selection of the optimal robot motion that would create a true human-robot interaction, reducing robot slavery and promoting human-robot friendship. Thus, robots will be required not only to help and collaborate, but to do so in a friendly and caring way. Accordingly, the following section presents a simple yet effective robot control approach to facilitate human-robot interaction.
2.2. Simple yet effective approach for friendly human-robot interaction
The objective of this section is to briefly present, without a detailed mathematical analysis, a simple yet effective human-robot cooperation control mechanism capable of achieving the following two objectives:
Many solutions have been developed for human-robot interaction; however, current techniques work primarily when cooperation occurs in simple engineered environments, which prevents robots from working in cooperation with humans in real human settings (e.g., playgrounds). Although the control methodology presented in this section can be used in a number of mobile manipulators (e.g., ground and aerial) cooperating with humans, herein we focus on the cooperation between a human and a robot arm in three dimensions. This application requires a fuzzy-logic force-velocity feedback control to deal with unknown nonlinear terms that need to be resolved during the cooperation. The fuzzy force logic control and the robot’s
A number of researchers have found that impedance control is superior to explicit force control methods (including hybrid control). However, impedance control sacrifices accurate force tracking, which is better achieved by explicit force control. It has also been shown that some particular formulations of hybrid control appear as special cases of impedance control; hence, impedance control is perceived as the appropriate method for further investigation of human-robot arm cooperation. Hybrid motion/force control is suitable when a detailed model of the environment (e.g., its geometry) is available. As a result, hybrid motion/force control has been a widely adopted strategy, aimed at explicit position control in the unconstrained task directions and force control in the constrained task directions. However, a number of problems remain unresolved because the explicit force control depends on that geometry.
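To make the distinction concrete, the target behaviour of impedance control can be written as M·a + B·v + K·(x − x_d) = f_ext: rather than tracking a force explicitly, the robot renders a virtual mass-spring-damper around the desired pose. A minimal one-dimensional simulation (all parameter values here are illustrative, not from the chapter):

```python
def impedance_step(x, v, f_ext, x_d=0.0, M=1.0, B=20.0, K=100.0, dt=0.001):
    """One semi-implicit Euler step of the target impedance
    M*a + B*v + K*(x - x_d) = f_ext, i.e. the end-effector behaves
    like a mass-spring-damper anchored at x_d."""
    a = (f_ext - B * v - K * (x - x_d)) / M
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# A constant 10 N push settles near the static deflection f/K = 0.1 m,
# illustrating compliance without any explicit force-tracking loop.
x, v = 0.0, 0.0
for _ in range(5000):
    x, v = impedance_step(x, v, f_ext=10.0)
print(round(x, 3))
```

The chosen M, B, K make the system critically damped; explicit force control would instead regulate the contact force directly, at the cost of the robust compliance shown here.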
Control architecture of human robot arm cooperation
To address the problems found in current human-robot cooperation mechanisms, a new control approach is described herein. The approach combines commonly known techniques to maximize their advantages while reducing their deficiencies. Figure 2 shows the proposed human-mobile robot cooperation architecture, which is used in its simplified version for the human-robot arm cooperation described in Section 3.
In this architecture the human interacting with the robot arm provides the external forces and moments that the robot must follow. For this, the human and the robot arm are considered a coupled system carrying a physical or virtual object in cooperation. When a virtual object is considered, virtual forces are used to represent the desired trajectory and velocities that guide the robot in its motion. In this control method the human (or virtual force) is considered the master while the robot takes the role of the slave. To achieve cooperation, the force values, which can be measured via a force/torque (F/T) sensor, must be initialized before starting the cooperation. Subsequently, when the cooperation task starts, the measured forces will, in general, differ from the initialized values, and the robot will attempt to reduce these differences to zero. According to the force changes, the robot determines its motion (trajectory and velocity) to compensate for the changes in the F/T values. Thus, the objective of the control approach is to eliminate (minimize) the human effort in the accomplishment of the task. When virtual forces are used instead of direct human contact with the robot, the need to re-compute the virtual forces is eliminated.
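The initialize-then-follow loop described above can be sketched as follows (the class, method names, and gain are our own illustrative choices, not the chapter's implementation):

```python
import numpy as np

class CooperationController:
    """Sketch of the master-slave force-following scheme: the wrist
    F/T reading is biased out before the task starts, and the robot
    then moves so as to drive any deviation from that baseline to zero."""

    def __init__(self, gain=0.05):
        self.gain = gain
        self.bias = None

    def initialize(self, ft_samples):
        # Average a few readings at rest to capture the sensor offset
        # plus any static load (e.g., the weight of the shared object).
        self.bias = np.mean(np.asarray(ft_samples, dtype=float), axis=0)

    def command(self, ft_measured):
        # Motion command proportional to the change from the baseline;
        # ft_measured may equally be a virtual force encoding a desired path.
        error = np.asarray(ft_measured, dtype=float) - self.bias
        return self.gain * error

ctrl = CooperationController()
ctrl.initialize([[0.1, 0.0, -9.8], [0.3, 0.0, -9.8]])  # at-rest readings
print(ctrl.command([4.2, 0.0, -9.8]))                  # human pushes along +x
```

Because only deviations from the baseline generate motion, the static weight of a shared object is ignored and the human's push alone drives the robot.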
Motion decomposition of the end-effector
Thus the goal is to maintain the manipulability of the arm (and the
Control architecture of human-mobile manipulator cooperation
To finalize this section the cooperation between a human and a mobile manipulator is described for completeness. The motion of a mobile base is subject to
The control system of the manipulator for human-robot cooperation/interaction was designed considering the operational force applied by the human (operator) and the contact force between the manipulator and the mobile robot. The interaction force can be measured by an F/T sensor located between the final link of the manipulator and the end-effector (i.e., at the wrist of the manipulator). The operational force applied by the human operator denotes the desired force for the end-effector to follow while compensating for the changes in the forces. The final motion of the manipulator is determined by the desired motion from the human-force controller. To allow the arm to be more reactive to unknown changes (due to the human and the environment), the manipulability of the arm must be continuously computed. As the arm approaches the limits of its workspace, the motion of the mobile manipulator relies more on the mobile base than on the arm. In this way, the arm is able to reposition itself into a state where it can move reactively. In the experiments reported in the next section the mobile base was removed; this facilitated the tests while still supporting the cooperation.
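The chapter does not specify which manipulability measure is computed, but a common choice is Yoshikawa's w = sqrt(det(J·Jᵀ)), which drops to zero as the arm approaches a singular (e.g., fully outstretched) configuration. A hypothetical planar two-link example (a stand-in for the 5-DOF arm, with made-up link lengths):

```python
import numpy as np

def manipulability(J):
    """Yoshikawa's measure w = sqrt(det(J J^T)); w -> 0 near a singularity."""
    d = np.linalg.det(J @ J.T)
    return np.sqrt(max(d, 0.0))  # clamp tiny negative round-off before sqrt

def planar_2link_jacobian(q1, q2, l1=0.4, l2=0.3):
    # Standard Jacobian of a planar two-link arm (illustrative only).
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Elbow bent: healthy manipulability. Arm stretched out (q2 = 0):
# w collapses, so a mobile base (if present) should take over the motion.
print(manipulability(planar_2link_jacobian(0.3, 1.2)))
print(manipulability(planar_2link_jacobian(0.3, 0.0)))
```

Monitoring w in each control cycle gives a simple threshold for shifting motion from the arm to the base, as described above.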
The above control mechanism (Fig. 2) not only enhances human-robot cooperation but also enhances their interaction. This is due to the fact that the robot reacts not only to the human but also to the environmental conditions. This control mechanism was implemented in the studies presented in the following section.
3. Children’s relationships with robots
We designed a series of experiments to explore children’s cognitive, affective, and behavioral responses towards a robot arm performing a controlled task. The robot was controlled using a virtual force representing a hypothetical human-robot interaction set a priori. The goal of using this control architecture was to enable the robot to appear dexterous and flexible while operating with smooth, yet firm, biological-type motions. The objective was to enhance and facilitate human-robot cooperation/interaction with children.
3.1. Series of experiments
A robot arm was presented as an exhibit in a large city Science Centre. This exhibit was used in all the experimental studies. The exhibit was enclosed with a curtain within a 20 by 7 foot space (including the computer area). The robot arm was situated on a platform, with a chair placed 0.56 meters from its 3D workspace to ensure safety. Behind a short wall near the robot arm were one laptop used to send commands to the robotic arm and a second laptop connected to a camera positioned towards the child to record observations of children’s helping and general behaviors.
All three studies employed a common method. A researcher randomly selected visitors to invite them to an exhibit. The study was explained, and consent was obtained. Each child was accompanied behind a curtain where the robot arm was set up, with parents waiting nearby. Upon entering an enclosed space, the child was seated in front of a robot arm. Once the researcher left, the child then observed the robot arm conduct a block stacking task (using the bio-inspired motion control mechanisms described in Section 2). After stacking five blocks, it dropped the last block, as programmed.
Design and characteristics of the employed robot arm
The robot arm used in these experiments was a small industrial electric robot arm with 5 degrees of freedom, on which the pre-programmed bio-inspired control mechanisms were implemented. To aesthetically enhance the bio-inspired motions of the robot, the arm was “dressed” in wood, corrugated cardboard, craft foam, and metal to hide its wires and metal casing. It was given silver buttons for eyes, wooden cut-outs for ears, and the gripper served as the mouth. The face was mounted at the end of the arm, creating the appearance of the arm as a neck. Gender-neutral colors (yellow, black, and white) were chosen so as not to suggest a specific gender. Overall, it was decorated to appear pleasant, without creating a likeness of an animal, person, or any familiar character, yet having smooth, natural-type motions.
In addition to these physical characteristics, its behaviour was friendly and familiar to children. That is, it was programmed to pick up and stack small wooden blocks. Most children own and have played with blocks, and have created towers just as the robot arm did. This familiarity may have made the robot arm appear endearing and friendly to the children.
The third aspect of the scenario that appealed to the children was that the robot was programmed to exhibit several social behaviours. Its face was in line with the child’s face to give the appearance that it was looking at the child. Also, as it picked up each block with its gripper (decorated as the mouth), it raised its head as if to look at the child before positioning the block on the stack. This movement was executed by the robot following a virtual pulling force simulating how a human would guide another person when collaborating in moving objects. Then, as it lifted the third block, the mouth opened slightly to drop the block and then opened wider as if to express surprise at dropping it. It then looked at the child and turned towards the platform. In a sweeping motion it looked back and forth across the surface to find the block. After several seconds it looked up at the child again, as if to ask for help and express its inability to find the block.
The child’s reactions to the robot arm were observed and recorded. Then the researcher returned to the child to conduct a semi-structured interview regarding perceptions of the robot arm. In total, 60 to 184 boys and girls between the ages of 5 and 16 years (
Only a generation ago, children spent much of their leisure time playing outdoors. These days, one of children’s favourite leisure activities is using some form of advanced technological device (York, Vandercook, & Stave, 1990). Indeed, children spend 2-4 hours each day engaged in these forms of play (Media Awareness Network, 2005). Robotics is a rapidly advancing field, and mass-produced robots will likely become as popular as the devices children enjoy today. With robotic toys such as Sony’s AIBO on the market, and robots being developed with more advanced and sensitive responding capabilities, it is crucial to ask how children regard these devices. Would children act towards robots in a similar way as with humans? Would children prefer to play with a robot rather than with another child? Would they develop a bond with a robot? Would they think it was alive? Given that humans are likely to become more reliant upon robots in many aspects of daily life, such as manufacturing, health care, and leisure, we must explore their psycho-social impact. The remainder of this chapter takes a glimpse at this potential impact on children by examining their reactions to a robot arm. Specifically, this section will examine whether children would offer assistance to a robot, perceive a robot as having humanistic qualities, and consider having a robot as a friend.
Study 1: Assistance to a Robot Arm
Helping, or prosocial, behaviours are actions intended to help or benefit another individual or group of individuals (Eisenberg & Mussen, 1989; Penner, Dovidio, Piliavin, & Schroeder, 2005). With no previous research to guide us, we tested several conditions in which we believed children would offer assistance (see Beran et al., 2011). The one reported here elicited the most helping behaviors.
Once the child was seated in front of the robot arm, the researcher stated the following:
Are you enjoying the science centre? What’s your favorite part?
This is my robot (researcher touches platform near robot arm). What do you think?
My robot stacks blocks (researcher runs fingers along blocks).
I’ll be right back.
The researcher then exited and observed the child’s behaviors on the laptop. A similar number of children, who did not hear this introduction, formed the comparison group. As soon as children in each group were alone with the robot arm, it began stacking blocks. A significantly larger number of children in the introduction group (
Study 2: Animistic impressions of a Robot Arm
Animism as a typical developmental stage in children has been studied for over 50 years, pioneered by Piaget (1930; 1951). It refers to the belief that inanimate objects are living. This belief, according to Piaget, occurs in children up to about 12 years of age. The disappearance of this belief system by this age has been supported by some studies (Bullock, 1985; Inagaki and Sugiyama, 1988) but not others (Golinkoff et al., 1984; Gelman and Gottfried, 1983). Nevertheless, the study of animism is relevant in exploring how children perceive an autonomous robot arm.
Animism can be divided and studied within several domains. These may include cognitive (thoughts), affective (feelings), and behavioural (actions) beliefs, known as schemata. In other words, people possess schemata, or awareness, that human beings have abilities for thinking, feeling, and acting. More specifically, thinking abilities may include memory and knowledge; feeling abilities include pleasant and unpleasant emotions; and behaviour abilities can refer to physical abilities and actions. Melson et al. (2009) provide some initial insights into several of these types of beliefs children hold towards a robotic pet (Sony’s AIBO). Also, Melson et al. (2005) found that many children believed that such a robot was capable of the feelings of embarrassment and happiness, as well as recognition. Additional evidence of animism towards a robot was obtained by Bumby and Dautenhahn (1999) who reported that children may include human characteristics about robots in stories they create.
A more recent study presents surprising insights about animism. A team of researchers from the University of Washington's Institute for Learning and Brain Sciences [I-LABS, 2010] found that “babies can be tricked into believing robots are sentient”. The researchers used a remote-controlled robot in a skit in which it acted in a friendly manner towards its human (i.e., adult) counterpart. When the baby was left alone with the robot, in 13 out of 16 cases the baby followed the robot's gaze, leading the researchers to conclude that the baby believed it was sentient. We extend these insightful findings of animism to children’s cognitive, affective, and behavioural beliefs about a robot arm in the present study.
Responses to questions about the arm’s appearance and animistic qualities were coded for this study. Two raters were used to determine the reliability of the coding, with Cohen’s Kappa values ranging from 0.87 to 0.98 and a mean of 0.96, indicating very good inter-rater agreement. The majority of children identified the robot as male, and less than a quarter identified it as female. One child stated the robot was neither, and about 10% did not know. The child’s sex was not related to their response. About a third of the children assigned human names to the robot, such as ‘Charlie’. About a third gave names that refer to machines, such as ‘The Block Stacker’. A pet name, such as ‘Spud’, or a combined human-machine type name, such as ‘Mr. Robot’, was rarely assigned. When asked about their general impressions of the robot, a large majority gave a positive description, such as cool/awesome, good/neat, nice, likeable, interesting, smart, realistic, super, fascinating, and funny. Two children reported that the robot had a frightening appearance, and three children thought it looked like a dog. Another 17 did not provide a valid response.
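For reference, Cohen's kappa corrects raw agreement for the agreement expected by chance. A small self-contained implementation on made-up ratings (the data below are hypothetical, not the study's):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (p_o - p_e) / (1 - p_e), observed vs chance-expected agreement."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two raters coding 10 hypothetical responses into 'yes'/'no';
# they disagree on a single item.
a = ['yes', 'yes', 'no', 'yes', 'no', 'no', 'yes', 'yes', 'no', 'yes']
b = ['yes', 'yes', 'no', 'yes', 'no', 'yes', 'yes', 'yes', 'no', 'yes']
print(round(cohens_kappa(a, b), 2))
```

Values near 1 indicate agreement well above chance, which is how kappas of 0.87-0.98 should be read.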
Regarding its cognitive characteristics, more than half of the children stated the robot had recognition memory, owing to its ability to see their face, hair, and clothes, and because the robot was smart and had a brain (see Table 1). Other children provided a mechanical reason, stating that it had a memory chip, camera, or sensors, or may have been programmed. Over a third of the children stated the robot could not remember them, for the various reasons shown in the table. Children’s perceptions of the robot’s cognitive abilities in regard to knowledge are also shown in Table 1. About half of the children thought the robot did not have this capability, for reasons such as not having a brain or any interaction with them. Almost a third indicated that they believed the robot did know their feelings, for various reasons such as seeing the child and being programmed with this ability.
Regarding affective characteristics, the majority of children thought that the robot liked them, as shown in Table 2. A few children believed that the robot did not like them. Similarly, the majority of children reported that they thought the robot would feel left out if they played with a friend. Over a quarter of the children stated the robot would not feel left out, but provided explanations that would seemingly protect the robot from harm.
Table 1. Number and percentage of children reporting cognitive features of the robot

| Robot can remember you | n (%) | Robot knows your feelings | n (%) |
|---|---|---|---|
| Yes | 97 (52.7%) | Yes | 54 (29.3%) |
| Can see me | 37 | Can see me | 18 |
| Has memory chip, sensors | 15 | Has memory chip, sensors | 5 |
| Smart, has brain | 3 | Smart, has brain | 3 |
| If has a brain | 6 | Do not know why | 17 |
| If short duration | 5 | Not coded | 11 |
| Do not know why | 24 | | |
| No | 68 (37.0%) | No | 103 (56.0%) |
| No brain, eyes, or memory | 30 | No brain, eyes, or memory | 37 |
| Too many people to remember | 14 | No interaction with me | 19 |
| Robot does not like me | 3 | If not programmed | 8 |
| If no brain | 3 | Do not know why | 31 |
| If long duration | 2 | Not coded | 8 |
| If not programmed | 2 | | |
| Do not know why | 11 | | |
| Do not know | 19 (10.3%) | Do not know | 27 (14.7%) |
Table 2. Number and percentage of children reporting affective features of the robot

| Robot likes you | n (%) | Robot feels left out | n (%) |
|---|---|---|---|
| Yes | 118 (64.0%) | Yes | 127 (69.0%) |
| Looks/smiles at me, friendly | 38 | No one to play with | 62 |
| I was nice/did something nice | 20 | Hurt feelings | 36 |
| Did not hurt me | 13 | I would include robot | 9 |
| It had positive intentions | 9 | Not fair | 2 |
| Do not know why | 33 | Do not know why | 11 |
| Not coded | 5 | Not coded | 7 |
| No | 16 (8.7%) | No | 53 (28.8%) |
| Ignored me/didn’t let me help | 10 | No thoughts/feelings | 29 |
| No thoughts/feelings | 4 | Would include robot | 16 |
| Do not know why | 2 | Does not understand | 3 |
| Not coded | 0 | Do not know why | 5 |
| Do not know | 50 (27.3%) | Do not know | 4 (2.2%) |
In regard to its behavioral characteristics (Table 3), more than a third of the children stated the robot was able to see the blocks, with just over half indicating that it could not. A higher endorsement of the robot’s ability to act is evident in the table: a large majority stated the robot could play with them, and even provided a variety of ideas for play. Examples include block building, Lego®, catch with a ball, running games, and puzzles.
Table 3. Number and percentage of children reporting behavioral features of the robot

| Robot sees blocks | n (%) | Robot plays with you* | n (%) |
|---|---|---|---|
| Yes | 77 (41.8%) | Yes | 154 (83.7%) |
| Sensors, camera | 13 | Running game | 12 |
| Do not know why | 7 | Do not know why | 5 |
| Not coded | 0 | Not coded | 5 |
| No | 94 (51.1%) | No | 25 (13.6%) |
| Eyes not real | 49 | Physical limitation | 11 |
| Missed a block | 19 | Do not know why | 6 |
| Do not know why | 5 | | |
| Do not know | 13 (7.1%) | Do not know | 5 (2.7%) |
To further determine whether children considered the robot to be animate or inanimate, we analyzed the pronouns children used when talking about the robot arm. Almost a quarter of the children used the pronoun “it” in reference to the robot, another quarter stated “he”, and half used both.
In summary, children seemed to adopt many animistic beliefs about the robot. Half thought that it would remember them, and almost a third thought it knew how they were feeling. Affective characteristics were highly endorsed. More than half thought that the robot liked them and that it would feel rejected if not played with. In their behavioral descriptions, more than a third thought it could see the blocks, and more than half thought the robot could play with them. It is evident that children assigned many animistic abilities to the robot, but were more likely to ascribe affective than cognitive or behavioral ones. There was additional evidence of human qualities according to the names children gave it, their descriptions of it, and the pronouns they used to reference it in their responses. These animistic responses, moreover, were more apparent in younger than older children.
Although some responses suggest that children believed the robot held human characteristics because of programming and machine design, the majority of statements referred to human anatomy (e.g., eyes, facial features, and brain), emotions, and intentions. We explain these findings in several ways. First, the robot arm presented many social cues. That is, its eyes were at the same level as the children’s, giving the impression of ‘looking’ at the child, and it returned to this position many times while scanning for the block. Children may have interpreted this movement as an expression of interest and closeness, which is one of the reactions to frequent eye contact among people (Kleinke, 1986; Marsh, 1988). Second, children may have projected their own feelings, thoughts, and experiences onto the robot arm, which Turkle (1995) has reported may occur with robots. This was particularly evident in the surprising finding that so many children believed that the robot would feel rejected and lonely if not included in play, and that the arm could engage in forms of play that it clearly could not (e.g., running). Third, children may have lacked knowledge of the terms and principles needed to explain the robot’s actions, thereby relying on terms that express human qualities such as ‘remembering’, ‘knowing’, and ‘liking’. Fourth, because the arm moved autonomously, children may have developed the impression that it has intentions and goals, as is a typical reaction to any independently moving object (Gelman, 1990; Gelman and Gottfried, 1996; Poulin-Dubois and Shultz, 1990).
Study 3: Children’s impressions of friendship towards a Robot Arm
Friendships are undoubtedly important for childhood development and, as such, set the stage for the development of communication skills, emotional regulation, and emotional understanding (Salkind, 2008). In this study, given the animistic responses obtained in the previous study, we set out to determine the extent to which children would hold a sense of positive affiliation, social support, shared activities, and communication towards a robot, all of which exemplify friendship. In addition, we questioned whether children would share a secret with a robot, as this behavior may also signify friendship (Finkenauer, Engels & Meeus, 2002).

Table 4. Number and percentage of children reporting friendship features of the robot

| Robot can cheer you up | n (%) | Robot can be your friend | n (%) |
|---|---|---|---|
| Yes | 145 (78.8%) | Yes | 158 (85.9%) |
| Perform action for me | 61 | Conditional | 31 |
| Perform action with me | 12 | Being or doing things together | 30 |
| Connects with me | 20 | Knows me | 12 |
| Do not know why | 17 | Friendly | 6 |
| | | Friend to robot | 4 |
| | | Do not know why | 28 |
| | | Not coded | 12 |
| No | 27 (14.7%) | No | 19 (10.3%) |
| Limited abilities | 16 | Do not know why | 4 |
| Does not like me | 1 | Not coded | 3 |
| No brain, feelings | 4 | | |
| Do not know why | 8 | | |
| Not coded | 2 | | |
| Do not know | 12 (6.5%) | Do not know | 7 (3.8%) |
As shown in Table 4, more than three quarters of the children stated that the robot could improve their mood, with reasons varying from its actions to its appearance. Moreover, more than three quarters stated the robot could be their friend. Many reasons were given for this possibility. They included enjoying activities together, helping each other, kindness, likeability, and shared understanding.
According to Table 5, the majority of children stated they would talk to the robot and share secrets with it. Most children had difficulty explaining their reasons for their answers. Rather, they provided answers that described what they would talk about, such as what to play together. Interestingly, many children stated that they liked the robot and wanted to spend time becoming acquainted. This desire for a greater connection to the robot is also exemplified in their responses to sharing secrets. More than a third of the children stated they would tell the robot a secret. Some children (n = 24) stated that they thought it was wrong to tell secrets, suggesting that of those children who would generally tell secrets (n = 160), half of them (n = 84, 52.50%) would tell a robot. The most frequent reason given was because they believed that the robot would not share it – seemingly because the robot arm could not speak. Many of them also stated, however, that they considered the robot arm to be friendly.
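The arithmetic behind the secret-sharing figures can be checked directly. Note that the total sample size of 184 used below is inferred from the reported percentages (e.g., 84 children being 45.7%) and is an assumption, not a figure stated in the text:

```python
# Sanity-check the secret-sharing arithmetic reported in the text.
# total_children = 184 is inferred from the table percentages (assumption).
total_children = 184
said_secrets_wrong = 24     # children who stated secrets are wrong to tell
would_tell_robot = 84       # children who would tell the robot a secret

# Children who would generally tell secrets at all:
would_generally_tell = total_children - said_secrets_wrong
share_of_tellers = would_tell_robot / would_generally_tell

print(would_generally_tell)        # 160
print(f"{share_of_tellers:.1%}")   # 52.5%
```

This reproduces the reported n = 160 and 52.5% figures exactly.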
Table 5. Number and percentage of children reporting that they would talk to the robot and tell it secrets

| Talk to robot | n (%) | Tell robot secrets | n (%) |
|---|---|---|---|
| **Yes** | 124 (67.4%) | **Yes** | 84 (45.7%) |
| I like the robot | 16 | Robot will keep secret | 30 |
| To get to know each other | 6 | Friendship with robot | 13 |
| Robot has mouth | 6 | Positive response to secret | 7 |
| If robot could talk | 22 | Other | 4 |
| Do not know why | 37 | Do not know why | 22 |
| Not coded | 7 | Not coded | 8 |
| **No*** | 53 (28.8%) | **No** | 92 (50.0%) |
| Robot cannot talk | 20 | Secrets are wrong | 24 |
| Robot cannot hear | 6 | Robot has limitations | 18 |
| Not human | 5 | Robot not trustworthy | 24 |
| Looks unfriendly | 9 | Robot is not alive | 9 |
| Do not know why | 11 | Do not know why | 12 |
| Not coded | 4 | Not coded | 5 |
| **Do not know** | 7 (3.8%) | **Do not know** | 8 (4.3%) |
The majority of children responded affirmatively to questions about affiliation, receiving support, communicating, and sharing secrets, which typically characterize friendship. Regarding affiliation, almost two thirds of the children thought the robot liked them, and many explained that it was because the robot appeared friendly. Children also attributed positive intentions to the robot, likely because it was moving independently and engaging in a child-friendly task. More than three quarters of the children believed that the robot could offer them support. The action of stacking blocks was often explained as a means of providing this support, perhaps by distracting and entertaining the child. A large majority of children stated that they would play with the robot in a variety of games. It is not surprising that many of them suggested building with blocks, considering that they had just observed this activity. Finally, about two thirds of the children stated they would talk to the robot, and more than a third stated that they would share secrets. Again, these results suggest that children are willing to develop a bond with the robot.
Many children in our study stated that they would not engage in these friendship behaviors with a robot and explained that the robot lacked the capabilities to do so. Reasons for these different perceptions of the robot have not been explored in the research, but may plausibly include variation in children’s knowledge of the mechanics of robots. In addition, a considerable proportion of children did not or could not answer the questions about friendship. It is possible that these children were unable to differentiate human from robot characteristics, lacked sufficient understanding of the mechanics of robots, or were generally confused about the robot’s abilities. Some terms we used in the interview, such as whether the robot would ‘feel left out’, describe human characteristics and may have misled children into responding positively. Clearly, the results raise many questions for research, not the least of which is whether children actually believe the attributions they report.
4. Implications for robot design
The fact that so many children ascribed life characteristics to the robot suggests that they have high expectations of robots and are willing to invite them into their world. This presents a challenge to robot designers to meet these expectations, if the purpose of the robot is to garner and maintain children’s interest. Children may be primed for these interactions. In fact, children may become frustrated when a robot does not respond to their initiations and may actually persevere at eliciting a response (Weiss et al., 2009). Therefore, the robot need not be programmed to respond in an identical fashion to every initiation, as humans certainly do not, and such variability may actually increase the child’s engagement with a robot. This principle is well known in behaviourist learning theory as variable ratio reinforcement (Skinner, 1969). Of course, children may become discouraged if the robot’s response is erratic. Instead, we propose that a high, but not perfectly predictable, response to the child’s behaviours will lead to the longest and most interesting interactions.
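As an illustrative sketch only (not part of the original studies), the proposed “high but not perfectly predictable” response policy could be prototyped as a simple probabilistic rule. The `respond_to_initiation` function and the 0.85 response probability are assumptions chosen for illustration, not values from the chapter:

```python
import random

def respond_to_initiation(rng: random.Random,
                          response_probability: float = 0.85) -> bool:
    """Decide whether the robot responds to a child's initiation.

    A high but imperfect response probability loosely mimics an
    intermittent (variable-ratio-like) reinforcement schedule: most
    initiations get a response, but not all of them.
    """
    return rng.random() < response_probability

# Simulate 1000 child initiations and inspect the long-run response rate.
rng = random.Random(42)  # seeded for reproducibility
responses = [respond_to_initiation(rng) for _ in range(1000)]
rate = sum(responses) / len(responses)
print(f"responded to {rate:.0%} of initiations")
```

In a real system, the non-responses would ideally be replaced by alternative behaviours (e.g., a pause or a different motion) rather than complete inaction, so the robot never appears simply broken.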
In addition, our studies suggest that children can develop a collaborative relationship with a robot when playing a game together. This gives some indication of the nature of the relationship children may enjoy with a robot: one that allows give and take (Xin & Sharlin, 2007). This may enhance a child’s sense of altruism and, hence, increase engagement with the robot. It is, thus, recommended that developers of such robots consider designing them not only to offer help, but also to be able to receive it.
The studies and tests reported in this chapter have certain limitations that one must consider when interpreting the results. The tests and associated observations can be reproduced using a variety of robot arms, and even extended to mobile manipulators, from which more detailed child-robot interaction studies can be made. Although the bio-inspired control mechanism used in this study worked well, such control approaches should also be tested on other robot types, including humanoids and mobile robots. Such a control architecture should also be tested in physical child-robot interaction to determine its suitability for enabling seamless active engagement between children (and humans generally) and robots.
Robot arms have indeed changed from their original industrial and automation applications of the 1960s. Our studies show that children are ready to accept them as social objects: sharing personal information with them, offering and receiving support and assistance, and regarding them as human in various ways. In the near future, we expect that humans will not only frequently and directly interact with and rely on robot arms and robots of diverse types for daily activities, but may even come to regard them as human-like. Our studies cannot begin to address the numerous complex questions about the nature of the interactions people will have with robots; we offer a glimpse, however, of children’s willingness to engage in them. Overall, the results are rather surprising given that the robot arm did not speak, performed only one task, and did not initiate physical interaction with the child. Are children merely responding to the robot arm as if it were a fancy puppet, projecting their imagination in their responses? Perhaps, but regardless of the explanation, the children in these studies overwhelmingly demonstrated their predisposition towards active engagement with a robot arm under bio-inspired motion control.
We give special thanks to the TELUS World of Science - Calgary for collaborating with us. This research would not have been possible without their support.