Open access peer-reviewed chapter

Socially Believable Robots

Written By

Momina Moetesum and Imran Siddiqi

Submitted: 14 March 2017 Reviewed: 29 September 2017 Published: 04 July 2018

DOI: 10.5772/intechopen.71375

From the Edited Volume

Human-Robot Interaction - Theory and Application

Edited by Gholamreza Anbarjafari and Sergio Escalera


Abstract

Long-term companionship, emotional attachment and realistic interaction with robots have always been portrayed as the ultimate sign of technological advancement in sci-fi literature and the entertainment industry. With the advent of artificial intelligence, we have indeed stepped into an era of socially believable robots or humanoids. Affective computing has enabled the deployment of emotional or social robots to a certain level in social settings like informatics, customer services and health care. Nevertheless, the social believability of a robot is communicated through its physical embodiment and natural expressiveness. With each passing year, innovations in chemical and mechanical engineering have facilitated life-like robot embodiments; however, much work is still required to develop a "social intelligence" in a robot in order to maintain the illusion of dealing with a real human being. This chapter is a collection of research studies on the modeling of complex autonomous systems. It further sheds light on how different social settings require different levels of social intelligence and on the implications of integrating a socially and emotionally believable machine into a society driven by behaviors and actions.

Keywords

  • social robots
  • human computer interaction
  • social intelligence
  • cognitive systems
  • anthropomorphism
  • humanoids
  • roboethics

1. Introduction

Robots have been an important part of industrial setups around the globe for many years now. For many industrial operations, robots have completely or partially replaced human operators, and their involvement is likely to grow manifold in the years to come. Nevertheless, in most cases these robots operate in a controlled work environment and their interaction with humans remains fairly limited. The recent advancements in hardware (actuators, sensors, etc.) and software technologies (computer vision, artificial intelligence, etc.), however, have paved the way for the involvement of robots in our daily lives, both at the workplace and at home. Such robots, contrary to industrial robots, naturally require more interaction with humans and have to be designed accordingly. The term "social robot" was coined jointly by researchers in artificial intelligence and robotics in the early 90s and refers to robots that engage in social interactions with humans. Studies [1, 2] define social robots as autonomous agents designed to interact with humans, and possibly other robots, while exhibiting the social behaviors expected of their assigned role. Such interactions, in addition to the primary expected tasks, involve communication, recognition of individuals, familiarization with the environment and adaptation to the variety of situations encountered. In order to enable them to interact socially, these robots need to be equipped with what is generally termed "social intelligence". Lazzeri et al. [3] argue that this social intelligence enables robots not only to converse with humans (and other robots) but also to interpret emotional signals and react accordingly, hence producing the impression of a real human being. In addition to the conventional role of serving humans, other typical roles include providing guidance or assistance at homes, offices or public places, providing companionship and care services, and serving as pets. The expectations from a social robot naturally vary as a function of the role it takes.

Breazeal [4] argues that humans tend to anthropomorphize robots for interaction and identifies four classes of social robots. These include "socially evocative", "social interface", "socially receptive" and "sociable" robots. Socially evocative robots, for instance toy robots, are designed to engage in entertaining interactive sessions with humans. Social interface robots, for instance guides at airports, provide human-like conversational interaction that, in addition to speech, also involves body language and facial expressions. Socially receptive robots learn and enhance their social intelligence through interactions, while sociable robots proactively participate in interaction to satisfy their own social aims. In addition to these four categories, Fong et al. [1] have identified three further categories, namely "socially situated", "socially embedded" and "socially intelligent" robots. They further describe a new breed of social robots called "socially interactive robots" that share some common attributes with these categories but have additional distinctive characteristics of their own. Based on the different categorizations suggested in the literature, we can identify a taxonomy of social robots, as illustrated in Figure 1. This chapter is dedicated to the design considerations and applications of socially believable robots, along with a discussion of the associated challenges and future prospects. Case studies and examples of social robots in entertainment, health and education will also be discussed.

Figure 1.

Taxonomy of social robots in the literature.


2. Design considerations

Every passing decade is forcing robot designers and engineers to push their skills to the limit. As robots integrate further into our lives, high expectations are posing new challenges in their creation. All robots, whether industrial, field or social, must address a number of design issues. However, the requirements of social believability and social intelligence increase the complexity of designing a socially interactive robot. One of the foremost conditions of believability in a social robot is its near-realistic embodiment, to which users can relate without reluctance or discomfort. Secondly, a socially interactive robot is expected to be expressive in terms of rich dialog, emotions and gestures. In addition to expressiveness, a social robot is required to manifest social behavior, which includes perception of its surroundings and the ability to plan and execute appropriate goal-oriented actions. Variance in social situations and expected performance outcomes makes it difficult to generalize a design strategy for a social robot. Nevertheless, designers broadly divide design approaches into two categories, i.e. bio-inspired and function-inspired [1]. Bio-inspired design strategies draw on a multitude of disciplines like anthropology, cognitive science, psychology and sociology. On the other hand, function-inspired approaches focus on task-oriented designs. In either case, realizing the gap between available technology and performance expectations is of prime significance.

2.1. Embodiment and expressiveness

According to Fong et al. [1], a robot's visual appearance is the first projection of its believability. People establish performance expectations based on a robot's outlook. In a way, physical embodiment influences human robot interactions, as people interact with humanoids differently from non-humanoids. Beyond expectations, a robot's morphology plays a vital role in its usability, acceptability and expressiveness. Therefore, a robot's morphology should correlate with its intended functionality. For instance, robots that are intended to carry out human-like tasks must be equipped accordingly; visual human likeness may not be of much importance in such cases, as with ATLAS and similar humanoids (Figure 2a). On the other hand, those designed for interaction purposes must be more human-like, with distinct facial expressions (e.g. Sophia, Figure 2b) or emotional speech capabilities (e.g. Pepper, Figure 2c).

Figure 2.

Advanced humanoids: (a) ATLAS, (b) Sophia, (c) Pepper.

With the aim of achieving a naturalistic embodiment, designers take inspiration from nature itself. The morphological design of natural-looking social robots can be attributed to anthropomorphism. Based on their area of application, morphological inspirations for a robot's outlook can also be taken from zoomorphism (e.g. pets or creatures), caricature (e.g. animations or fictional characters) and functional expectations (e.g. assistive or service robots). Nevertheless, most social robots are intended to work with humans; thus the general notion is to give them a human-like appearance. Therefore, we will focus on anthropomorphism.

2.1.1. Anthropomorphism

Anthropomorphism is the attribution of human characteristics to something non-human. According to Fink [5], anthropomorphism can be introduced in all three aspects of a robot's design, i.e. morphology, behavior and interaction.

2.1.1.1. Humanoid head

The most effective anthropomorphic feature of a robot is its head. To project human likeness and better expressiveness, the simplest kind of humanoid robot heads are equipped with RGB LEDs, cameras, microphones and speakers. These components are mostly cost-effective and provide a variety of expressions for more naturalistic human robot interaction. DARwIn-OP, HOAP-3, Pepper, NAO, UXA-90, Roboy and ASIMO are some examples of humanoids with faces equipped with LEDs and speakers. Perception of emotions by humans while interacting with these robots is at times difficult due to the limited modes of expression, which give an unrealistic or mechanical effect. Some humanoids are provided with kinematic heads. These heads can perform transitions from one emotional state to another by tilting the head, moving the eyes and mouth, etc. In contrast to LEDs, they are equipped with actuators and moving parts that work in close coordination. Romeo, iCub, Simon, RoboThespian, MERTZ and KOBIAN RII are some of the humanoids featuring kinematic heads with moving eyelids, eyebrows, jaws and necks. In the quest to manifest ultimate humanness, roboticists have experimented with animatronic heads with flexible skin. Alice, Albert HUBO, Roman and Actroid are some examples of robots with animatronic heads consisting of several DC motors and artificial skin made of a special material called Frubber. Figure 3 shows three different kinds of humanoid heads and their ability to express emotions. Nevertheless, such humanoid heads have a tendency to make users uncomfortable, as suggested by Mori's "Uncanny Valley" theory [6].

Figure 3.

Comparison of three humanoid faces based on emotion expression capabilities [7].

2.1.1.2. Whole-body dynamics and control

The idea of substituting humans with surrogates for tasks like search and rescue in challenging scenarios has been prevailing for some time now [8]. With the introduction of socially interactive and socially assistive robots, designing humanoids to be autonomous has become inevitable. In a rich social setup, robots require a high level of autonomy, including extended physical mobility. Although robots are becoming more sophisticated both mechanically and emotionally, they are still far from achieving agile human-like manipulation and interaction, thus providing significant research potential in these areas. The dimensions of a robot's body (i.e. height and weight), degrees of freedom (DoF), tactile sensors, and the number and flexibility of joints are the design factors that determine its mobility (i.e. walking, sitting, standing and turning) and manipulation (i.e. reaching and grasping, pulling and pushing, and holding) capabilities. Whole-body control techniques [9, 10] have matured over the past few years, enabling various humanoids to interact with their environment in a more robust manner. There is a steady transition of robot actions from predictable contacts to unpredictable ones. Forums like the DARPA Robotics Challenge (DRC)1, RoboCup2 and other international robotics challenges [11] are providing a platform for innovative research in this area. Nevertheless, there is always room for further improvement, especially in out-of-routine challenging situations and multiple, diverse contacts. The concept of sight in robots is now possible due to various components like servo-motors, actuators, 2D or 3D cameras and embedded optical sensors. Computer vision techniques for object recognition, human gesture, gaze and speaker tracking, and collision or obstacle avoidance mimic the sense of sight for humanoids [12]. Distant communication with a robot using voice in an unconstrained environment is a highly challenging task. Methods to improve the auditory and speech recognition capabilities of a robot are being given much attention by researchers [13, 14, 15].
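To make the notions of degrees of freedom and joint coordination concrete, the following minimal sketch computes the forward kinematics of a hypothetical two-joint planar arm. It only illustrates how joint angles map to an end-effector position and is not an implementation of the whole-body control frameworks cited above; the link lengths and angles are arbitrary example values.

```python
import numpy as np

def forward_kinematics(joint_angles, link_lengths):
    """Compute the end-effector (x, y) position of a planar serial arm.

    Each joint contributes one rotational degree of freedom; the reachable
    workspace grows with the number and range of the joints.
    """
    x, y, theta = 0.0, 0.0, 0.0
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                  # accumulate joint rotations
        x += length * np.cos(theta)     # advance along the current link
        y += length * np.sin(theta)
    return x, y

# Hypothetical 2-DoF arm: 0.30 m and 0.25 m links, joints at 30 and 45 degrees
print(forward_kinematics(np.radians([30, 45]), [0.30, 0.25]))
```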

2.2. Speech and linguistics

Speech is the most effective and natural mode of communication and interaction. From the viewpoint of social robots, not only do the robots need to be equipped with state-of-the-art automatic speech recognition (ASR) software [14], but language models for interaction [16, 17] are also required to make semantic sense of what is being communicated to the robot. While ASR has its own challenges (noisy environments, multiple individuals talking, etc.), natural language processing has emerged as an important component of social robots, which are expected to converse rather than simply accept keywords as commands. Such language models are important for the robot not only to understand what is being spoken but also to respond. Lee et al. [18] investigated the speech and language technologies involved in educational social robots and studied their impact in language learning. Brick and Scheutz [19] argue that robots must carry out their language processing incrementally, with the ability to comprehend the context, in order to meet the expectations of humans. The authors propose an interesting interaction engine (RISE) which incrementally processes syntactic and semantic information. User modeling for effective natural language processing in long-term human-robot interactions has also been proposed [20].
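As a concrete illustration of the language layer described above, the sketch below maps an ASR transcript to an intent plus arguments and selects a response. The intent names and keyword rules are hypothetical placeholders; a deployed robot would use a real ASR front end and a trained language model (for instance the incremental processing of RISE) rather than simple pattern matching.

```python
# Minimal post-ASR sketch: transcript -> (intent, slots) -> response.
# Intents and patterns are illustrative assumptions, not a real robot API.
import re

INTENT_PATTERNS = {
    "greet":    re.compile(r"\b(hello|hi|good (morning|evening))\b", re.I),
    "fetch":    re.compile(r"\b(bring|fetch|get) me (?P<object>\w+)\b", re.I),
    "navigate": re.compile(r"\bgo to (?P<place>\w+)\b", re.I),
}

def parse(transcript: str):
    """Return the first matching intent and its named slots."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(transcript)
        if match:
            return intent, match.groupdict()
    return "unknown", {}

def respond(intent: str, slots: dict) -> str:
    """Choose a canned response for the recognized intent."""
    if intent == "greet":
        return "Hello! How can I help you?"
    if intent == "fetch":
        return f"I will bring you the {slots['object']}."
    if intent == "navigate":
        return f"Heading to the {slots['place']} now."
    return "Sorry, I did not understand. Could you rephrase?"

print(respond(*parse("Could you bring me water please?")))
```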

2.3. Cognition and perception

As we try to make machines that look and behave like people, we need to equip them with perceptual abilities similar to those of humans. As user expectations grow, a robot's perception must go beyond basic functionalities (e.g. localization, navigation or obstacle avoidance). A key mechanism to achieve this is user modeling. Comparative studies of humans and robots can lead to new approaches [21, 22, 23]. The classical approach is to deliberately abstract computational instructions from the physical realization of a human's cognitive system. Robots that can perceive, infer and learn to mimic human behaviors are called cognitive robots. Intelligence, in a cognitive robot, is the ability to transform sensed information into behavior. Human beings exhibit a multitude of communicative signals while interacting. For a successful social interaction, a socially interactive robot should recognize the interaction roles, the verbal and non-verbal cues and the situation, thus exhibiting a considerable degree of "social intelligence".

Speech signals contain information about who is speaking, what is being said and how it is being said. Context, tone, pitch and loudness all combine to convey information. Research regarding speech understanding in robotics includes works like [24, 25]. In addition to vocalization, facial expressions also give an insight into the intent of the social agent. Detection of the human face and recognition of facial expressions are being incorporated into socially believable robots. Cognitive empathy [26] is the phenomenon which models the perception of emotions in robots. Gaze tracking [27] is another important aspect of perceiving the intentions of people during interaction, as it can indicate the focus of their attention. However, gaze tracking involves detection of both face and eye orientation. Work is being carried out in this area, but there are still numerous challenges that need to be addressed. Gesture and activity recognition [28] is also a promising area of research that can contribute to designing a socially intelligent robot.
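The following minimal sense-infer-act sketch shows how the multimodal cues discussed above might be fused into a coarse estimate of the partner's state and mapped to a goal-oriented action. The cue representation, thresholds and action table are assumptions for illustration; real systems would obtain these signals from dedicated vision, gaze-tracking and speech modules.

```python
# Sketch of a sense-infer-act loop for "social intelligence".
# Cue values, weights and the action table are hypothetical.
from dataclasses import dataclass

@dataclass
class SocialCues:
    facial_valence: float   # -1 (negative expression) .. +1 (positive)
    gaze_on_robot: bool     # is the person's attention on the robot?
    speech_loudness: float  # 0 .. 1, normalized

def infer_state(cues: SocialCues) -> str:
    """Fuse non-verbal cues into a coarse estimate of the partner's state."""
    if not cues.gaze_on_robot:
        return "disengaged"
    if cues.facial_valence < -0.3 or cues.speech_loudness > 0.8:
        return "distressed"
    return "engaged"

def select_action(state: str) -> str:
    """Map the inferred state to a goal-oriented behavior."""
    return {
        "disengaged": "re-attract attention (wave, call name)",
        "distressed": "adopt calming tone and offer help",
        "engaged":    "continue current dialog",
    }[state]

cues = SocialCues(facial_valence=-0.5, gaze_on_robot=True, speech_loudness=0.4)
print(select_action(infer_state(cues)))
```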

2.4. Emotions and personality

Emotions play a significant role in human interaction; thus it was inevitable to induce emotions in socially interactive robots. The use of artificial emotions in social robots helps enhance believability and provides feedback to the users regarding the internal state of the robot, its goals and intentions. Artificial emotions [29] can also act as a control mechanism to understand a robot's perception of its surroundings. Numerous architectures have been proposed for the introduction of artificial emotions, but the most popular ones are based on bio-inspired models drawing on ethology, structure and psychology [30]. As mentioned in the previous section, there are several ways in which a robot can be made to express its emotions. Robots are now equipped with LEDs, motors and actuators beneath a flexible artificial skin to mimic various primary and secondary emotions. Aside from mechanical actuation, computer graphics and animation techniques can also be applied to project emotions. Beyond facial gestures, robots are being designed to display emotions through other non-verbal cues like voice tone and pitch and body movements. The main purpose of expressing emotions is to convey readable signals to humans, providing feedback and giving insight into the robot's intended plan of action.
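The idea of conveying readable signals can be sketched as a mapping from one internal emotional state to several redundant expression channels (LED color, speech pitch scaling, body posture). The channel values below are illustrative placeholders and not the parameters of any particular robot platform.

```python
# One internal emotion is expressed over several channels at once.
# All values are hypothetical placeholders for illustration.
EXPRESSION_MAP = {
    #             LED (R, G, B)    pitch factor  posture
    "joy":      ((0, 255, 0),      1.15,         "open arms, head up"),
    "sadness":  ((0, 0, 255),      0.85,         "slumped shoulders, head down"),
    "surprise": ((255, 255, 0),    1.30,         "raised arms, wide eyes"),
    "neutral":  ((255, 255, 255),  1.00,         "rest pose"),
}

def express(emotion: str) -> dict:
    led, pitch, posture = EXPRESSION_MAP.get(emotion, EXPRESSION_MAP["neutral"])
    # In a real robot these values would be sent to the LED driver, the TTS
    # engine and the motion controller; here we just return them.
    return {"led_rgb": led, "tts_pitch_factor": pitch, "posture": posture}

print(express("joy"))
```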

Psychology defines personality as the distinctive traits that distinguish an individual [31]. It is mainly the observers who define a person's personality. In terms of robots, five types of personalities are usually considered, i.e. tool-like, pet or creature, cartoon, artificial being and human-like, based on morphology and functionality. According to studies [32], the personality of a robot can also be determined based on its ability to interact, express emotions and react in a given situation. Much has been done to make a believable human replica; however, our biological and psychological complexities are still not fully discovered or understood, making it extremely difficult to project them onto a robot.


3. Human robot interaction

Human robot interaction (HRI) is an emerging research area, originating from the fast-increasing integration of robots in our daily lives. In contrast to conventional human computer interaction models, which usually involve interaction between users and a passive machine, human robot interaction is influenced by several factors. Researchers have tried to categorize HRI approaches. Goodrich and Schultz [33] divided HRI into two broad categories, i.e. remote interaction and proximate interaction, based on the proximity of human and robot during interaction. According to Sheridan [34], HRI can be divided based on the nature of the application, i.e. tele-robots, tele-operators and social robots. The HRI model for tele-robots mainly consists of human supervisory control of robots in the performance of routine tasks. Such robots have limited automation capability and rely on the commands of their human supervisor. Tele-robots are mostly used in assembly lines, packaging, mail sorting, offices, and hospitals. They are capable of sensing their environment and reporting back to a human operator. The HRI model for tele-operators involves remote control of robots in challenging environments like space, air, terrestrial, and underwater settings for non-routine tasks. Both of these interaction models are basically master-slave in nature. Interaction with social robots is different from that with tele-robots and tele-operators mainly because it treats robots not just as slaves but as peer-to-peer collaborators.

3.1. Human robot social interaction

According to Dautenhahn [35], human robot social interaction approaches can be divided into three general categories, i.e. robot-centered approaches, human-centered approaches and robot cognition-centered approaches. In the robot-centered HRI model, social robots are pre-programmed to interact with humans. Sociable robots are usually designed based on such approaches. They proactively engage people in a social manner, and the interaction is designed to be mutually beneficial for both participants, i.e. humans can be motivated to perform a specific task (e.g. for therapeutic purposes) whereas robots can use the conversation for learning purposes. On the other hand, socially evocative robots are designed to interact with humans from a human-centered perspective. Anthropomorphism plays a key role in this kind of interaction. In a way, the human participant attributes social responsiveness to the robot participant. Reasoning, and consequently learning, capabilities of the robot are not the central objective in this HRI model. Socially interactive robots have instigated another HRI approach, which is centered on robot cognition. These robots aim to interact intelligently with their human counterparts. Nevertheless, such HRI models are greatly influenced by various factors and mainly require deep modeling of human cognition.

3.1.1. Factors influencing human robot social interaction

Significant efforts are being made to model HRI with the objective of inculcating social intelligence in robots. Some suggest modeling human behaviors and cognition as a sequence of instructions which are pre-programmed into the robots, while the other approach is to imitate human behaviors and learn from interactions. Irrespective of the approach selected for designing a social HRI model, several common factors play a vital role in shaping it and thus should be given due consideration.

3.1.1.1. Robot autonomy

According to Beer et al. [36], HRI is greatly influenced by levels of robot autonomy (LORA). From tele-operators to humanoids, LORA influences the way in which humans and robots interact. Hence in order to model HRI, we must first identify the variables that influence and are influenced by robot autonomy.

3.1.1.2. Robot intelligence and cognitive ability

A robot's intelligence and learning capabilities are important considerations as they influence what tasks a robot performs and how it performs them. A robot's learning process may require a number of interactions with its human counterpart. Robots with high intelligence require fewer interactions than those with comparatively less intelligence.

3.1.1.3. Proxemics

Distance and orientation in social encounters between humans and robots are important aspects [37]. A robot in close proximity may be able to hear the human voice and detect facial expressions clearly, but might not be able to detect full-body gestures due to limitations in its field of view. On the other hand, a robot at a distance may detect full-body gestures but is unable to carry out facial expression and speech recognition.
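The proxemics trade-off described above can be sketched as a simple selection of usable perception channels as a function of distance. The distance thresholds are assumptions for illustration and are not taken from [37].

```python
# Which perception channels a robot can rely on depends on its distance
# to the person; thresholds below are hypothetical.
def usable_channels(distance_m: float) -> list[str]:
    channels = []
    if distance_m <= 1.5:
        # close range: face and voice are clear, full body leaves the frame
        channels += ["facial_expression", "speech"]
    elif distance_m <= 4.0:
        # social distance: body gestures and (degraded) speech are available
        channels += ["body_gesture", "speech_far_field"]
    else:
        # public distance: only coarse body pose remains reliable
        channels += ["body_gesture"]
    return channels

for d in (0.8, 2.5, 6.0):
    print(d, "m ->", usable_channels(d))
```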

3.1.1.4. Social and situation awareness

An important aspect of day-to-day interaction is the ability to perceive and abstract information from the environment. This phenomenon is termed situation awareness, and it helps in decision making, planning and responding appropriately during interaction. By using various sensors, a robot can be designed to sense its surroundings or perceive the emotional condition of its interacting partner. Based on this information it can create a goal-oriented understanding of its environment and finally respond based on past experience, mimicry or adaptation. Nevertheless, it is not surprising that human robot interactions might fail when at times even human-human interactions do. Giuliani et al. [38] describe two types of failures in HRI, i.e. social norm violations and technical failures. Any deviation from the social script or usage of the wrong social signals (i.e. correct action execution but inappropriate for the given situation) due to incorrect judgment by the robot is usually considered a social norm violation. On the other hand, if a robot judges the situation correctly and selects the appropriate action but the action is poorly executed, then this is termed a technical failure.
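The failure taxonomy of Giuliani et al. [38] described above can be summarized in a small sketch that distinguishes the two failure types by whether the situation was judged correctly and whether the chosen action was executed well. The flags and messages are illustrative only.

```python
# Sketch of the two HRI failure types described above; inputs are assumed
# to come from the robot's own judgment and execution monitoring.
from enum import Enum

class HRIFailure(Enum):
    NONE = "interaction proceeded as expected"
    SOCIAL_NORM_VIOLATION = "action executed correctly but inappropriate for the situation"
    TECHNICAL_FAILURE = "appropriate action selected but poorly executed"

def classify_failure(situation_judged_correctly: bool,
                     action_executed_correctly: bool) -> HRIFailure:
    if not situation_judged_correctly:
        # wrong social signal for the context, even if flawlessly performed
        return HRIFailure.SOCIAL_NORM_VIOLATION
    if not action_executed_correctly:
        return HRIFailure.TECHNICAL_FAILURE
    return HRIFailure.NONE

print(classify_failure(situation_judged_correctly=True,
                       action_executed_correctly=False))
```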

3.1.1.5. Verbal and non-verbal communication

Interaction between two or more participants is usually termed a dialog. Exchange of information is the prime objective of a dialog. When humans engage in a dialog, they usually rely on a variety of para-linguistic social cues (i.e. facial expressions, gestures, etc.) in addition to words. Research [39] has proven such non-verbal cues to be highly effective for controlling human robot dialog. However, a robot's inability to fully comprehend human emotions from speech signals (e.g. pitch and tone) alone can cause interaction failure. Gestures, facial expressions and body movements add extra clues for the robot to understand the mental state of the participant and respond accordingly.

3.1.1.6. Cognitive or affective empathy

Empathy plays a vital role in interactions among people. It must therefore be an important consideration in the case of social robots and their interactions with humans [40]. It covers both a robot's capacity to understand the human mental state and its ability to respond to that state appropriately.

3.1.1.7. Social influence and roboethics

User acceptance is the most important element in the success of any technology. In the case of social robots, the demographics, psychology and comfort of the human participant must be kept in mind while designing the HRI model. Another issue is that human interactions are bound by certain rules called "social norms". These social attitudes approve or disapprove of particular social interactions. A violation of social standards is considered a failure of interaction in both human-human and human-robot interaction [41]. Roboethics is a field which incorporates various aspects of communication and the social sciences to chalk out norms of human robot interaction. Keeping these ethics in view while designing a social HRI model is vital for its acceptance.

3.1.2. Assessment and evaluation methodologies

As social HRI gains the attention of the research community, there is a growing need for robust and efficient methods for its assessment and evaluation. Currently, most of the assessment and evaluation criteria used in HRI are adapted from HCI, either as such or with slight modifications. According to Beer et al. [36], assessment methodologies for HRI can be commonly characterized as process-oriented, diagnostic, ongoing and continuous, whereas evaluation methodologies are product-oriented, judgmental, final and discrete. Once again, the factors that shape HRI also decide which assessment methodologies are most suitable for it. Assessments can, however, be carried out in combination as well. Beer et al. [36] grouped existing assessment methodologies into three basic groups: social models, which mainly involve assessment of the emulation of empathy during HRI; the technology acceptance model (TAM) and similar methodologies, which represent user acceptance; and behavioral adaptation models. Both assessment and evaluation methodologies can be objective (e.g. task success, dialog quality and dialog efficiency) or subjective (e.g. the UTAUT model, the Godspeed questionnaire). Existing evaluation methodologies for HRI can also be divided into primary and non-primary based on how (i.e. directly or indirectly) they evaluate the HRI model. Popular primary evaluation methodologies used in human studies of HRI include self-assessments and subjective evaluations, behavioral measurements, psycho-physiological measures and task performance metrics. The strengths and weaknesses of these methods are summarized in Table 1.

  • Self-assessments and subjective evaluation. Description: psychometric measures, questionnaires and/or surveys for personal assessment of participants in response to the robot and the interaction. Strengths: easily implemented; easily quantified. Weaknesses: possibility of inaccuracy due to the mental state and interpretation capabilities of the human participant; oriented towards engineering and leaves out the social interaction perspective.
  • Behavioral measurements. Description: observation of the human participant's behavior while interacting with the agent. Strengths: can be implemented in combination with other methodologies, e.g. self-assessment and subjective evaluation and psycho-physiological measures. Weaknesses: can be biased due to the "Hawthorne effect".
  • Psycho-physiological measures. Description: observation of user responses towards the agent repeatedly over a period of time. Strengths: objective, hence less biased; non-invasively measures stress and responses of participants; video-based, which reduces the Hawthorne effect. Weaknesses: can lead to misinterpretations due to complexity; requires prior knowledge of the human participants; time consuming due to its longitudinal nature.
  • Task performance metrics. Description: involves more than one person or one robot; pre-set variables in the selection criteria for task performance. Strengths: good for team scoring; less biased; good for evaluating HRI involving humanoids; good for HRI involving non-verbal behaviors in addition to verbal ones. Weaknesses: not suitable for one-to-one HRI; less flexible; not generalized enough for every HRI model; not suitable for robots other than humanoids or for HRI which involves mainly verbal behaviors and not non-verbal cues; limited application areas.

Table 1.

Existing evaluation methodologies for HRI.

Efforts have also been made to outline secondary methodologies, such as ease of classification, passive-social medium, numerical analysis of body movements and proximity theories, for improved evaluation of HRI. Nevertheless, due to the complexity of human robot social interaction, researchers suggest using a combination of more than one of the existing methodologies until empirical research can be mapped onto theoretical concepts.
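As a sketch of the objective measures mentioned in this subsection (task success, dialog quality and efficiency), the snippet below computes simple rates from an interaction log. The log format and the specific definitions (e.g. dialog turns per completed task) are assumptions for illustration, not standardized HRI metrics.

```python
# Objective HRI measures computed from a hypothetical interaction log.
def objective_metrics(interaction_log: list[dict]) -> dict:
    tasks = [e for e in interaction_log if e["type"] == "task"]
    turns = [e for e in interaction_log if e["type"] == "dialog_turn"]
    completed = sum(1 for t in tasks if t["success"])
    misunderstood = sum(1 for t in turns if t.get("misunderstood", False))
    return {
        "task_success_rate": completed / len(tasks) if tasks else 0.0,
        "dialog_turns_per_task": len(turns) / completed if completed else float("inf"),
        "misunderstanding_rate": misunderstood / len(turns) if turns else 0.0,
    }

log = [
    {"type": "dialog_turn", "misunderstood": False},
    {"type": "dialog_turn", "misunderstood": True},
    {"type": "dialog_turn", "misunderstood": False},
    {"type": "task", "success": True},
    {"type": "task", "success": False},
]
print(objective_metrics(log))
```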


4. Application areas

In addition to research purposes, social robots find applications in a variety of problem areas including education, health care and entertainment.

4.1. Social robots in research

While a number of social robots have been designed to serve as test beds to evaluate technological advancements in the design of social robots, a number of robots have also been used as test subjects to replace humans. Social robots present an attractive alternative to humans as test subjects in a number of experimental settings, especially those involving risks, privacy or ethical issues [42, 43]. Likewise, operations which are difficult or controversial to carry out on humans can be performed on social robots. Not only can human bias be avoided, but evaluations can be repeatedly performed under identical conditions. Such social robots can serve as subjects to evaluate social interactions and study their influence on cognition. Among the early contributions, Kismet, a robot head designed by Breazeal [44] in the late 90s for affective computing, has been employed to study infant-caregiver interactions. Likewise, Infanoid is an infant-like humanoid robot that has been used to study social development in children. Its abilities to detect humans and objects, extract the emotions of the interacting partner and imitate the human voice allow its usage in investigating the development of social learning skills. Similarly, Cog, a well-known humanoid robot, has been employed to evaluate behavioral and learning models. Another widely used humanoid robot test bed is iCub, which articulates a 3.5-year-old child and has been designed to support research in cognitive functioning and artificial intelligence.

4.2. Entertainment

Entertainment robots (autonomous or remote controlled) include toys, pets, companions, cars and drones. While interactions with robots like cars or drones are not expected to be humanoid, toys and pets (companions) are expected to interpret and behave as closely as possible to their real-world equivalents. Such robots directly target consumers, and hence cost is the most important parameter in designing entertainment robots. Optimizing the cost in the competitive robot market may, in some cases, result in compromising on the technology. Among popular toy robots is 'My Real Baby', developed by iRobot in partnership with a toy manufacturer. The robot is an expressive and responsive toy doll which can smile, laugh and imitate infant sounds. While such animated toy robots require their human masters to look after them, pet or companion robots are more autonomous. Mobility is one of the prime concerns in such companion (quadruped and wheeled) robots. Among these, AIBO [45] by Sony is one of the most advanced pet robots in the market, imitating the relationship between a man and a pet dog. It was first introduced to the market in 1999, and different models were produced until 2005. One of the most sophisticated consumer robots, AIBO could respond to over 100 voice commands, learn to walk and play with a ball. Other similar pet robots include Poo-Chi, Pleo, iDog, Genibo and FIDO. In addition to entertainment, companion robots have also been designed for military and research purposes. Examples include RHex, Canid, Cheetah, SCARAB, RiSE and Titan. Figure 4 shows some of the popular zoomorphic robots.

Figure 4.

Zoomorphic robots: (a) AIBO, (b) iDog, (c) Cheetah.

4.3. Healthcare

While surgical robots have been serving the medical sector for a long time, the relatively recent idea of employing social robots in the health sector has also been widely adopted [46] for applications like rehabilitation, elderly assistance and therapy (Figure 5a). Researchers have investigated the possibilities of employing robots to educate and enhance the communication skills of children with disabilities. Likewise, social robots have also been useful as coaches for physical exercise and following diet plans. A well-known example of a coaching robot is Autom [47], which is designed to be a weight loss coach. Another similar robot, iRobiQ, monitors hypertension, manages medication and issues reminders. A wide variety of assistive social robots for elderly care have also been designed, ranging from robotic wheelchairs to companion robots attempting to compensate for the loss of a family member. Experiments with Paro, the therapeutic robot, revealed that the presence of a social robot in a care home increased the number of interactions among the elderly residents.

Figure 5.

Social robots in: (a) health care, (b) service, (c) education.

4.4. Service

Service robots are designed to assist human beings in doing everyday tasks, including household chores (Figure 5b). Examples of these robots include PatrolBot for delivery, security, monitoring and guidance, Gita for cargo carrying, Roomba that serves as a vacuum cleaner, Sanbot that provides passenger services at airports, and many more. Likewise, social robots like Rhino and Mobot have been designed to serve as guides for tourists. Severinson-Eklundh et al. [48] discuss the interaction models between humans and service robots using Cero as an example. The authors conclude that for satisfactory interactive sessions, the design considerations should take into account not only the primary user of the service robot but also the group of people in the environment where the robot is intended to provide its services. Likewise, the authors in [49] discuss the design issues in interaction between humans and domestic robots using the Roomba vacuum cleaner as a case study. They investigate the possibilities of smoothly 'fitting' such service robots into the home environment. A multi-modal design based on vision and speech is proposed in [50]. Though the models are discussed with service robots as applications, the authors claim that the proposed interaction cycle can be applied to general man-machine interaction scenarios as well.

4.5. Education

Beyond health care and services, robots have increasingly been used in the education sector as well. While the introduction of robots in classroom teaching makes lectures interesting in elementary schools, robots have been effectively employed in higher educational institutes as well. Students of medicine, for instance, can perform complicated medical procedures on humanoid robots. Likewise, engineering students can use robots in complex or dangerous experimental or real-world scenarios. One such popular educational robot is NAO (Figure 5c), developed by SoftBank Robotics. In addition to general education, NAO robots have also been employed to interact with autistic children. Robots can also serve as proxies for both students and teachers in case they are not able to attend classes. A well-known series of such education robots has been designed by VGoRobotics. A key concern in using robots as teachers is the replacement of interpersonal relationships. Such robots also need to detect and adapt to the social mood of the environment they are deployed in. Some researchers argue that robots at elementary schools must change their behavior as a function of the activities of the children. A comprehensive review of the applicability of robots in education can be found in [51].


5. Challenges

Despite the emergent technological solutions at hand and conceptualizations regarding acceptability, there are considerable challenges to be addressed before social believability in robots can be considered a success. Literature [52, 53] suggests that the integration of social robots in human society poses both social and technical problems.

5.1. Complexity of social situations and ethics

In contrast to their predecessors, the industrial robots that are designed to carry out routine tasks in controlled environments and the field robots that work in places beyond human reach, social robots are expected to operate in a highly unpredictable and diverse habitat whose inhabitants share the same traits. According to Salter et al. [54], real-world environments can prove to be both beneficial and challenging test grounds for assessing the capabilities of a robotic device. A gap still exists between the performance of an intelligent agent in a controlled environment and that in a real-world scenario. Limitations in the replication of most human robot interaction (HRI) scenarios contribute greatly to the mediocre adaptation of social robots to real-world situations. Empirical studies like [55], which investigate robots' acceptability and usability, explain the complexity of social situations and the dimensions of HRI beyond domestic vacuum cleaning robots. The capacity of a social robot to contextually understand the behaviors of the real world, its response to subjective experiences and user feedback are the actual performance parameters, rather than technical capabilities alone. Despite its significance, context awareness in the design and development of social robots is still in its infancy.

Another factor limiting the integration of social robots in society is ethics. Interaction in social groups and the relationship of a single individual with a machine are influenced by a variety of meta-principles and paradigms, making roboethics a challenging task. The diversity of cultures and religions makes modeling of sensitive issues like human dignity and integrity, respect and family, privacy and protection a complex task. Preservation of the common principles of humanity and human rights in situations which involve robotic intervention must be assessed keeping ethical sensitivity in view.

5.2. Hardware limitations vs. human expectations

The ultimate goal of anthropomorphism is to replicate a human being. Nevertheless, our pursuit of making realistic humanoids might experience Mori's "uncanny valley" effect at some point (Figure 6). Human expectations increase with the sophistication of a humanoid's design. This can contribute to people's rejection of anthropomorphic robots and other intelligent agents. On the other hand, recent studies like [56] suggest that shortcomings like a mismatch between appearance and movement or voice can also create an uncanny or eerie feeling.

Figure 6.

Mori's Uncanny Valley theory [6].

Despite great progress in synthetic materials and mechatronics, we are still decades away from providing rich support for speech, gestures or expressions in machines. A look at the latest generation of humanoids reveals the gap between reality and fiction. The expressive behavior of robotic faces is still not life-like due to limitations of mechatronic design and control. Even for the most sophisticated generation of robots, displaying emotions reflects a certain degree of artificiality.

A robot’s body is a mechanical structure composed of several rigid parts, connected to each other by joints. Currently each active joint has a restricted range of motion generated by actuators. Due to the complexity of design, manufacturing cost and mechanical dynamics, even the latest line of humanoids can imitate only basic tasks in a coordinated manner.

In contrast to conventional interactive systems, an interactive social robot must take its physical environment into consideration while communicating with users. Most real-world environments are unstructured, dynamic and noisy, making them challenging for robots. Although synthesized voice quality has improved over time, communication between a human and a humanoid is still constrained by several factors like speech localization, language understanding, dialog management and non-verbal cues such as gaze tracking.

5.3. Standard models and comparability issues

An essential prerequisite to designing an intelligent system is to outline its functional requirements. The same holds true for the field of cognitive robotics. Nevertheless, the fact that cognitive science, as a discipline, has yet to establish normative models that can be realized in well-engineered systems makes it difficult to give robots a capacity for cognition [57]. Research in cognitive architectures for biologically inspired agents suffers from a significant void. This has resulted in the modeling and trial of such agents in controlled environments, with most demonstrated results serving as mere proofs-of-concept. The lack of relevant HRI models is another issue limiting the interaction capacity of a socially believable robot. The field of HRI incorporates contributions from both the engineering sciences (communications, computer science and engineering) and the human sciences (psychology and sociology). Due to its multidisciplinary nature, it is difficult to generalize a standard HRI model. This is the reason that currently most HRI models are inspired by conventional HCI models. However, there is a particular need for a dedicated social human robot interaction model, as human interaction with social robots differs significantly from interaction with traditional passive computer-based systems or agents.

The need for comparison criteria is as significant as the existence of benchmark architectures in the field of social robotics. Nevertheless, defining them is not an easy task considering the dimensions of the test environment and the diversity of outcome expectations. According to Bartneck et al. [58], the "quick and dirty" methods adopted by most robot developers result in questionable success of the targeted goals. Recent studies like [59] suggest the introduction of a "human in the loop" approach. Application and modification of User Experience Design (UXD) evaluation techniques, in addition to the relevant evaluation criteria from HCI, must be considered when designing performance comparability metrics suitable for HRI. However, research in this area is still in its infancy.


6. Conclusion

As we progress, the reality of socially believable robots in our daily lives is becoming more vivid. The relationship between humans and robots has moved beyond master-slave and is instead becoming that of peers. Social robots are already assisting us in health care, education and entertainment. They are serving as our tour guides and office assistants. Soon they will be our companions in our homes. Nevertheless, our optimism can dampen if we are unable to overcome the challenges and limitations we face today. It is evident that technological advancement alone cannot contribute fully without a complete understanding of humans and society. Efforts must be made to unravel the complexity of human psychology and society in order to model effective human robot social interactions.

In order to achieve success, the human-in-the-loop concept must be incorporated as frequently as possible. Defining roles and rules might make it easier for a social robot to comprehend its surroundings and respond appropriately. Furthermore, a socially interactive robot requires frequent interactions with a wide range of users of different genders, cultural and social backgrounds, and ages in order to understand the needs and dimensions of various social situations. In many current applications and experiments, social robots engage only in short-term interactions with their human counterparts and thus treat all humans in the same manner. This usually results in a failure in HRI as perceived by its users. As robot designers and engineers tackle issues like cost effectiveness, user acceptance and social awareness, mass integration of these mechanical companions into our everyday life might take a while.

References

  1. Fong T, Nourbakhsh I, Dautenhahn K. A survey of socially interactive robots. Robotics and Autonomous Systems. 2003;42:143-166
  2. Dautenhahn K, Billard A. Bringing up robots or—The psychology of socially intelligent robots: From theory to implementation. In: Proceedings of the Third Annual Conference on Autonomous Agents; ACM; 1999. pp. 366-367
  3. Lazzeri N, Mazzei D, Zaraki A, De Rossi D. Towards a believable social robot. In: Conference on Biomimetic and Biohybrid Systems; Springer; 2013. pp. 393-395
  4. Breazeal C. Toward sociable robots. Robotics and Autonomous Systems. 2003;42(3):167-175
  5. Fink J. Anthropomorphism and human likeness in the design of robots and human-robot interaction. In: Ge SS, Khatib O, Cabibihan J-J, Simmons R, editors. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012. pp. 199-208. DOI: 10.1007/978-3-642-34103-8_20
  6. Mori M. The uncanny valley. Energy. 1970;7(4):33-35
  7. Introrobotics. 3 different humanoid robot head designs to generate facial expressions [Internet]. February 27, 2015. Available from: https://www.intorobotics.com/3-different-humanoid-robot-head-designs-to-generate-facial-expressions/
  8. Cubber GD, Doroftei D, Rudin K, Berns K, Matos A, Serrano D, Silva E. Introduction to the use of robotic tools for search and rescue. In: Search and Rescue Robotics-From Theory to Practice. InTech; 2017. DOI: 10.5772/intechopen.69489
  9. Balderas D, Rojas M. Human movement control. In: Automation and Control Trends. InTech; 2016. DOI: 10.5772/intechopen.63720
  10. Dai H, Valenzuela A, Tedrake R. Whole-body motion planning with centroidal dynamics and full kinematics. In: 2014 14th IEEE-RAS International Conference on Humanoid Robots (Humanoids); 2014. pp. 295-302
  11. Fontana G, Matteucci M, Amigoni F, Schiaffonati V, Bonarini A, Lima PU. RoCKIn benchmarking and scoring system. In: RoCKIn-Benchmarking Through Robot Competitions. InTech; 2017. DOI: 10.5772/intechopen.70013
  12. Kılıç V, Wang W. Audio-visual speaker tracking. In: Motion Tracking and Gesture Recognition. InTech; 2017. DOI: 10.5772/intechopen.68146
  13. Ishi CT, Matsuda S, Kanda T, Jitsuhiro T, Ishiguro H, Nakamura S, Hagita N. Robust speech recognition system for communication robots in real environments. In: 2006 6th IEEE-RAS International Conference on Humanoid Robots; IEEE; 2006. pp. 340-345
  14. Wang N, Broz F, Di Nuovo A, Belpaeme T, Cangelosi A. A user-centric design of service robots speech interface for the elderly. In: Recent Advances in Nonlinear Speech Processing. Springer International Publishing; 2016. pp. 275-283
  15. Noda K, Yamaguchi Y, Nakadai K, Okuno HG, Ogata T. Audio-visual speech recognition using deep learning. Applied Intelligence. 2015;42(4):722-737
  16. Lauria S, Bugmann G, Kyriacou T, Bos J, Klein A. Training personal robots using natural language instruction. IEEE Intelligent Systems. 2001;16(5):38-45
  17. Kaigorodova L, Rusetski K, Nikalaenka K, Hetsevich Y, Gerasuto S, Prakapovich R, Sychou U, Lysy S. Language modeling for robots-human interaction. In: International NooJ Conference; Springer; 2015. pp. 162-171
  18. Lee S, Kim C, Lee J, Noh H, Lee K, Lee GG. Affective effects of speech-enabled robots for language learning. In: 2010 IEEE Spoken Language Technology Workshop (SLT); 2010. pp. 145-150
  19. Brick T, Scheutz M. Incremental natural language processing for HRI. In: 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI); IEEE; 2007. pp. 263-270
  20. Hameed IA. Using natural language processing (NLP) for designing socially intelligent robots. In: 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob); IEEE; 2016. pp. 268-269
  21. Karapinar S, Sariel S. Cognitive robots learning failure contexts through real-world experimentation. Autonomous Robots. 2015;39(4):469-485
  22. Borghi AM, Cangelosi A. Action and language integration: From humans to cognitive robots. Topics in Cognitive Science. 2014;6(3):344-358
  23. Belpaeme T, Adams S, de Greeff J, di Nuovo A, Morse A, Cangelosi A. Social development of artificial cognition. In: Toward Robotic Socially Believable Behaving Systems-Volume I. Springer; 2016. pp. 53-72
  24. Noda K, Yamaguchi Y, Nakadai K, Okuno HG, Ogata T. Audio-visual speech recognition using deep learning. Applied Intelligence. 2015;42(4):722-737
  25. Ding IJ, Shi J-Y. Kinect microphone array-based speech and speaker recognition for the exhibition control of humanoid robots. Computers & Electrical Engineering. 2017;62:719-729
  26. Tisseron S, Tordo F, Baddoura R. Testing empathy with robots: A model in four dimensions and sixteen items. International Journal of Social Robotics. 2015;7(1):97-102
  27. Palinko O, Rea F, Sandini G, Sciutti A. Eye gaze tracking for a humanoid robot. In: 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids); IEEE; 2015. pp. 318-324
  28. Rautaray SS, Agrawal A. Vision based hand gesture recognition for human computer interaction: A survey. Artificial Intelligence Review. 2015;43(1):1-54
  29. Novikova J, Watts L. Towards artificial emotions to assist social coordination in HRI. International Journal of Social Robotics. 2015;7(1):77-88
  30. Wehle M, Weidemann A, Boblan IW. Research on human cognition for biologically inspired developments: Human-robot interaction by biomimetic AI. In: Advanced Research on Biologically Inspired Cognitive Architectures; IGI Global; 2017. pp. 83-116
  31. Boyce CJ, Wood AM, Powdthavee N. Is personality fixed? Personality changes as much as "variable" economic factors and more strongly predicts changes to life satisfaction. Social Indicators Research. 2013;111(1):287-305
  32. Lee KM, Peng W, Jin S-A, Yan C. Can robots manifest personality? An empirical test of personality recognition, social responses, and social presence in human-robot interaction. Journal of Communication. 2006;56(4):754-772
  33. Goodrich MA, Schultz AC. Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction. 2007;1(3):203-275
  34. Sheridan TB. Human-robot interaction: Status and challenges. Human Factors. 2016;58(4):525-532
  35. Dautenhahn K. Socially intelligent robots: Dimensions of human-robot interaction. Philosophical Transactions of the Royal Society of London B: Biological Sciences. 2007;362(1480):679-704
  36. Beer J, Fisk AD, Rogers WA. Toward a framework for levels of robot autonomy in human-robot interaction. Journal of Human-Robot Interaction. 2014;3(2):74
  37. Mead R, Matarić MJ. Proxemics and performance: Subjective human evaluations of autonomous sociable robot distance and social signal understanding. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); IEEE; 2015. pp. 5984-5991
  38. Giuliani M et al. Systematic analysis of video data from different human-robot interaction studies: A categorization of social signals during error situations. Frontiers in Psychology. 2015;6. ISSN: 1664-1078
  39. Mavridis N. A review of verbal and non-verbal human-robot interactive communication. Robotics and Autonomous Systems. 2015;63:22-35
  40. Leite I, Pereira A, Mascarenhas S, Martinho C, Prada R, Paiva A. The influence of empathy in human-robot relations. International Journal of Human-Computer Studies. 2013;71(3):250-260
  41. Thellman S, Ziemke T. Social attitudes toward robots are easily manipulated. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction; ACM; 2017. pp. 299-300
  42. Hayashi K, Sakamoto D, Kanda T, Shiomi M, Koizumi S, Ishiguro H, Ogasawara T, Hagita N. Humanoid robots as a passive-social medium—A field experiment at a train station. In: 2007 2nd ACM/IEEE International Conference on Human-Robot Interaction (HRI); 2007. pp. 137-144
  43. Scassellati B. Investigating models of social development using a humanoid robot. Biorobotics. CiteSeer; 2000
  44. Breazeal C, Scassellati B. Infant-like social interactions between a robot and a human caregiver. Adaptive Behavior. 2000;8(1):49-74
  45. Steels L, Kaplan F. AIBO's first words: The social learning of language and meaning. Evolution of Communication. 2000;4(1):3-32
  46. Lu EC, Wang RH, Hebert D, Boger J, Galea MP, Mihailidis A. The development of an upper limb stroke rehabilitation robot: Identification of clinical practices and design requirements through a survey of therapists. Disability and Rehabilitation: Assistive Technology. 2011;6(5):420-431
  47. Ricks DJ, Colton MB. Trends and considerations in robot-assisted autism therapy. In: 2010 IEEE International Conference on Robotics and Automation (ICRA); 2010. pp. 4354-4359
  48. Severinson-Eklundh K, Green A, Hüttenrauch H. Social and collaborative aspects of interaction with a service robot. Robotics and Autonomous Systems. 2003;42(3):223-234
  49. Forlizzi J, DiSalvo C. Service robots in the domestic environment: A study of the Roomba vacuum in the home. In: Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction; ACM; 2006. pp. 258-265
  50. Ido J, Matsumoto Y, Ogasawara T, Nisimura R. Humanoid with interaction ability using vision and speech information. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems; IEEE; 2006. pp. 1316-1321
  51. Mubin O, Stevens CJ, Shahid S, Al Mahmud A, Dong J-J. A review of the applicability of robots in education. Journal of Technology in Education and Learning. 2013;1:209-0015
  52. Tapus A, Matarić MJ, Scassellati B. The grand challenges in socially assistive robotics. IEEE Robotics and Automation Magazine. 2007;14(1):35-42
  53. Salem M, Lakatos G, Amirabdollahian F, Dautenhahn K. Towards safe and trustworthy social robots: Ethical challenges and practical issues. In: Tapus A, André E, Martin JC, Ferland F, Ammi M, editors. Social Robotics. Lecture Notes in Computer Science. Springer, Cham. ICSR. 2015;9388:584-593
  54. Salter T, Michaud F, Larouche H. How wild is wild? A taxonomy to characterize the 'wildness' of child-robot interaction. International Journal of Social Robotics. 2010;2(4):405-415
  55. Sung JY, Grinter RE, Christensen HI. Domestic robot ecology. International Journal of Social Robotics. 2010;2(4):417-429
  56. Mitchell WJ, Szerszen KA Sr, Lu AS, Schermerhorn PW, Scheutz M, MacDorman KF. A mismatch in the human realism of face and voice produces an uncanny valley. i-Perception. 2011;2(1):10-12
  57. Lieto A. Representational limits in cognitive architectures. Cognitive Robot Architectures. 2017;16
  58. Bartneck C, Kulić D, Croft E, Zoghbi S. Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics. 2009;1(1):71-81
  59. Alenljung B, Lindblom J, Andreasson R, Ziemke T. User experience in social human-robot interaction. International Journal of Ambient Computing and Intelligence (IJACI). 2017;8(2):12-31

Notes

  1. https://en.wikipedia.org/wiki/DARPA_Robotics_Challenge
  2. www.romela.org/robocup/
