
Issues in the Development of Conversation Dialog for Humanoid Nursing Partner Robots in Long-Term Care

Written By

Tetsuya Tanioka, Feni Betriana, Ryuichi Tanioka, Yuki Oobayashi, Kazuyuki Matsumoto, Yoshihiro Kai, Misao Miyagawa and Rozzano Locsin

Submitted: 05 January 2021 Reviewed: 25 June 2021 Published: 03 August 2021

DOI: 10.5772/intechopen.99062


Abstract

The purpose of this chapter is to explore the issues in developing conversational dialog of robots for nursing, especially for long-term care, and to anticipate the introduction of humanoid nursing partner robots (HNRs) into clinical practice. To satisfy the required performance of HNRs, it is important that anthropomorphic robots act with high-quality conversational dialogic functions. As for hardware, allowing an independent range of action and degrees of freedom reduces the burden placed on human-robot communication, thereby unburdening nurses and professional caregivers. Furthermore, it is critical to develop a friendlier type of robot by equipping it with non-verbal emotive expressions that older people can perceive. If these functions are conjoined, anthropomorphic intelligent robots could serve as instructors, particularly for rehabilitation and recreation activities of older people. In this way, more than ever before, HNRs will play an active role in healthcare and welfare fields.

Keywords

  • issues
  • conversation
  • dialog
  • humanoid nursing partner robots (HNRs)
  • long-term care

1. Introduction

Healthcare demands for the increasing older adult population are a significant concern in Japan and in other developed countries [1]. This concern is compounded by the decreasing number of healthcare workers, who are also aging, resulting in high turnover rates among healthcare workers [2, 3, 4]. In this situation, it is appropriate to consider the use of healthcare robots, which are increasingly recognized as a potential solution to meet the care demands of older persons as well as of patients with mental illness [5].

Several questions arise: “What are the prominent areas of concern in supporting older persons with healthcare robots?” and “What are the barriers to introducing Humanoid Nursing Partner Robots (HNRs) into hospitals or institutions for older persons?” Similarly, another question may be, “Which of these types of robots are needed for nursing care, anthropomorphic or non-anthropomorphic robots?”

Different technological requirements can dictate whether anthropomorphic or non-anthropomorphic robots are needed. For example, if the technological demand is for measuring blood pressure and body temperature, an anthropomorphic machine may not be necessary; non-robotic technologies already detect and retrieve this information with digital hand-held devices [6]. However, anthropomorphic robots may be necessary when a conversation is expected, particularly during a dialog with older persons while taking blood pressure and other vital signs, much as a human nurse does today. In addition, the following distinctive questions are asked: “What nursing care tasks can be programmed specifically for anthropomorphic nursing partner robots?” and “What are the core competencies that only nurses and professional caregivers can perform?” Reflecting on these questions, it is essential to establish a field of robot nursing science, developed by nurse scientists from the perspective of the unique ontology of nursing and designed on a foundation of robotics engineering, computer science, and nursing science. The expectation is to develop a knowledge base for robot nursing science as a foundation for a practice of nursing that uniquely embraces anthropomorphic robot realities, particularly their demand for precise conversational capabilities. These realizations were illuminated by the posited questions, whose answers may further the development of robot nursing science and its practice.

The aim of this chapter is to explore the issues concerning the development of dialog robots for nursing, especially for long-term care, and the prospects for introducing HNRs into nursing practice.


2. Development of robots capable of compassionate conversations

2.1 Concerns regarding human-robot conversation capabilities

One issue for humanoid robot verbalization is that the robot’s voice should have an intonation, speech speed, and voice range that are easy for older persons to hear [7]. If a cute-looking robot speaks in a low-pitched voice similar to that of an adult male, the user may find it creepy [8]. Therefore, humanoid robots need voices that older persons can hear easily and that convey a sense of familiarity.

Challenges to developing robot nursing science are real. These challenges highlight the necessity of promoting research aimed at systematizing technological competencies, ethical thinking, safety measures, and outcomes of using robots in nursing settings. As new devices and technologies developed by engineers are introduced and used in nursing care, robot nursing science can only develop with an ontology of nursing at its core. Healthcare robots are increasingly perceived as nursing partners in practice. Human caring expressed in human-to-human relationships, and now between humans and nonhumans, is the futuristic vision of healthcare with humanoid robots as main protagonists.

2.2 Development of caring dialogical database of humanoid nursing partner robots and older persons

Full and effective use of robots by nurses and healthcare providers would lead to a better understanding of patients and their needs. Thus, it is necessary to develop a “Caring Dialog Database” for HNRs in order to enhance robot capabilities to know the patient/client and to share the expressions of human-robot interactions in esthetic ways. Furthermore, it is important to develop a dialog pattern that allows humanoid robots to empathize with an older person [9]. The ability to empathize and to communicate accurate empathy is likely to enhance the older person’s feeling of being cared for through HNR actions such as: 1) Listening attentively to and accepting older persons; 2) Knowing older persons intentionally; and 3) Establishing appropriate caring dialog.

2.3 Robotics and artificial intelligence

Robotics and Artificial Intelligence (AI) will become a predominant aspect of healthcare and welfare settings. Human caring has been based on the human-to-human relationship. However, in the nonhuman-to-human relationship of HNRs, it is essential to consider what is required with respect to ethical concerns and human safety. Regarding redefinitions of nursing and its underlying beliefs, values, and assumptions, it is pertinent to understand the implications of AI and its role in HNRs in healthcare. Thus, robotics and AI, together with Natural Language Processing (NLP), will become a predominant aspect of healthcare and welfare settings, particularly for older persons [10].

HNRs must perform appropriate empathic dialogs [9, 11] by accurately judging the person’s facial, non-verbal, and language expressions using AI sensory tools [12].

For emotional recognition and non-verbal output, the required functions include: 1) Recognizing users’ facial expressions; 2) Matching the expressions with emotion database information; 3) Selecting appropriate expressions from the emotion database; and 4) Conveying emotional expression through particular motions, for example, flashing lights or moving the upper limbs and head.
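This four-step loop can be illustrated with a minimal sketch in Python; the emotion database, expression labels, and motion commands below are hypothetical placeholders rather than part of any specific robot SDK.

```python
# Minimal sketch of the four-step non-verbal output loop described above.
# The emotion database and motion commands are illustrative assumptions.

# 1) A facial-expression recognizer (not shown) yields a label such as "sad".
EMOTION_DB = {
    # 2) detected expression -> 3) robot emotion to select and its motions
    "happy": {"robot_emotion": "joy", "motions": ["flash_light_green", "raise_arms"]},
    "sad": {"robot_emotion": "compassion", "motions": ["tilt_head", "slow_arm_wave"]},
    "neutral": {"robot_emotion": "calm", "motions": ["nod"]},
}

def convey_emotion(detected_expression):
    """Match the detected expression against the database and return
    the motion commands that convey the selected robot emotion (step 4)."""
    entry = EMOTION_DB.get(detected_expression, EMOTION_DB["neutral"])
    return entry["motions"]

if __name__ == "__main__":
    print(convey_emotion("sad"))   # ['tilt_head', 'slow_arm_wave']
```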

Furthermore, a robot’s recognition of, and verbal response to, the voice of another person, e.g., an older person, can include: 1) Voice recognition; 2) Text conversion by NLP; 3) Matching with the NLP database for an appropriate response; 4) Speech synthesis; and 5) Vocalization.
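Similarly, the five-step verbal loop might be organized as in the following sketch; the stubbed speech recognizer, response database, and speech synthesizer stand in for whatever ASR, NLP, and TTS engines a particular robot actually provides.

```python
# Minimal sketch of the five-step verbal loop described above.
# recognize_speech(), speak(), and RESPONSE_DB are hypothetical stand-ins.

RESPONSE_DB = {
    "hello": "Hello! How are you feeling today?",
    "tired": "I am sorry to hear that. Would you like to rest a little?",
}

def recognize_speech(audio):
    """1) Voice recognition and 2) conversion to text (stubbed)."""
    return "hello"  # pretend the ASR engine returned this transcript

def match_response(text):
    """3) Match the transcript against the NLP database for a response."""
    for keyword, reply in RESPONSE_DB.items():
        if keyword in text.lower():
            return reply
    return "I see. Please tell me more."

def speak(sentence):
    """4) Speech synthesis and 5) vocalization (stubbed as printing)."""
    print(f"[robot says] {sentence}")

if __name__ == "__main__":
    transcript = recognize_speech(b"...")  # raw audio would come from a microphone
    speak(match_response(transcript))
```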

In conversations between a robot and an older person, the robot is expected to provide an accurate empathic response according to the situation. If an older person says, “I want to eat sushi!” and the humanoid robot responds with, “I cannot eat because I am a robot”, the older person is unlikely to engage with the robot because the response does not demonstrate empathic understanding. However, if the humanoid robot responds with, “I am a robot, but I would like to try eating sushi. Tell me, what does it taste like?”, this answer is likely to engage older persons because it conveys a feeling of understanding and empathy. If HNRs have this empathic response competency, older persons can attain well-being through the content of dialogs and conversations with robots such as the Pepper robot (Figure 1).

Figure 1.

Pepper is interacting with older persons.
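The empathic response selection illustrated by the sushi example could, in its simplest rule-based form, look like the following sketch; the pattern and templates are illustrative assumptions, not an implemented HNR behavior.

```python
import re

# Sketch of the empathic response pattern illustrated above: instead of a
# literal refusal, the robot mirrors the older person's wish and asks a
# follow-up question. Regular expression and templates are illustrative only.

EMPATHIC_TEMPLATE = ("I am a robot, but I would like to try eating {item}. "
                     "Tell me, what does it taste like?")

def respond(utterance):
    match = re.search(r"I want to eat (\w+)", utterance, re.IGNORECASE)
    if match:
        return EMPATHIC_TEMPLATE.format(item=match.group(1))
    return "Please tell me more about that."

print(respond("I want to eat sushi!"))
# -> I am a robot, but I would like to try eating sushi. Tell me, what does it taste like?
```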


3. Required conversation functions for humanoid nursing partner robots

This section discusses the requirements for HNRs to hold a two-way conversation (dialog) with the user. HNRs should comprehend the content of the user’s remarks and the intention behind them, including emotions. From information on speech, paralanguage, and appearance, such as facial expressions and gestures, HNRs can present listening postures to the user, using an appropriate ‘line of sight’ and nods to signify the appropriateness of the response to the user’s remarks (Figure 2). These functionalities might facilitate the user’s active speech engagement with HNRs. Furthermore, when HNRs return appropriate responses based on the user’s remarks and the content they understand from non-verbal information, the user can feel that HNRs are listening to and understanding the dialog/conversation and may feel satisfied with the content of the interactive dialog.

Figure 2.

An example of the required performance of the application for dialog/conversation with older persons (the robot’s receiver function).

As an example of an appropriate response by HNRs, methods such as repeating a keyword used in the user’s remark, or providing a topic related to the keyword, can be considered. Further, the voice and movement of the HNRs during their response also affect the impression of the dialog (Figure 3).

Figure 3.

An example of the required performance of the application for dialog/conversation with older persons (the robot’s sender function).

For example, when a user speaks about a sad topic or shows a sad expression, it is necessary that the HNR responds with a sentence and bodily behavior related to compassion, comfort, and encouragement. Moreover, it is important that the robot’s voice is conveyed with a tone of artificial compassion that matches the response sentence and accurately delivers the humanoid robot’s intention to the user [13, 14]. A humanoid robot response that matches the user’s emotions may contribute to an expression of artificial sympathy and thereby enhance empathic expressions for the user.

In current facial expression recognition technology, reading facial expressions more accurately by recognizing a human face in three dimensions (3D) is being studied [15, 16]. Research is also being conducted on robot capabilities for assessing human emotions not only from the movements of the eyes and mouth, but also from the movements of human facial muscles and the upper body [17]. As for the response ability of humanoid robots, research is at the stage of generating appropriate response sentences, examining response vocalization intervals that do not cause discomfort, and implementing, verifying, and improving these on robots [18, 19, 20, 21].

The application for interacting with older persons requires a function that allows HNRs to speak according to the user’s remarks, rather than requiring the user to converse according to the robot’s remarks. Alternatively, it is necessary to devise a way to give the user the feeling of having a dialog by making the user feel as if the HNR understands the user’s remarks and intentions. As reception functions of the application, in addition to technology to accurately read the content of remarks from the user’s voice, technology is required to read the user’s situation from information other than voice, such as facial expressions and gestures. Furthermore, as transmission functions, expressive capabilities are required that match the response sentence, vocalization, and actions to the user’s remarks and emotions.


4. How to develop a robot that can express verbal and non-verbal expressions

For a robot to convey verbal and non-verbal expressions like those of a human, it needs receptors equivalent to those of a human: the sensory receptors for sight [22, 23], smell [24, 25], and touch [26]. The information detected by these receptors is then entered into the system. It is then necessary to perform machine learning based on the input information and prepare to output verbal and non-verbal expressions [27, 28]. Additionally, the robot must be able to perform the same movements and expressions as humans do in terms of its output [29, 30]. However, depending on how similar the robot’s expressions are to human behavior, an uncanny valley effect [31] may arise in which human beings find the robot creepy at some point, thereby influencing the responses it can elicit.

4.1 Anthropomorphic form necessary in human-robot conversation

The anthropomorphic form is necessary when an HNR is expected to talk to older persons or take vital signs as human nurses do. The physiognomy of HNRs is a major determining factor in how efficiently human beings respond in human-robot transactive engagements. Rather than the “Uncanny Valley” dominating robot communication, it is the human-HNR interactive ‘fit’ or congruence that human persons may better appreciate when the HNR’s appearance or looks are appropriate. Its accurate and appropriate conversational capabilities, and the appropriate responses the HNR can give, depend largely on conversational communications that can readily be shaped by artificial affective communication [8].

In a pleasant conversation with HNRs, human beings have a sense of affinity (Shinwa-kan) with them, appreciating, for example, their cuteness and expressions of fun. However, human beings can also be disappointed, especially when robots have poor conversational competencies. Human beings may feel fear, misunderstanding, and confusion depending on how far the conversational content deviates from what they expect.

4.2 Roles and functions of humanoid nursing partner robots

Nevertheless, as companions in patient care, HNRs should assume multiple roles, including being healthcare assistants that help with task completion. HNRs must possess abilities to express artificial emotions through linguistically appropriate and accurate communication processes, including nonverbal expressions with autonomous bodily movements. It is also critical that the appearance of HNRs be familiar, relatable, and non-intimidating [32], and that it not cause emotional unease and discomfort such as fear, anxiety, and suspicion, since a human-like appearance of HNRs can lead to resistance [33].

One of the essential attributes proposed for HNRs is Artificial Affective Communication (AAC) [8]. With AAC (Figure 4) accentuating the significance of language in human-robot interaction, not only will the robot’s physiognomy affect the HNR’s value, but so will its capability to communicate using AI for NLP. Communicating with artificial affection, instilled with phonology and applied by mimicking human interactions through human features and elements designed with social and cultural nuances in communicative situations, may make transactive engagements with human beings more valuable and meaningful for human healthcare practice.

Figure 4.

Human Emotive Impressions from Conversations During Human-robot Interaction using Artificial Affective Communication (from Ref. [8]).


5. An example of conversation with older persons and Pepper

Issues in conversation with Pepper include expressions such as robot gaze [34], eye blink synchrony [35], eye contact [36], and speech [37]. One issue between older persons and Pepper, using the conversation application “Kenkou-oukoku TALK for Pepper” [38], is vocalization with little intonation. This characteristic makes it difficult for older persons to tell whether Pepper’s sentence is interrogative or declarative. Similarly, older persons found it difficult to recognize the end of a sentence in Pepper’s talk. The pitch of Pepper’s voice is high and difficult to hear. Likewise, Pepper’s sensors may not register the correct meaning of a sentence because of an older person’s soft voice or use of a dialect. If the contents of the conversation cannot be recognized, Pepper may interrupt the conversation or suddenly change the topic, which may offend older persons. Therefore, many situations exist in which the contents of the dialog do not match. In its current performance, Pepper changed the topic while the user was still thinking about the answer to Pepper’s question [39]. In addition, an operational issue of Pepper is its line of sight. If its line of sight deviates from the person talking with Pepper’s dialog program, Pepper will proceed with the conversation while recognizing other objects around it, thereby losing its line of sight. Figure 5 presents the robot’s line of sight.

Figure 5.

Pepper’s line of visualization.

The role of an intermediary who supports the conversation between older persons and Pepper is important [40]. In the current conversation with Pepper, users must adapt themselves to Pepper’s utterances. In this case, older persons are expected to listen to Pepper’s talk instead of doing all the talking. They must have the cognitive ability to respond while talking with Pepper. Training to respond quickly and accurately to Pepper’s questions may therefore be useful as rehabilitation for cognitive function.

Furthermore, to use the current Pepper conversation application for cognitive rehabilitation of older persons, the researchers propose a method in which older persons play the role of listeners. This role might be useful for training, as they can concentrate on listening to the speaker’s utterances, understand the content of the conversation, and convey their personal feelings to the other person. When the conversation with Pepper is over, if the intermediary instructs older persons to recollect the conversation content, this process may help maintain and confirm memory and train information processing functions in older people.

As a means of improving the Pepper robot application, it is desirable to avoid one-way conversation in which older persons must adapt to Pepper’s utterances. Moreover, it is necessary to improve the conversational performance of the application so that older persons can enjoy talking with the robot for a long time. Thus, it is necessary to improve the following: (1) Timing of talk response; (2) Talk content that matches the situation; (3) Appropriate reaction to the user’s speech; (4) Functions for making proper eye contact with the user; and (5) Functions for reacting to users with non-verbal expressions. These functions are considered to assure the accomplishment of mutual conversation.

In order to solve the problem of line of sight, it is necessary to enable robots to express verbal and non-verbal expressions at the same level as human beings. At present, robots merely display artificial verbal and non-verbal expressions acquired through machine learning [41]. Advanced intelligence is required to express verbal and non-verbal expressions that incorporate artificial thinking, mind, and compassion [42]. Therefore, it is necessary to give the computer an artificial self [43].

Demands for quality nursing care and household responsibilities may be successfully met because of the anticipated automation and robotization of work activities through AI and other technological advancements [44]. AI has become the latest “buzzword” in the industry today. To date, no AI machine is able to ‘learn’ collective tacit knowledge. AI applies supervised learning and needs a great deal of data to do so. Humans learn in a ‘self-supervised’ way: they observe the world and figure out how it works. Humans need less data because they can understand facts and interpret them using metaphors, and they can transfer their abilities from one brain pathway to another. These are skills that AI will need if it is to progress to human intelligence [45].


6. Dialog systems

Dialog systems can be classified into task-oriented and non-task-oriented dialog systems. A task-oriented dialog system [46] performs the dialog necessary to achieve the user’s demands. A non-task-oriented dialog system [47] aims to continue the dialog itself. In order to continue a dialog, it is necessary to be able to handle non-task-oriented dialog [48]. At present, the number of people who can converse with a robot at the same time is one (one robot to one person). However, in order to develop a high-performance humanoid robot in the future, it is desirable that one robot can hold a dialog with three people.
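A toy dispatcher can make the distinction between the two system types concrete; the keyword rules and canned replies below are illustrative assumptions only, not taken from any cited system.

```python
from typing import Optional

# Toy sketch of the two dialog-system types described above: a task-oriented
# handler for concrete requests, and a non-task-oriented (chit-chat) handler
# whose only goal is to keep the dialog going.

TASK_RULES = {
    "blood pressure": "Let us measure your blood pressure now.",
    "medicine": "It is time for your medication. Shall I remind you again later?",
}

def task_oriented(utterance: str) -> Optional[str]:
    """Return a task response if the utterance matches a known task, else None."""
    for keyword, action in TASK_RULES.items():
        if keyword in utterance.lower():
            return action
    return None

def non_task_oriented(utterance: str) -> str:
    """Fallback chit-chat response intended only to continue the dialog."""
    return "That sounds interesting. Could you tell me more?"

def dialog_manager(utterance: str) -> str:
    # Prefer the task-oriented handler; fall back to chit-chat otherwise.
    return task_oriented(utterance) or non_task_oriented(utterance)

print(dialog_manager("My blood pressure worries me"))   # task-oriented branch
print(dialog_manager("The weather is nice today"))      # non-task-oriented branch
```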

Regarding the initiative in dialog, in the case of HNRs, the nurse and caregiver have the initiative. Dialog systems are also distinguished by whether or not they have a physical body; for example, Siri does not have a body, whereas AI speakers and communication robots do. In this book, HNRs are assumed to have physicality, with dialog-processing AI technology mounted on the robot. As for learning methods and modality, a voice dialog robot needs to learn from multiple types of information (multimodal learning) [49]. For example, a dialog between a skilled nurse and a care recipient is recorded while motions and biological data are simultaneously acquired by video and sensors, and the AI then learns from these as multimodal information.

The following are the steps involved in the HNR’s generation of a response sentence containing emotions in response to the patient’s speech. These steps are illustrated in Figure 6, and a minimal code sketch of steps 3 to 5 follows the figure.

  1. When the patient speaks, the HNR recognizes it as speech and converts it to linguistic information.

  2. The HNR acquires multimodal information (vocal tone, facial expression, and language) from the patient via various mounted sensors and uses machine-learning algorithms, such as neural network algorithms, to estimate the patient’s emotion [50].

  3. Latent Dirichlet Allocation (LDA) [51] is applied to the patient’s speech to detect the topics in their speech. The number of topics must be determined in advance. Here, the number of topics is determined as approximately 1000, and appropriate speech topics are acquired by clustering a set of topics obtained from the LDA based on the similarities between topics.

  4. Based on the speech topic thus obtained and semantic features of the speech content, search and detect sentences from the response database built in advance that agree with the speech topic and have similar content to the speech. Obtain the most appropriate sentence in response to the speech content as the result.

  5. Based on the response sentence thus obtained and the emotion estimated in (2), artificial emotions are synthesized using an emotion expression dictionary. To synthesize emotions, expressions semantically similar to those in the response sentence are found in the emotion expression dictionary and replaced with synonymous emotional expressions in accordance with the patient’s emotions. If the patient has expressed a feeling of “sadness”, then the expressions in the response sentence are replaced by expressions that suit the feeling of “sadness”.

  6. Lastly, a sentence is generated. Here, the response sentence synthesized with artificial emotions is corrected if it contains unnatural elements or if it does not match the context of the preceding sentence. Moreover, some sentences that evoke a speech response are added so as not to stop the conversation with the patient.

Figure 6.

Dialog processing mechanism using natural language processing.
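A minimal sketch of steps 3 to 5, using scikit-learn’s LDA implementation, is shown below. The tiny corpus, response database, and emotion expression dictionary are illustrative stand-ins; a real system would use roughly 1000 topics, clustering of similar topics, and far larger resources, as described above.

```python
# Sketch of steps 3-5: LDA topic detection on the patient's speech, retrieval
# of a response from a small response database, and substitution of
# emotion-matched expressions. All data below are illustrative placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

corpus = [  # training utterances used to fit the topic model
    "I want to eat sushi and rice tonight",
    "My knee hurts when I walk in the garden",
    "My grandchildren will visit me this weekend",
]
responses = [  # response database built in advance (step 4)
    "Sushi sounds delicious, what kind do you like best?",
    "Walking can be hard, shall we do some gentle exercise together?",
    "How wonderful that your family is coming to see you!",
]
emotion_dictionary = {  # emotion expression dictionary (step 5)
    "sadness": {"wonderful": "comforting", "delicious": "gentle on the stomach"},
    "joy": {},  # joyful wording is kept as-is
}

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus + responses)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

def respond(speech, patient_emotion):
    # Step 3: project the patient's speech into the topic space.
    speech_topics = lda.transform(vectorizer.transform([speech]))
    # Step 4: pick the response whose topic distribution is most similar.
    response_topics = lda.transform(vectorizer.transform(responses))
    best = cosine_similarity(speech_topics, response_topics).argmax()
    sentence = responses[best]
    # Step 5: swap in expressions that match the patient's estimated emotion.
    for plain, emotional in emotion_dictionary.get(patient_emotion, {}).items():
        sentence = sentence.replace(plain, emotional)
    return sentence

print(respond("I would love to eat sushi", patient_emotion="joy"))
```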

This system requires speech act type estimation, topic estimation, concept extraction, and frame processing. Few studies have experimented with and applied interactive processing using machine learning at a practical level in fields such as long-term care. The cause is considered to be the difficulty of responding to situations that do not occur in normal conversation, of acquiring utterances, and of predicting emotions in special dialog situations such as long-term care.

In order to incorporate the tacit knowledge of nursing/long-term care into AI as explicit knowledge, we will record multimodal dialog data at nursing/long-term care sites. A multimodal nursing/long-term care corpus is constructed by having skilled human nurses label the data. Based on this corpus, we will develop a machine-learning method for predicting care labels from multimodal information (a minimal sketch of such a predictor follows the list below). It is important to evaluate and tune the model with the goal of producing labels with high prediction accuracy. An important point for future development is to adapt current mainstream methods to long-term care dialog with older persons and perform dialog processing accordingly. The following problems arise when the target person holds a natural dialog with a dialog system built only by adapting current mainstream methods.

  • In a general dialog system, the next utterance cannot be made without the response of the other party, so the dialog may not continue depending on the situation.

  • The general knowledge/common sense obtained from dictionary data such as Wikipedia may differ from the knowledge/common sense of the care recipient.

  • Since there are cases where long-term care dialog is conducted based on tacit knowledge, there is a lot of information that cannot be seen from the collected corpus, and it is difficult for the machine-learning model to respond.

  • Depending on the care recipient’s situation, it may be difficult to predict emotions even with multimodal information.
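Under these caveats, the care-label predictor mentioned above might, in its simplest form, fuse per-modality features by concatenation and train an ordinary classifier on nurse-assigned labels. The sketch below uses synthetic feature vectors and labels purely for illustration; it is not trained on real long-term care data.

```python
# Illustrative sketch of a corpus-based care-label predictor: features from
# each modality (speech prosody, facial expression, transcript) are
# concatenated and a classifier is trained on nurse-assigned labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 200
prosody = rng.normal(size=(n_samples, 4))   # e.g. pitch and energy statistics
facial = rng.normal(size=(n_samples, 6))    # e.g. facial action-unit intensities
text = rng.normal(size=(n_samples, 8))      # e.g. sentence-embedding features
X = np.hstack([prosody, facial, text])      # early (concatenation) fusion
y = rng.integers(0, 3, size=n_samples)      # nurse-assigned care labels (3 classes)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy on synthetic data: {model.score(X_test, y_test):.2f}")
```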

We also believe it is necessary to build a language model suitable for caregiving dialog with older persons, for use in speech recognition, by collecting and analyzing a corpus of audio-visual caregiving dialogs. Not only the language model but also the acoustic model must be tailored to the nursing care site and the speech of older persons.

As HNRs learn to perform nursing functions, such as ambulation support, vital sign measurement, medication administration, and infectious disease protocols, the role of nurses in care delivery will change [52].

Hamstra [53] argued:

With less burden on nurses and improved quality of care for patients, collaboration with nurse robots will improve current trends of nursing shortages and unsafe patient ratios.

While there is much to be excited about, there are aspects of nursing that cannot be replaced by robots. Nurses’ ability to understand the context, interpret hidden emotions, recognize implications, reflect empathy, and act on intuition, are innate and human skills that drive success as nurses.

It is not obvious whether robotics can yet parallel these characteristics, as research on these topics is still ongoing. It is also important to remember that high-ability humanoid robots that can function much like a human being had not been developed as of 2020. Such robots are still in the developmental stage and cannot yet be used as autonomous, functional work robots. Therefore, intermediaries such as healthcare providers currently play critical roles within the transactive relations between and among robots and older persons.

Transactive is a term focusing on the transactional nature of things [54]. As an active process, it illuminates the main feature of the relationship between humans, and between humans and intelligent machines, which is always a transaction. The term thus illuminates the relationship between HNRs and human persons. Figure 7 shows a transactive relationship among older persons, an occupational therapist as intermediary, and Pepper.

Figure 7.

A transactive relationship among older persons, intermediary, and Pepper.


7. Prospects for the introduction of humanoid nursing partner robots in clinical nursing practice

Since the AI and robots used for nursing and long-term care are diverse, it is important to clarify what AI is, what robots are used for in nursing and long-term care, how to use them, and how to apply them to nursing care. Therefore, conducting research related to these topics is important.

In addition, we will search for solutions in clinical nursing settings and clarify the required performance and functions for AI. From the perspective of nursing and medical care, it is important to establish academic disciplines that explore these issues and collaborate with AI and robot developers in faculties of engineering. For example, consider how much, and what kind of, nursing work a healthcare robot can perform. Looking at the current situation, various robots without human shapes have already “invaded” the medical world. There are robots for improving efficiency and accuracy, such as surgical robots and robots that support dispensing operations, as well as robots that assist caregivers by providing transfer and bathing support.

In the future, it can be imagined that a robot that scans the course of blood vessels and is programmed with injection technique will perform injections. It may be possible to secure blood vessels for intravenous injection more safely than nurses and doctors can. However, what should we do to ensure safety when such a robot breaks down or goes out of control? Like humans, it must be able to judge whether or not to continue puncturing.

Will humans take on the role of monitoring robots in the relationship between robots and humans (nurses)? Will nurses add the role of monitoring robots to keep patients safe? Alternatively, just as computer engineers were stationed in hospitals when computers were introduced, will robot-dedicated engineers be stationed in hospitals?


8. Conclusion

The purpose of this chapter was to explore the issues in developing conversational dialog for HNRs in nursing, especially in long-term care, and to anticipate the introduction of such robots into clinical practice. The major issues concern HNRs’ verbalization, including inappropriate intonation, voice range, and speech speed. These issues challenge the introduction of HNRs into clinical practice. In order for robots to meet the demands and situations of nursing and healthcare, it is essential to improve the conversation functions and performance of HNRs, such as the ability to express appropriate verbal and non-verbal expressions. To meet this challenge, collaboration between nursing researchers and AI and machine developers is recommended.


Acknowledgments

This work was partially supported by JSPS KAKENHI Grant Number JP17H01609. We would like to express our sincere appreciation to all the patients and participants who contributed to this article. With many thanks to Dr. Savina Schoenhofer for reviewing this chapter prior to its publication.


Conflict of interest

The authors have no conflicts of interest directly relevant to the content of this article.

References

  1. Muramatsu N, Akiyama H: Japan: super-aging society preparing for the future. Gerontologist. 2011;51(4):425-432. DOI: 10.1093/geront/gnr067
  2. Buchan J, Aiken L: Solving nursing shortages: a common priority. J Clin Nurs. 2008;17(24):3262-3268. DOI: 10.1111/j.1365-2702.2008.02636.x
  3. Murray MK: The nursing shortage. Past, present, and future. J Nurs Adm. 2002;32(2):79-84. DOI: 10.1097/00005110-200202000-00005
  4. World Health Organization. Global strategy on human resources for health: Workforce 2030 [Internet]. 2006. Available from: https://apps.who.int/iris/bitstream/handle/10665/250368/9789241511131-eng.pdf;jsessionid=F4F54C19AA58FCDA94DE34746F6DA886?sequence=1 [Accessed: 2020-11-24]
  5. Stuck RE, Rogers WA. Understanding older adults' perceptions of factors that support trust in human and robot care providers. In: Proceedings of the 10th International Conference on Pervasive Technologies Related to Assistive Environments, PETRA 2017; 21-23 June 2017; Island of Rhodes, Greece. New York, NY, USA: ACM; 2017. p.372-377. DOI: 10.1145/3056540.3076186
  6. Kakria P, Tripathi NK, Kitipawang P: A Real-Time Health Monitoring System for Remote Cardiac Patients Using Smartphone and Wearable Sensors. Int J Telemed Appl. 2015;373474. DOI: 10.1155/2015/373474
  7. Law M, Sutherland C, Ahn HS, et al.: Developing assistive robots for people with mild cognitive impairment and mild dementia: a qualitative study with older adults and experts in aged care. BMJ Open. 2019;9(9):e031937. DOI: 10.1136/bmjopen-2019-031937
  8. Betriana F, Osaka K, Matsumoto K, Tanioka T, Locsin RC: Relating Mori's Uncanny Valley in generating conversations with artificial affective communication and natural language processing. Nurs Philos. 2021;22(2):e12322. DOI: 10.1111/nup.12322
  9. Pepito JA, Ito H, Betriana F, Tanioka T, Locsin RC: Intelligent humanoid robots expressing artificial humanlike empathy in nursing situations. Nurs Philos. 2020;21:e12318. DOI: 10.1111/nup.12318
  10. Pou-Prom C, Raimondo S, Rudzicz F: A Conversational Robot for Older Adults with Alzheimer's Disease. ACM Trans Hum-Robot Interact. 2020;9(3):Article 21. DOI: 10.1145/3380785
  11. Nocentini O, Fiorini L, Acerbi G, Sorrentino A, Mancioppi G, Cavallo F: A Survey of Behavioral Models for Social Robots. Robotics. 2019;8(3):54. DOI: 10.3390/robotics8030054
  12. Behera A, Matthew P, Keidel A, et al.: Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems. Int J Artif Intell Educ. 2020;30:236-270. DOI: 10.1007/s40593-020-00195-2
  13. Ray A. Compassionate Artificial Intelligence. Compassionate AI Lab; 2018. 258p. ISBN-10: 9382123466
  14. Kerasidou A: Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ. 2020;98(4):245-250. DOI: 10.2471/BLT.19.237198
  15. Tornincasa S, Vezzetti E, Moos S, et al.: 3D Facial Action Units and Expression Recognition using a Crisp Logic. Computer-Aided Design & Applications. 2019;16(2):256-268. DOI: 10.14733/cadaps.2019.256-268
  16. Agbolade O, Nazri A, Yaakob R, Ghani AA, Cheah YK: 3-Dimensional facial expression recognition in human using multi-points warping. BMC Bioinformatics. 2019;20(1):619. DOI: 10.1186/s12859-019-3153-2
  17. Vithanawasam TMW, Madhusanka BGDA: Dynamic Face and Upper-Body Emotion Recognition for Service Robots. In: Proceedings of the 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS); 6-8 June 2018; Singapore; 2018. p.428-432. DOI: 10.1109/ICIS.2018.8466505
  18. Milhorat P, Lala D: A Conversational Dialogue Manager for the Humanoid Robot ERICA. Advanced Social Interaction with Agents. 2018;119-131.
  19. Bilac M, Chamoux M, Lim A. Gaze and filled pause detection for smooth human-robot conversations. In: Proceedings of the 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids); 15-17 November 2017; Birmingham, UK. p.297-304. DOI: 10.1109/HUMANOIDS.2017.8246889
  20. Aoyagi S, Hirata K, Sato-Shimokawara E, Yamaguchi T. A Method to Obtain Seasonal Information for Smooth Communication Between Human and Chat Robot. In: 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS); 2018; Toyama, Japan. p.1121-1126. DOI: 10.1109/SCIS-ISIS.2018.00176
  21. Lala D, Nakamura S, Kawahara T. Analysis of Effect and Timing of Fillers in Natural Turn-Taking. In: Proceedings of Interspeech 2019; 15-19 September 2019; Graz, Austria. p.4175-4179. DOI: 10.21437/Interspeech.2019-1527
  22. Simul NS, Ara NM, Islam MS. A support vector machine approach for real time vision based human robot interaction. In: Proceedings of the 19th International Conference on Computer and Information Technology (ICCIT); 2016; Dhaka, Bangladesh. p.496-500. DOI: 10.1109/ICCITECHN.2016.7860248
  23. Liu Z, Wu M, Cao W, et al.: A Facial Expression Emotion Recognition Based Human-robot Interaction System. IEEE/CAA Journal of Automatica Sinica. 2017;4(4):668-676.
  24. Miwa H, Umetsu T, Takanishi A, Takanohu H. Human-like robot head that has olfactory sensation and facial color expression. In: Proceedings 2001 ICRA, IEEE International Conference on Robotics and Automation; 21-26 May 2001; Seoul, South Korea. p.459-464. DOI: 10.1109/ROBOT.2001.932593
  25. Coradeschi S, Ishiguro H, Asada M, Shapiro SC, Thielscher M, Ishida H: Human-Inspired Robots. IEEE Intelligent Systems. 2006;21(4):74-85. DOI: 10.1109/MIS.2006.72
  26. Martinez-Hernandez U, Prescott TJ. Expressive touch: Control of robot emotional expression by touch. In: Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN); 26-31 August 2016; New York. New York: IEEE; 2016. p.974-979. DOI: 10.1109/ROMAN.2016.7745227
  27. Li Y, Hashimoto M. Effect of emotional synchronization using facial expression recognition in human-robot communication. In: Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics; 7-11 December 2011; Karon Beach, Phuket. New York: IEEE; 2011. p.2872-2877. DOI: 10.1109/ROBIO.2011.6181741
  28. Yoon Y, Ko WR, Jang M, Lee J, Kim J, Lee G. Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots. In: Proceedings of the 2019 International Conference on Robotics and Automation (ICRA); 20-24 May 2019; Montreal. New York: IEEE; 2019. p.4303-4309. DOI: 10.1109/ICRA.2019.8793720
  29. Hua M, Shi F, Nan Y, Wang K, Chen H, Lian S. Towards More Realistic Human-Robot Conversation: A Seq2Seq-based Body Gesture Interaction System. In: Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 3-8 November 2019; Macau. New York: IEEE; 2019. p.247-252. DOI: 10.1109/IROS40897.2019.8968038
  30. Salem M, Rohlfing K, Kopp S, Joublin F. A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction. In: 2011 RO-MAN; 31 July-3 August 2011; Atlanta. New York: IEEE; 2011. p.247-252. DOI: 10.1109/ROMAN.2011.6005285
  31. Mori M: The Uncanny Valley. Energy. 1970;7:33-35 (in Japanese).
  32. Prakash A, Rogers WA: Why Some Humanoid Faces Are Perceived More Positively Than Others: Effects of Human-Likeness and Task. International Journal of Social Robotics. 2015;7(2):309-331. DOI: 10.1007/s12369-014-0269-4
  33. Müller BCN, Gao X, Nijssen SRR, Damen TGE: I, Robot: How Human Appearance and Mind Attribution Relate to the Perceived Danger of Robots. International Journal of Social Robotics. 2020. DOI: 10.1007/s12369-020-00663-8
  34. Boucher JD, Pattacini U, Lelong A, et al.: I reach faster when I see you look: gaze effects in human-human and human-robot face-to-face cooperation. Frontiers in Neurorobotics. 2012;6(3). DOI: 10.3389/fnbot.2012.00003
  35. Tatsukawa K, Nakano T, Ishiguro H, Yoshikawa Y: Eyeblink synchrony in multimodal human-android interaction. Scientific Reports. 2016;6:39718. DOI: 10.1038/srep39718
  36. Xu T, Zhang H, Yu C: See you see me: the role of eye contact in multimodal human-robot interaction. ACM Trans Interact Intell Syst. 2016;6(1):2. DOI: 10.1145/2882970
  37. Cid F, Moreno J, Bustos P, Núñez P: Muecas: a multi-sensor robotic head for affective human robot interaction and imitation. Sensors. 2014;14(5):7711-7737. DOI: 10.3390/s140507711
  38. XING INC. Kenkou-oukoku TALK for Pepper [Internet]. Available from: https://roboapp.joysound.com/talk/ [Accessed: 2020-10-2]
  39. Miyagawa M, Yasuhara Y, Tanioka T, et al.: The Optimization of Humanoid Robot’s Dialog in Improving Communication between Humanoid Robot and Older Adults. Intelligent Control and Automation. 2019;10(3):118-127. DOI: 10.4236/ica.2019.103008
  40. Osaka K, Sugimoto H, Tanioka T, et al.: Characteristics of a Transactive Phenomenon in Relationships among Older Adults with Dementia, Nurses as Intermediaries, and Communication Robot. Intelligent Control and Automation. 2017;8(2):111-125. DOI: 10.4236/ica.2017.82009
  41. Greeff J, Belpaeme T: Why Robots Should Be Social: Enhancing Machine Learning through Social Human-Robot Interaction. PLoS ONE. 2015;10(9):e0138061. DOI: 10.1371/journal.pone.0138061
  42. Asada M: Development of artificial empathy. Neuroscience Research. 2015;90:41-50. DOI: 10.1016/j.neures.2014.12.002
  43. Endres LS: Personality engineering: Applying human personality theory to the design of artificial personalities. Advances in Human Factors/Ergonomics. 1995;20:477-482. DOI: 10.1016/S0921-2647(06)80262-5
  44. “Future of Work 2035: For Everyone to Shine” Panel. “Future of Work: 2035” - For Everyone to Shine - [Report] [Internet]. 2016. Available from: https://www.mhlw.go.jp/file/06-Seisakujouhou-12600000-Seisakutoukatsukan/0000152705.pdf [Accessed: 2020-11-30]
  45. Badimo KH. How Artificial Intelligence can help to address some of the limitations of knowledge management [Internet]. 2019. Available from: https://www.linkedin.com/pulse/how-artificial-intelligence-can-help-address-some-knowledge-badimo/ [Accessed: 2020-11-30]
  46. Liao K, Liu Q, Wei Z, et al.: Task-oriented Dialogue System for Automatic Disease Diagnosis via Hierarchical Reinforcement Learning. ArXiv. 2020; arXiv preprint arXiv:2004.14254.
  47. Isoshima K, Hagiwara M. A Non-Task-Oriented Dialogue System Controlling the Utterance Length. In: Proceedings of the 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems and 19th International Symposium on Advanced Intelligent Systems (SCIS-ISIS 2018); 5-8 December 2018; Toyama, Japan. Institute of Electrical and Electronics Engineers Inc.; 2019. p.849-854. DOI: 10.1109/SCIS-ISIS.2018.00140
  48. Zhou Y, Black AW, Rudnicky AI. Learning Conversational Systems that Interleave Task and Non-Task Content. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17); 19-25 August 2017; Melbourne, Australia. International Joint Conferences on Artificial Intelligence; 2017. p.4214-4220. Available from: https://www.ijcai.org/Proceedings/2017/0589.pdf
  49. Fernández-Rodicio E, Castro-González Á, Alonso-Martín F, Maroto-Gómez M, Salichs MÁ: Modelling Multimodal Dialogues for Social Robots Using Communicative Acts. Sensors. 2020;20(12):3440. DOI: 10.3390/s20123440
  50. Khalil RA, Jones E, Babar MI, Jan T, Zafar MH, Alhussain T: Speech Emotion Recognition Using Deep Learning Techniques: A Review. IEEE Access. 2019;7:117327-117345. DOI: 10.1109/ACCESS.2019.2936124
  51. Blei DM, Ng AY, Jordan MI: Latent Dirichlet Allocation. Journal of Machine Learning Research. 2003;3:993-1022.
  52. Robert N: How artificial intelligence is changing nursing. Nursing Management. 2019;50(9):30-39. DOI: 10.1097/01.NUMA.0000578988.56622.21
  53. Hamstra B. Will these nurse robots take your job? Don't freak out just yet [Internet]. 2020. Available from: https://nurse.org/articles/nurse-robots-friend-or-foe/ [Accessed: 2020-12-23]
  54. Tanioka T: The Development of the Transactive Relationship Theory of Nursing (TRETON): A Nursing Engagement Model for Persons and Humanoid Nursing Robots. Int J Nurs Clin Pract. 2017;4:223. DOI: 10.15344/2394-4978/2017/223
