Open access peer-reviewed chapter

Mechanical Empathy Seems Too Risky. Will Policymakers Transcend Inertia and Choose for Robot Care? The World Needs It

Written By

Johan F. Hoorn

Submitted: 24 November 2016 Reviewed: 05 June 2017 Published: 06 December 2017

DOI: 10.5772/intechopen.70019

From the Edited Volume

Robotics - Legal, Ethical and Socioeconomic Impacts

Edited by George Dekoulis


Abstract

An ageing population, increasing longevity and below-replacement fertility increase the care burden worldwide. This comes with age-related diseases such as Alzheimer disease and other dementias, cardiovascular disorders, cancer and, hardly noticed, pandemic loneliness. The burden, both emotionally and economically, is becoming astronomical and cannot be carried by the few who must combine care with work and family. Social solidarity programmes are part of the answer, but they do not relieve the human helper. Yet many hands are needed where only a few are available. Capacity issues can be solved by the introduction of care robots. Research shows that the state of the art is such that care robots can become nonthreatening social entities, accepted and appreciated by the lonesome. Massive employment of such devices is impeded, however, because sufficient governmental support of R&D, financial and regulatory, is lacking. This is where policymakers should step in: get over their moral prejudices and those of their voters, and stop being afraid of losing political backing. They will regain it in the long run.

Keywords

  • global ageing
  • healthcare system
  • social robots
  • innovation policy

1. Introduction

It is no news that we have a global ageing problem. For years now, the World Health Organization, the United Nations and national governments have published world-population statistics that show contracting demographic pyramids. The current world population is over 7 billion people. Projections for 2050 are between 8.3 and 10.9 billion, depending on low or high fertility assumptions ([1], p. 2). The Population Reference Bureau expects [2] that by 2050, about 1.5 billion people aged 65 or older will inhabit the earth (16% of the world population), whereas in the 1950s this was a mere 5%. The United Nations expects that 32% of world citizens will be 60 or older by 2050 ([1], p. 4, bullet 5).

Not only that, the number of elderly is already higher than the number of younger people (under age 15), and around 2050, estimates are that there will be twice as many elderly as children ([1], p. 4, bullet 5). Below-replacement fertility has persisted for about 30 years now ([1], p. 5, bullet 14), and in China, for instance, the one-child policy together with a growing economy will lead to many poor Chinese parents being taken care of by a few prosperous children. More elderly taken care of by a minority of the young is a direct threat to economic growth worldwide. To counter such adverse effects, Germany plans to increase the influx of immigrants, imaginably with more integration problems. For the elderly, the top priority is to counter loneliness; care managers also state that self-management is prime.

The number of single-person households is increasing, and people are getting older than before ([1], p. 6, bullet 19). Moreover, people expect low-cost 24 h high-quality care. A considerable number of older people suffer from loneliness and social isolation, which negatively affects their health and well-being [3]. Loneliness is an epidemic in modern society [4]. Increasing longevity not only adds to an ageing population feeling lonely but also introduces specific health issues, such as increasing cardiovascular diseases and cancer. In their joint 2011 publication, the National Institute on Aging, National Institutes of Health and World Health Organization estimated that globally, between 27 and 36 million people suffer from Alzheimer disease or other dementias ([5], p. 14). After age 65, the occurrence of dementias doubles with every 5 years of age (ibid.). In addition, annual new cases of cancer will reach 17 million by 2020 and then continue to grow up to 27 million by 2030 ([5], p. 19). Long-term care is required and will be centred on ‘home nursing, community care and assisted living, residential care, and long-stay hospitals’ ([5], p. 23; also, see Ref. [6]).

In view of what lies ahead of us, the question is not only how many people we can recruit from a declining population to take care of the old and frail, but also how many healthy people will be left to keep the economy going and cover the formidable expenses involved. How will ‘beanpole families’ ([5], p. 22) make sure that enough informal care is supplied to keep their parents ‘healthy longer, delaying or avoiding disability and dependence’ (ibid., p. 23), and who is going to earn all the money to cover the expenses?

The good news is that apart from social solidarity programmes, which still require the effort of many, there are technological innovations almost ripe enough to help make up for the loss in care support. By now, evidence is accumulating that robots can to a large extent compensate for social isolation and sustain self-management, much to the relief of the elderly (e.g. [7–9]). They can assist (e.g. the Riba II Care Support Robot for Lifting Patients); they can monitor (e.g. the NEC PaPeRo, Partner-type Personal Robot, for telecommunication); and they can provide companionship (e.g. the AIST Paro, a therapeutic robot seal).

A number of social robots are available on the market and already successful. Nao/Zora does physical exercises, brain games and dance moves and can lead a bingo game. Paro, the baby seal, squeaks, cries, recognizes its given name and responds to touch. It can reach deep-dementia patients where humans have lost all contact. Kaspar is used by autistic children to practise emotion expression and ways of conduct. Zeno and his sister Alice R50 work with the elderly. And then, of course, there are robot surgery arms, drawer pullers and lid openers, but these do not aim for companionship and bonding.


2. Problem statement

Almost everyone in the care supply chain is in favour of employing social robots, each for their own reasons. Managers see cost reduction; care professionals see a decrease in work pressure (although they fear dehumanization). Although a little ashamed [10], informal caretakers admit to finding it a relief, and clients are happy to be helped [11]. The laggards are the top decision-makers in national politics and multinational corporations and the boards of directors of insurance companies and housing cooperatives. Their contribution would lie in financial and regulatory aid to make novel laboratory findings available to the public at large. Instead, they constantly worry about losing the support of their voters, shareholders and allies, and hence they freeze. At face value, mechanical empathy seems politically too risky.


3. Purpose of the chapter

On behalf of the old and lonely and their loved ones, this chapter aims to mobilize policymakers to take action and get involved in a global situation that is rapidly deteriorating. They have to do two things. The first is to provide just enough financial leeway to develop laboratory prototypes into market-ready devices, facilitating take-up by service providers. The second is to reduce rule pressure so that innovations can easily be tried and tested without worrying too much about ornate grant proposals, detailed managerial accounts, intellectual property rights, codes of medical ethics, etc. Those demands may kick in once the robots are market-ready.

The Japanese have understood this better. In 2013, the Tokyo Times announced 25.3 million dollars in governmental subsidies to stimulate a knowledge-driven care-robotics industry [12]. Transparency Market Research (cited in The Economist) expected in 2014 that the market for medical and care robots would rise from US$ 5.5 billion in 2011 to US$ 13.6 billion in 2018 [13]. So where is the rest of the world? Standing by and watching.


4. Social action

Those who have a social network currently mobilize their family and friends to organize their future care. This is done in small networks where professionals and informal caretakers negotiate care tasks and so relieve the formal care networks (e.g. [14]). To enable people to combine care with work, it has been suggested that organizations for home care should assign professionals to extra-residential caretakers of single-household clients [15]. If intercultural mediation is required, social workers may be engaged (cf. [16]). In spite of the good intent, however, the care burden may be so heavy that work and care cannot be combined effectively [17].

Of course, the call for solidarity is genuine, and that people should help each other goes without saying. But apart from this moral stance, we should have the capacity to deliver. Will there be enough hands, and can we afford those hands? A banal but very practical matter. Where robot help is feared on humanitarian grounds, humanitarian considerations are equally at stake if we cannot help everyone well and people become emotionally and physically impoverished. Robots are feared because of job loss, but in the long run there will be insufficient people for those jobs anyway, so we had better make sure robots are up to speed to fill in for us. Voters may demand the creation of more jobs, but what are those jobs good for if soon there will be no one to fill them?


5. Low-hanging fruit

Luckily, little is needed technology-wise to make a positive contribution to the health and well-being of the socially impaired. For example, research indicates that a feeling of social presence and relatedness is established as soon as the robot simply invites a response [18], is modest and polite [19] and has a certain cuteness to it (e.g. [20]). That is not too hard to establish; toy factories have lots of experience (enter Bandai’s Tamagotchis). The only thing toy makers do not have is knowledge of what to program, how to keep the interaction going and what health protocols to follow (exit Tamagotchis).

However, there is quite some concern over whether empathy should be left to a machine. The point is that human care is not always that empathic either. Day care starts in the morning (half an hour), in the afternoon the nurse checks whether the client is alright (15 min), and in the evening the client is helped with dinner (another 15 min). The other 7 h of the day are filled with staring out of the window and counting the buses that drive by. Loneliness can be so aggravating that anything will do, even a friendship machine.

We conducted a social experiment with robot Alice on the couch of three grandmothers [21]. The experiment was recorded in the by now internationally known documentary Alice Cares, directed by Sander Burger in 2014 [22]. The experiment was to find out whether a human can build up a relationship with a machine that pretends to be empathic. If the documentary (and our other research) makes one thing clear, it is that this is so (which, since the Tamagotchi craze, is perhaps not surprising). We now know better what to look for when building a social application. Good speech is more important than serving coffee. Automatic conversation analysis is more important than gait technology. Face recognition does far more than hand gestures. In addition, the robot should not talk too much about itself but invite the users to share their feelings. Above all, the robot itself must be placed in a position of dependence and assist the needy in an unobtrusive manner [21].

Is this different for men and women? We do not have hard results here, but observations led us to believe that men are as interested as women, but they take a detour. Women directly start to bond with the machine, whereas men first test it and want to know how the technology works before they make friends with it.

One might counter that robots are rational and cannot work with emotions. The point is that robots do not need an empathic understanding of humans in order to raise empathy. People love their cats and dogs and attribute all kinds of human traits to them. Likewise, when the robot falls off a chair or is ‘hurt’ in any way, people feel sorry for the machine. In one of our studies [23], we observed that people reckoned that after maltreatment, the robot must feel sad (!).

How important is it, then, to formally model emotion processes and try to simulate empathy with a machine? Scientifically, this is very important. In trying to make a formal model, the scientist stumbles upon his or her own implicit assumptions and finds many research gaps. When the model works fine (is internally consistent) and, in simulation, participants cannot distinguish its behaviour from that of humans (is empirically valid), we may argue that we understand human nature better than before. You only understand it if you can build it.

In application, proper emotion models are less important. Users find other functionalities (e.g. chatting about the weather) far more interesting, also in companion machines. Emotion is a nice-to-have, not a must-have. If some emotional stimulus comes from the robot, that will do the trick. People will project and attribute all kinds of human-like qualities to the machine and deem it ‘emotionally responsive’. They do not take it for a human, but the responses are recognizable enough to evoke a human response, like an animal does. How that is achieved, through full-fledged psychology-based artificial intelligence (AI) or some artistic-engineering trickery, is immaterial. People take the bait anyway.


6. What a social robot is capable of, now and in the future

The Alice R50 we work with was designed by David Hanson from Texas and manufactured by the firm Robokind. The machine we have right now is reassembled from the working parts of two machines that collapsed most of the time, and the basic software that drives the motors, speech engine and cameras was completely reprogrammed by Germans Media from Amsterdam. Apart from that, we developed software to steer Alice’s and other robots’ behaviours, for example, to simulate friendship with the user [24].

In the documentary film Alice Cares, you see Alice in action on the sofas of three elderly ladies. In the beginning, the ladies were invited to our office (a well-controlled environment), talking to a robot that followed our conversational version of the MANSA protocol for care evaluation, using closed questions (also well controlled).¹ That led to very grumpy replies, but yes, here the robot acted autonomously most of the time. In the apartments of the ladies, however, we were in open environments, and the conversations went from coffee to photo books to football to singing and children and back again; so here we had Alice under remote control, and she was helped quite a bit by the engineer. The only things she did autonomously in this situation were following faces, blinking, head motion, lip-sync and translating speech input to text and typed-in text back to speech.

At the moment, we are in the process of making better conversational software. In view of the current state of the art in speech technology, natural language processing, conversation analysis and knowledge representation in the semantic web, it should be feasible to have a simple, well-defined chat about the coffee, the weather, the family, the money machine, an appointment with the doctor and so forth, as sketched below. But that does take some more development and technology-integration work.
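
To give an impression of what ‘well defined’ means here, consider the toy Python sketch below, which matches an utterance to one of a few closed topics. All keywords and replies are invented for illustration; a real system would sit behind speech recognition and synthesis and keep conversational context.

```python
# Minimal sketch of a closed-domain small-talk matcher (keywords and
# replies are illustrative only).
INTENTS = {
    "coffee":  (["coffee", "tea", "drink"], "Shall I remind you when the coffee is ready?"),
    "weather": (["weather", "rain", "sunny"], "It looks like a fine day for a short walk."),
    "family":  (["son", "daughter", "grandchild"], "Tell me more about them!"),
}

def reply(utterance: str) -> str:
    words = utterance.lower().split()
    for keywords, answer in INTENTS.values():
        if any(k in words for k in keywords):
            return answer
    # The fallback keeps turn-taking going instead of ending the chat.
    return "I am not sure I follow. Could you tell me more?"

if __name__ == "__main__":
    print(reply("Is it going to rain today?"))
```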

Thus, if you want to talk about ‘a social robot’, you have to distinguish between the robot as hardware (the puppet) and the robot as software (machine behaviour). Within the software, there are three things it should be doing: perceiving (data input), processing (information throughput) and executing actions such as speech and facial expressions (output).
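
In skeleton form, these three layers amount to a classic sense-think-act loop. The Python sketch below is only a schematic rendering of that division, not the software that drives Alice; all class and method names are invented.

```python
# Schematic sense-think-act loop mirroring the three software layers above.
class SocialRobotController:
    def perceive(self):
        """Input layer: camera frames, microphone audio, touch events."""
        return {"face_detected": True, "utterance": "hello"}  # stubbed sensor data

    def process(self, percept):
        """Throughput layer: interpret percepts and decide on a response."""
        if percept["face_detected"]:
            return {"speech": f"Nice to see you! You said: {percept['utterance']}",
                    "expression": "smile"}
        return {"speech": None, "expression": "neutral"}

    def act(self, decision):
        """Output layer: speech synthesis, facial expression, head motion."""
        if decision["speech"]:
            print(f"[say:{decision['expression']}] {decision['speech']}")

    def step(self):
        self.act(self.process(self.perceive()))

SocialRobotController().step()
```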

In addition, you should distinguish between well-controlled environments such as lab settings and open environments such as a living room. The software that potentially can drive a robot’s behaviour is really good at throughput in well-controlled environments (e.g. [24]).

What robot software is really bad at is perception and execution, which are limited to following a face (input) or standard speech recognition and production and some head movements, smiles and frowns (output). Yes, there are better systems available (e.g. the Google Assistant), but common researchers have no access to them (e.g. IBM Watson, the full version, that is): no one is allowed to see the source code, and private data are free for the taking. Moreover, most robot hardware is engineered pretty badly and, after intensive use, shows quite some wear and tear.

In other words, state-of-the-art social robots could make better use of the AI that is already available, such as computer vision and deep learning. On the output side, there are hardly sufficient expression possibilities: robots should have better facial expressions, gestures, conversational phrases, etc. However, that is not AI but interaction design. Many AI modules for learning, reasoning, etc. are in a testing phase and sometimes seem to work pretty well. However, the numbers those systems produce may indicate that the system ‘likes its user’, but that is not yet translated into communication fit for human consumption (speech, facial expression, etc.). So the robots one sees on the Internet are partly mock-up, partly prototype and partly autonomous system. That the robot is operated by remote control and is not independent is also due to the fact that the software engineering of all the different components is not done right yet: too many loose ends, badly integrated systems, too unreliable for grandma on the couch. But we are progressing.

If all this is ready, we may have a system that can independently speak and have some understanding of its user and the situation both are in. Unavoidably, people will think of feature films in which robots take over the world and advanced AI outsmarts humanity; and serious people such as Hawking and Musk add fuel to the flames by declaring that AI may be the biggest threat to humanity ever. But why would you build a car without a brake or an airplane without landing gear? You do not have to teach a robot everything we know. You do not have to give it the capacity to learn evil things on its own. AI does not grow on a tree; humans create it. My standpoint is that humans are the biggest threat to humanity: in their pursuit of supremacy, some nasty people produce nuclear bombs, war gasses and, yes, also killer bots. But they do not have to. Yet we allow them to. We use anything as a weapon, anything. That is a humanistic and not a technological problem. It is politics in its essence.


7. People respond differently to social robots than to smartphones or fellow humans

Certain critical thinkers do not get what the fuss is all about. If a social robot is merely a plastic hull around an ordinary computer with some fancy AI, then why bother about the childish appearance? You could install everything on your smartphone, and it would do the same. You could even use an animated avatar to make it more human-like, but actually even that is pure nonsense. What is the difference between this technology and, for instance, the use of monitors in the context of telecare, apart from the visual image of a robot?

All this is true, but the visual image of the robot is the quintessence. Critical thinkers look at technology in a way that is too rational and functional. This is psychology. The posterior part of the temporal cortex of babies a few days old already responds to faces, not to smartphones. In confrontation with a social robot, the human emotional system takes precedence over cool cognition as soon as the relevance of an unfulfilled need, here companionship, is higher than the question of whether that need is met by a human being or a machine. This is why a robot, physically present, with eyes and a voice, is more effective than a monitoring camera, an app on a smartphone or a 2D avatar on a screen. People tend to forget that there may be a human operator behind the machine, and here we obviously touch on shaky ethical ground. The robot is just human enough to invite self-disclosure by the user (cf. [25]) but socially peripheral enough that a person may not expect any consequences from the life confessions one may make. The response to the robot is stronger and more personal than to an avatar on a screen or a monitoring camera. Hence, people take the role of grandparent over the robot, feel the need to feed it and look straight into its camera-eyes. The response is also more personal than to the average human being because there are no social repercussions to what you say or do. Hence, revelations and family histories are told that no one ever heard of.


8. Security: you lose self-disclosure when outsiders start snooping

Social robots invite users to disclose personal information, even things they do not tell other human beings. That is a powerful quality for therapeutic purposes; and if the robot is connected to a care centre, this information can help well-meaning caretakers understand how to help or treat a person. But it also makes privacy violations all too easy for those snooping around: managers, insurers, governments, tech companies, tabloids and the police.

Suppose that a social robot is monitoring a patient autonomously and is equipped to call the ward when someone falls (fall detection). The first problem is that of missed signals and false alarms. How does a robot know that someone has really fallen? There are so many situations in which a person may hit the floor: an incidental exercise, playing with the cat, making fun with the grandchildren, or just fooling around and simulating a fall to draw attention. What exactly are the physical signals that indicate ‘emergency’, and how do we make sense of the context in which those physical indications occurred? Either we have a robot that constantly warns about nothing (false alarms) or one that is so ‘contextually sensitive’ that it overlooks the crucial symptoms (missed signals). This makes the robot unreliable, and even when something is the matter, the nurses at the ward may stumble into ‘the pitfall of the liar’, not believing the robot when they actually should. Put the robot back into remote control then. Have a person look over its shoulder. But then we are also back to spying around, gossiping or, worse, not telling patients that they are observed.
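
The trade-off between false alarms and missed signals can be made concrete as a single decision threshold on a classifier’s confidence. The sketch below is a toy illustration with fabricated confidence values: raising the threshold suppresses false alarms but inevitably lets real falls slip through.

```python
# Toy illustration of the false-alarm vs. missed-signal trade-off.
# Each event: (classifier confidence that a fall occurred, ground truth).
events = [(0.95, True), (0.80, False), (0.60, True), (0.40, False), (0.30, True)]

def alarm_stats(threshold):
    false_alarms = sum(1 for c, fell in events if c >= threshold and not fell)
    missed = sum(1 for c, fell in events if c < threshold and fell)
    return false_alarms, missed

for t in (0.25, 0.50, 0.90):
    fa, miss = alarm_stats(t)
    print(f"threshold={t:.2f}: {fa} false alarm(s), {miss} missed fall(s)")
# Low thresholds flood the ward with alarms; high thresholds overlook real falls.
```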

Robots nowadays are often implemented on remote servers, for instance, for converting speech to text. That may actually violate people’s privacy [26]. The server can easily keep a permanent record of what people have said and make it available years later to companies or governments that might mean those people no good (ibid.). This is bad news if robots are to care for millions of people in a decade or two. The goal is to stop pervasive poking and prying.

The only solution is to convert speech to text locally, on the robot’s on-board computer, never transmitting it elsewhere [26]. The goal is to make sure the robot does not send the speech data anywhere, not as audio and not as text. Rejecting cloud speech-to-text services is a prerequisite, but rejecting those services alone is not enough: we need to be sure the robot does not in fact send anyone that data. For the same reasons, the software in the robot should be free software; otherwise, the next ‘upgrade’ might include surveillance, as in so many other products (ibid.). All this does not necessarily mean disconnecting the robot from the Internet. All computing can be done inside the robot, with the Internet connection used merely to receive and send other data when the user wants to communicate with someone else. The robot must not be designed to farm out its own computational activities to some other computer or remote service. To a limited extent, this may reduce the potential usefulness of the robot, but ‘freedom sometimes requires a sacrifice’, and usefulness is not the only virtue to strive for [26].

For the robot not to be someone else’s snoop, it must not send the user’s commands, as audio or as text, anywhere else. In practice, why would it send the audio elsewhere? For speech-to-text conversion. Therefore, the robot must do that conversion locally. Doing the conversion locally is necessary, but not sufficient, for respecting privacy: the robot could still send the audio elsewhere even after converting it locally, and it must not do so [26].
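
One way to honour this rule in software is to make ‘never leaves the device’ a structural property rather than a policy promise: the speech pipeline simply has no code path that serializes audio or text to a socket. The Python sketch below is an assumption-laden illustration; OfflineRecognizer stands in for any on-board speech-to-text engine and is not a real library.

```python
# Sketch: a speech pipeline whose only output is a local action, never a
# network message. `OfflineRecognizer` is a placeholder for an on-board
# speech-to-text engine; no cloud service is ever contacted.

class OfflineRecognizer:
    def transcribe(self, audio: bytes) -> str:
        # On-board model inference would happen here; stubbed for the sketch.
        return "what time is my appointment"

class PrivateSpeechPipeline:
    NETWORK_ALLOWED = False  # structural rule: this module opens no sockets

    def __init__(self):
        self.stt = OfflineRecognizer()

    def handle(self, audio: bytes) -> str:
        text = self.stt.transcribe(audio)  # conversion happens locally
        del audio                          # raw audio is dropped immediately
        return self.local_dialogue(text)   # text is consumed, not transmitted

    def local_dialogue(self, text: str) -> str:
        return f"You asked: '{text}'. Your appointment is at 10:00."

print(PrivateSpeechPipeline().handle(b"\x00\x01"))
```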

Of course, as soon as there is some sort of transmission of information over Wi-Fi (e.g. between the server in the closet and the robot on the couch), the locally stored information is open to interception again. Even worse, one could plug a USB stick into the robot’s back and download personal information manually. Luckily, there are some protections against such special, targeted surveillance. They are legal protections: for the state to do this, it needs a court order; for others to do so, it is a crime. Probably few people will be the target of a search warrant. Most likely no one will burglarize someone’s home and steal personal data, but you never know [26].

The great danger is from social robots and other systems that systematically surveil everyone (Google, Facebook, Skype, Amazon). That is what we have to fight. Social robots could become such systems, or we could design them not to be [26].


9. Moral machines

We could design a robot that refuses to reveal personal information to those who are not authorized by the end user. Police systems have such graded access too: some information is accessible to street cops, other information only to detectives. Whether robots make a positive or negative contribution to society depends on how we humans deal with them. For example, if people teach robots nothing about weaponry, they will not know anything about weapons. But there are always people who want to send a robot to war and teach it how to use weapons (e.g. Russia’s Fedor). How robots act depends purely on what they learn from us or what we allow them to learn by themselves.
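
Graded access of this kind can be sketched as a simple role-to-fields mapping that the robot consults before revealing anything. The roles and fields below are invented for illustration; a real deployment would let the end user set the policy.

```python
# Sketch of graded access: each role may see only the fields the end user
# authorized for it (roles and fields are illustrative).
ACCESS_POLICY = {
    "nurse":       {"medication", "fall_history"},
    "family":      {"mood"},
    "maintenance": set(),  # technicians see no personal data at all
}

RECORD = {"medication": "taken at 9:00", "fall_history": "none this week",
          "mood": "cheerful", "private_notes": "told the robot a family secret"}

def disclose(role: str, record: dict) -> dict:
    allowed = ACCESS_POLICY.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

print(disclose("family", RECORD))  # {'mood': 'cheerful'}
print(disclose("nurse", RECORD))   # medication and fall history only
# 'private_notes' is authorized for no role and is never revealed.
```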

Instead, a robot could also be used as a moral compass for the biased decisions people make. A robot can reason from first principles and is not led by personal emotions. If a medical specialist were forced to choose between saving your child or five others, I know what you would choose. The moral robot would not: 5 is more than 1. Your kid dies. Game over [27].
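
Reduced to its bare form, such first-principles reasoning is a one-line maximization, which is exactly what makes it feel cold. A deliberately blunt sketch:

```python
# Deliberately blunt utilitarian rule: pick the option saving most lives.
def moral_robot_choice(options: dict) -> str:
    """options maps a label to the number of lives saved."""
    return max(options, key=options.get)

print(moral_robot_choice({"your child": 1, "five others": 5}))  # 'five others'
# A human decider would weigh attachment; the first-principles machine does not.
```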


10. No cold motives in a declining industry

Managers will fire personnel; that is, cold-hearted managers will, because greed is their main motive. If life fulfilment were their driver, managers would train their staff to work with robots that support people in their jobs, relieving them of tedious and undesired work and leaving the enjoyable work to human employees. Suddenly, workers work more efficiently and enthusiastically and, as a result, effectively. Productivity per capita increases, and so does the manager’s annual profit share.

Uninformed directors will decide on questionable grounds whether to encourage the use of technology in healthcare and other service professions; and indeed, directors are notoriously uninformed, and once they are informed, they become notoriously hesitant. They tend to follow bad policy if everybody else does rather than be the forerunners of good policy. That is why directors are not leaders.

But after management has sacked the outmoded workforce, they will find out they have to hire a different type of professional. Although the unions go on strike, rallying against robots, they have to learn that if you do not adapt, you die out. Professions vanish primarily because their practitioners fail to adapt to changing circumstances. Clog making and bobbin lace have become folklore. Others will jump at the opportunity and fill up the niches in the professional market. Before the twentieth century, nobody ever thought that airline pilot could be a profession. Who would have thought that ‘ethical hacking’ would be a profession today? Soon we will need plenty of robot programmers, interaction designers, conversation analysts, robot liability lawyers and engineers in the medical domain, education, hospitality, etc., and before long, HRM will be short for human-robot relationship manager.

Care is costly, financially and emotionally, and we will soon have emptied both resources. Robots may relieve both workforce and clients. There is one warning, however: if robots actually deliver what they promise and indeed reduce care pressure, this should by no means lead to dim-witted managerial cutbacks and layoffs. Certainly, cost reduction is much wanted, but only if the quality of care is guaranteed if not enhanced. Robots have their unique selling points, but that inevitably means that humans have other qualities, which together bring care to the next level, such that happier people are healthier people and thus cost less. If robots are capable of keeping elderly people at home for 6 months longer, we have a business case in place.

11. What makes a robot special (and so, what a human)?

If programmed well, a social robot invites self-disclosure (cf. [25]). People confess to strangers things they would never confess to their family or caretakers. Such confessions may disclose important medical information (e.g. not taking medication); moreover, expressing oneself has therapeutic value of its own.

With sufficient security measures installed, social robots guarantee the privacy of information. They do not gossip or go behind your back. There are no social repercussions to what you say or do. Social robots will not laugh at you if your mouth is sore and you do not wear your dentures. They have patience: patience if you are slow in responding, patience in reminding you of an appointment or to do exercises. They have excellent memories. They can play memory games with you.

If designed well, their appearance and behaviours are nonthreatening and child-like. That also gives room for asking silly questions (fun), asking things it already knows (training your memory) and asking impolite questions (evoking self-disclosure), like a grandchild.

Social robots may invite social and physical activation, not for your sake but to humour them. Going out to see the neighbour is then not because you are lonely but because the robot wants to have a chat. And finally, social robots do not claim any social space. You can complain to them for hours on end, and they do not feel that attention should be equally distributed among the conversation partners or that you bore them. On the contrary, if they bore you, you simply turn them off.

A smartphone or tablet will not do the trick. Devices for the elderly have to have a physical presence; they should have eyes and a voice, and then they are ready to deliver whatever functionality in a highly appreciated manner (cf. [28], p. 96).

While robots take care of the mundane, repetitive tasks (e.g. chit-chat, calendar keeping, exercise coaching), humans can do the sophisticated work: when someone has a stroke or needs a true shoulder to cry on. And because the human caretakers did not burn their energy on the tiresome day-to-day chores, they are happy to provide full attention and devotion in cases of emergency or psychological despair.

12. Research programme

The documentary Alice Cares [22] recorded the pinnacle of a research programme that investigated the direct effects of robot companionship on the loneliness of the elderly. However, we also conducted more indirect studies, such as the study I did with Eva N. Wijker, looking into how the elderly of the future (N = 128, between 50 and 65 years old, M age = 56.6, SD = 4.5, 73% female) would feel about social robots at the age of 85, both in a hedonistic sense (i.e. companionship) and a utilitarian sense (i.e. as exercise coach). Half of the respondents received a depiction of a senior woman conversing with a Nao/Zora robot; the other half saw the same woman physically exercising with Zora. An online structured questionnaire with Likert-type items and rating scales (1 = totally disagree, 6 = totally agree) then inquired about the robot’s good intent (ethics scale, M = 5.00, Cronbach’s α = 0.77), its action possibilities (affordances scale, M = 4.12, α = 0.92), how important the robot was to the respondents’ goals and concerns (relevance, M = 3.75, α = 0.91), their intentions to use the robot (M = 3.39, Spearman ρ = 0.83), how emotionally involved they were with the robot (M = 1.95, α = 0.93) and how far they felt at an emotional distance (M = 4.08, α = 0.90). Divergent validity of the measurement scales was confirmed by principal component analysis (oblique oblimin rotation).

We ran a one-way MANOVA with function (companion vs. exercise coach) as between-subjects factor and ethics and affordances as dependent variables. Effects of function on ethics were not significant (F < 1), but on affordances they were (F(1,117) = 5.95, p = 0.016), indicating that the functionality of the robot as exercise coach was rated higher than that of the robot as companion. A second one-way MANOVA with function (companion vs. exercise coach) as between-subjects factor had use intentions, involvement and distance as dependent variables. Pillai’s Trace indicated significant effects of function on the dependents (V = 0.13, F(3, 124) = 6.10, p = 0.001), and contrast analyses showed that exercising evoked significantly more involvement (M = 2.23, SD = 1.62) and less distance (M = 3.74, SD = 1.28) than companionship (involvement: M = 1.62, SD = 0.68, p = 0.000; distance: M = 4.49, SD = 1.14, p = 0.001). There were no significant differences in use intentions (p = 0.065). Covariate analysis showed an effect of gender on distance (V = 0.03, F(3,118) = 1.19, p = 0.014): women felt more emotional distance towards the robot than men did (F(1,120) = 4.34, p = 0.039). Thus, robot Zora as an exercise coach elicited warmer feelings (involvement) and less detached feelings (distance) than as a companion robot, whereas intentions to use the machine did not differ beyond coincidence.
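
For readers who want to rerun an analysis of this shape, a one-way MANOVA can be specified in a few lines with, for instance, statsmodels. The data below are synthetic stand-ins and the column names are assumptions about how such a survey would be coded, not the study’s actual file.

```python
# Sketch: one-way MANOVA with function (companion vs. exercise coach) as
# between-subjects factor; data and column names are synthetic stand-ins.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(7)
n = 128
df = pd.DataFrame({
    "function": rng.choice(["companion", "exercise"], size=n),
    "use_intentions": rng.normal(3.4, 1.0, n),
    "involvement": rng.normal(1.9, 0.9, n),
    "distance": rng.normal(4.1, 1.2, n),
})

m = MANOVA.from_formula(
    "use_intentions + involvement + distance ~ C(function)", data=df
)
print(m.mv_test())  # reports Pillai's trace, Wilks' lambda, etc. per effect
```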

Next, we performed regression analyses, particularly Hayes’ mediation-moderation analyses, to find out, across both functions, how relevant ethics and affordances would be for use intentions and for feeling involved with the robot or at a distance. Across functions, ethics (b = 0.16, p = 0.541) did not influence relevance, but affordances did (b = 0.53, p = 0.000). With relevance as a mediator, affordances were indirectly effective for use intentions (b = 0.68, p = 0.000), involvement (b = 0.43, p = 0.000) and distance (b = −0.60, p = 0.000). Additionally, affordances had a direct effect on involvement (b = 0.25, p < 0.01) and distance (b = −0.48, p = 0.000), even without touching upon the goals and concerns of the respondents. Scrutiny of the individual effects confirmed that affordances were directly influential for involvement (bootstrap b = 0.25, p = 0.001, CI [0.142, 0.354]), distance (linear b = −0.48, p = 0.000) and use intentions (linear b = 0.54, p = 0.000). Thus, the more action possibilities were perceived in the Zora robot (whether exercising or conversing), the higher the use intentions, the higher the emotional involvement with the robot and the less distance was felt.
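
The mediation logic (affordances → relevance → use intentions) can likewise be approximated without the SPSS PROCESS macro by bootstrapping the product of two regression coefficients. The sketch below again uses synthetic data with assumed column names; it is an illustration of the technique, not the study’s code.

```python
# Sketch of a simple mediation bootstrap (affordances -> relevance -> use
# intentions), approximating Hayes' indirect-effect test on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 128
affordances = rng.normal(4.1, 1.0, n)
relevance = 0.5 * affordances + rng.normal(0, 1, n)     # a-path built in
use_intentions = 0.6 * relevance + rng.normal(0, 1, n)  # b-path built in
df = pd.DataFrame({"affordances": affordances, "relevance": relevance,
                   "use_intentions": use_intentions})

def indirect_effect(d):
    # a-path: affordances -> relevance
    a = smf.ols("relevance ~ affordances", data=d).fit().params["affordances"]
    # b-path: relevance -> use intentions, controlling for affordances
    b = smf.ols("use_intentions ~ relevance + affordances",
                data=d).fit().params["relevance"]
    return a * b

boot = [indirect_effect(df.sample(len(df), replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b, 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
# A CI excluding zero supports mediation of affordances via relevance.
```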

According to the respondents, the function of companionship offered fewer action possibilities (affordances) than exercising. Therefore, we explored to what extent affordances would explain relevance, use intentions, involvement and distance even for the function of companionship. Regression analysis according to Hayes showed a significant effect of affordances on relevance (b = 0.44, p = 0.000) as well as direct effects on involvement (b = 0.22, p = 0.001) and distance (b = −0.40, p = 0.000). Furthermore, affordances directly influenced use intentions (b = 0.22, p = 0.001). Affordances explained 15% (R2 = 0.15) of the variance in use intentions, 19% of involvement (R2 = 0.19) and 22% of distance (R2 = 0.22). Repeating these analyses for the function of exercising, affordances surprisingly explained less variance in use intentions (11%) and distance (16%) than for the function of companionship. For exercising, affordances had no direct effect on involvement with the robot at all.

We concluded that the elderly of the future, at face value, see more possibilities in exercising with a robot than in companionship. Ethical concerns were not an issue for either function, so we need not worry about that in future implementations. These elderly of the future thought overall that Zora meant very well (M ethics = 5.00). They had neutral to moderate intentions to use the machine (M use intentions = 3.39), which did not differ between exercising and company, and deemed the robot somewhat relevant (M relevance = 3.75).

However, they had friendlier feelings (involvement) and felt less aloof (distance) when the robot was exercising than when it was deliberately used as an antidote to loneliness. In other words, telling the elderly of the future that they may become lonely and will receive a robot for it may not be a pretty outlook to them. But the detour through an exercise coach that, by the way, also counters loneliness by building up a friendship might just do the trick.

In general, if action possibilities (affordances) touch upon important goals and concerns (relevance), friendly feelings (involvement) and wanting to use a robot (use intentions) increase, whereas feelings of distance drop. This we also found for the few action possibilities respondents saw in the companion robot. Remarkably, however, even when companionship affordances were not relevant to respondents personally, those affordances still had direct effects on feeling involved with the machine and feeling less at a distance, and, most strikingly, respondents were still more willing to use the machine: seeing is believing. By keeping the robot out of the sphere of personal relevance, people already get used to the idea of having one for companionship later.

Of course, the respondents in this study were still socially active; many of them had (part-time) jobs. In due time, however, when family and friends become fewer, loneliness may strike. Therefore, to learn about the mitigating effects of social robots on loneliness in the long run, we plan a new line of research. Yet a major hurdle is conducting laboratory experiments with seniors: the experiments are usually tiresome, the elders often too frail. Much of what we know, therefore, does not come from systematic research on truly old people. To tackle this problem, a research programme is wanted that rests on a firm methodology, one that can be used to answer a range of questions and variables with a common line of reasoning. The backbone of that programme should be twofold: (1) the use of indirect but nevertheless structured observations (2) in a switching-replications design in which control and robot-treatment groups swap. This approach goes beyond observations and case studies, exploiting a hybrid methodology in which observation is combined with quasi-experimental field studies.

We need two groups of seniors, one of which is split in two, so there are three groups in total. Group 1 claims not to have any problems: suppose our dependent variable is ‘loneliness’, then these people are diagnosed as not feeling lonely. Groups 2a and 2b do feel lonely, but the moment in time at which they receive a robot companion differs.

Measurement happens at three points in time (so-called waves) to see how long-lasting the effect of the robot intervention is. Participants are observed in their room through video surveillance for 1 h. This material is cut back to about 15 min. Then a trained interviewer poses a number of open questions like ‘So, how is life; how are you doing?’ (about 10 min). The interview is also videotaped, but the interviewer is not visible on camera.

For the seniors in Group 1, the 1-hour pre-interview period is spent without robot support; for Group 2, the pre-period is spent with robot support. Participants in a session do their robot-guided treatment as they please. The advantage is that seniors can stay in their room; they do not come to the laboratory because the robot is brought to their homes. Each person is videotaped during robot interaction (the pre-interview period) and during the interview.

The cut-back video footage (about 25 min) of the period preceding the interview (the ‘pre-period’) and the interview itself are the stimulus materials that are viewed by a large number of observers to assess the state of mind of the senior participant, using a structured questionnaire (e.g. about loneliness, well-being, physical condition, etc.).

For senior Group 1, little has to be arranged compared to Group 2, the treatment group, which interacts with and via a robot. As said, Group 2 is split into two and enters a switching-replications design: a before-after repeated-measures design with and without, in this case, robot help. The dependent measure(s) could be anything (e.g. loneliness, quality of life), but what we want to know is whether the robot intervention works and how durable the effects are. Note that all kinds of variables of interest can be measured and analysed this way. Here is an example with well-being as the dependent measure:

Group 1:  Well-being before (t1) | No robot | Well-being after (t2) | No robot | Sustained well-being? (t3)
Group 2a: Well-being before (t1) | Robot    | Well-being after (t2) | No robot | Sustained well-being? (t3)
Group 2b: Well-being before (t1) | No robot | Well-being after (t2) | Robot    | Sustained well-being? (t3)

As Group 1 apparently has no problems, we have no predictions for the outcome variables, except that the level of well-being should remain about constant at all time points (t1 ≈ t2 ≈ t3). What the difference with Groups 2a and 2b will be is more of a research question (RQ) than a hypothesis.

More specifically, Group 2a tests whether the robot intervention (the treatment) has a durable effect. Group 2b tests whether having no robot works just as well, or whether the robot still has an effect even after someone has been in crisis for a longer period of time.

It is inessential whether the main effect of group (2a vs. 2b) at t3 is significant; after all, Groups 2a and 2b could end up at the same level, showing the same net results. What is crucial is the interaction between group (2a vs. 2b) and time of measurement (t1 vs. t2 vs. t3), which should show the following pattern. In Group 2a, two outcomes may underscore the robot-inspired-well-being hypothesis. The first is the lift-off-level-out effect, where well-being at t1 < t2 ≈ t3. The second is the lift-off-boost-up effect, where well-being at t1 < t2 < t3; this second effect indicates that people in Group 2a were inspired and learned the skills to fire up their own well-being without extra help from the robot. In Group 2b, the robot-inspired-well-being hypothesis is sustained when we find t1 ≈ t2 < t3. All other outcomes count as refutations of long-term beneficial effects of robots.
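
These hypothesized patterns can be written out as explicit comparisons, which shows how falsifiable the design is. In the Python sketch below, the tolerance for ‘approximately equal’ is an arbitrary assumption for illustration:

```python
# Sketch: classify a group's well-being means at t1, t2, t3 against the
# hypothesized patterns (tolerance for "approximately equal" is arbitrary).
TOL = 0.2

def approx(a, b):
    return abs(a - b) <= TOL

def classify(t1, t2, t3):
    if t1 + TOL < t2 and approx(t2, t3):
        return "lift-off-level-out (t1 < t2 ~= t3): durable robot effect"
    if t1 + TOL < t2 and t2 + TOL < t3:
        return "lift-off-boost-up (t1 < t2 < t3): self-sustained improvement"
    if approx(t1, t2) and t2 + TOL < t3:
        return "late lift-off (t1 ~= t2 < t3): effect of delayed robot (Group 2b)"
    return "no support for long-term robot benefit"

print(classify(3.0, 4.1, 4.2))  # Group 2a, level-out pattern
print(classify(3.0, 3.1, 4.2))  # Group 2b pattern
```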

13. Finance

It takes parents about 20 years to programme a child. We call this ‘upbringing’ or ‘education’; and after that, people are still telling each other how to behave, what to do and what not (cf. culture, legislation, ethics). Moreover, it is about 3.2 million years ago that the first prototype humanoid, Lucy, was ‘developed’ (found in Ethiopia), and it took the last 125,000 years to come up with the model that is now the benchmark against which we compare our social robots: today’s modern human beings.

By contrast, and taking it broadly, social robotics has been developing for about 20 years, and programming our humanoid Alice took us about 4 years, which is one fifth of the time needed to raise a modern human child.

Modern humans, then, as the pinnacle of their superior intelligence, are prepared to put up 1.9 billion Euros to make clearer pictures of the stars beyond our galaxy. Wonderful pictures indeed; they look like abstract modern art. But the running costs of the space telescope Hubble are yet another 5.6 billion Euros, plus five space shuttle missions for maintenance and overhaul. That is a sum total of about 10 billion Euros for just one machine.

The investment plan that we submitted to the national science council to create a better version of the Alice machine amounts to 3.8 million Euros (a tiny fraction of 1 billion), including VAT and including a business plan to produce 200 machines in the post-grant period that can readily be deployed in hospitals, care centres and elder homes. Moreover, everything will be open source, so that a complete new industry can arise from this one investment, impacting education, coaching, hospitality, entertainment, etc.

In other words, modern man is prepared to invest maximally to solve a minimal problem (i.e. clearer pictures) and minimally to solve a maximal problem (i.e. global ageing).

In 25 years’ time, there will be two elderly people in need of care for every younger caretaker. The WHO and UN foresee a worldwide trend. So are we going to educate more care professionals? Yes. Or bring in more foreigners to do it for us? Yes. But neither measure solves the lack of sufficient hands globally; they merely relocate the problem. Moreover, we need firefighters, police officers and school teachers as well. To counter the hardship of an ageing world population and install a completely new industrial sector, the 3.8 million Euros to refurbish an old device will not do.

Although there is a lot of positive feedback on what social robots can achieve, and we are told to ‘keep up the good work’, nobody puts their money where their mouth is. Society lacks a sense of urgency about facing a mass of old people taken care of by a handful of young. It is not just eldercare: the economy will collapse under an unproductive and overstrained population in which those who can work have to take care of their parents and grandparents. Robots fulfil tasks not only in eldercare but in all other walks of life, freeing up the time of those who are the motor of our financial system, society and welfare.

As a result, relatively small research groups and development teams work in isolation on specialist robot functionality. Because those machines can do one thing very well but nothing else, people are not impressed. Since they are not impressed, there is no financial support to develop one robot that can do many things very well. Without integration of the many machines that each do one thing well, we are not talking about a robust solution to serious societal issues. Without empowerment, we leave potential world saviours working at the level of ardent hobbyists. As a human race, we thought it necessary to develop a nuclear bomb. Let us turn that zeal into something more positive: organize a Manhattan Project for social robotics and develop the Ultimate Android [29].

Social robots like Alice can be used to help with mild healthcare cases, so that the heavier cases can be catered for by people. One person in a home with light care (sheltered housing, little help) costs around €2000 a month. If we keep 200 people out of the elder home for 6 months (or 400 people for 3 months), we save €2.4 million. For €2.4 million, we can design the blueprint of a robot that keeps 200 people at their homes for 6 months longer. That €2.4 million is a cost-effective investment, the only way to cut costs responsibly. With all the robots that industry produces based on that blueprint, society makes nothing but profit, which can be invested in long-term care performed by human hands. It is quite a detour to say that in about 40-50 years we simply will not have enough hands to take care of everyone, but if politics and business invest now, we make that care cheaper and better than today. People and robots work together, and each specializes in what they are good at. The main barrier is those first few millions.
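
A back-of-the-envelope check of that arithmetic, in a few lines of Python:

```python
# Back-of-the-envelope check of the savings claim in the text.
monthly_cost = 2_000       # EUR per person per month, light care
people, months = 200, 6    # kept out of the elder home
savings = monthly_cost * people * months
print(f"EUR {savings:,}")  # EUR 2,400,000 -- the cited 2.4 million
```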

14. Conclusions

The global population is ageing, and the number of people who can provide care is diminishing. This brings moral disgruntlement and shame, which are unwarranted, because nobody can take care of a dozen people and at the same time have a job, run a business, run a family and have friends. Without drastic measures, today’s shaky economic recovery will be undone. New social participation programmes in which people work part-time may be one solution. However, that will lower gross national income and put many into a position of fulfilling an obligatory job instead of being fulfilled. Not everybody is fit for or finds satisfaction in helping others all of the time. That will not enhance but reduce the quality of care.

In 2025, 8,141,661,000 people will inhabit the earth, of which 32% will be under 20, a drop of 8 percentage points compared with today. In less than 10 years’ time, a relatively small younger generation has to take care of 800 million people over 65, a growth of 410 million elderly worldwide [6]. This increase is not evenly spread across the globe: in Latin America and Asia, the increase in elderly people will hit 300% (ibid.). This actually is happy news, because it means that globally people live healthier and, therefore, longer. Perhaps we just need to reproduce at a higher rate.

The planet’s population will exceed 8 billion in 2025, and if we stop reproducing, as demographics show we are doing, there will be insufficient people to take care of the older generations. But if we increase our ‘replacement fertility’, we empty the planet’s resources. And minimizing our medical and health programmes so that fewer survive beyond childhood is no option either. Instead, the trend is the provision of low-cost, 24-hour, high-quality healthcare.

The alternative is a technological solution that fills in the gaps when needed and can be disposed of once redundant: a workforce you can increase on demand and lay off just as easily, because it does not care. Technological innovations are progressing, and the basic requirements that the elderly place on such systems can be provided for. The only hurdle is that technology such as social robotics is not yet robust and fine-tuned enough to be released to the market. All the stakeholders in the care supply chain underwrite the potential benefits of robot care, if done sensibly, except for the CEOs, the ministers and the chairs of the boards of directors. They wait and hold their cards close to their chests, where a warm heart should be pounding.

If employed properly, care robots bring cost savings. They also initiate a new industry, which will not only be care related; innovations will radiate to other areas such as hospitality, education, sports, coaching and therapy. The only things needed are easy-access money (fuel) and regulatory respite (grease), typically things policymakers deal with for a living. Policymakers, it is your turn. Fuel and grease that robot engine, because we are ready to fire up!

15. Suggestions

Seven suggestions to society (one a day) that organizations and national governments can encourage by setting an example:

  • Leave moral outrage and fear of job loss behind.

  • Focus on the opportunities (both in health and economy).

  • Seek cost savings through health improvement, not through massive layoffs.

  • Provide just enough easy-access money for R&D.

  • Reduce the regulatory burden.

  • Start designing new business models.

  • Be brave.

References

  1. United Nations, Department of Economic and Social Affairs, Population Division, New York. World Population Prospects: The 2012 Revision, Key Findings and Advance Tables [Working Paper No. ESA/P/WP.227] [Internet]. 2013. Available from: http://esa.un.org/wpp/Documentation/pdf/WPP2012_%20KEY%20FINDINGS.pdf [Accessed: 13-02-2017]
  2. Haub C. Population Reference Bureau, Washington, DC. World Population Aging: Clocks Illustrate Growth in Population Under Age 5 and Over Age 65 [Internet]. 2011. Available from: http://www.prb.org/Publications/Articles/2011/agingpopulationclocks.aspx [Accessed: 12-02-2017]
  3. Sparrow R, Sparrow L. In the hands of machines? The future of aged care. Minds and Machines. 2006;16(2):141-161. DOI: 10.1007/s11023-006-9030-6
  4. Killeen C. Loneliness: An epidemic in modern society. Journal of Advanced Nursing. 1998;28(4):762-770
  5. Suzman R, Beard J. National Institute on Aging, National Institutes of Health, World Health Organization. Global Health and Ageing [NIH Publication 11-7737] [Internet]. October 2011. Available from: https://www.nia.nih.gov/sites/default/files/global_health_and_aging.pdf [Accessed: 12-02-2017]
  6. Beard J, Officer A, Cassels A, et al. World Health Organization, Geneva, CH. World Report on Ageing and Health [Internet]. 2015. Available from: http://apps.who.int/iris/bitstream/10665/186463/1/9789240694811_eng.pdf [Accessed: 13-02-2017]
  7. Banks MR, Willoughby LM, Banks WA. Animal-assisted therapy and loneliness in nursing homes: Use of robotic versus living dogs. Journal of the American Medical Directors Association. 2008;9(3):173-177. DOI: 10.1016/j.jamda.2007.11.007
  8. Stafford RQ, MacDonald BA, Li X, Broadbent E. Older people’s prior robot attitudes influence evaluations of a conversational robot. International Journal of Social Robotics. 2014;6(2):281-291. DOI: 10.1007/s12369-013-0224-9
  9. Sharkey A. Robots and human dignity: A consideration of the effects of robot care on the dignity of older people. Ethics and Information Technology. 2014;16(1):63-75. DOI: 10.1007/s10676-014-9338-5
  10. Broadbent E, Tamagawa R, Patience A, Knock B, Kerse N, MacDonald BA. Attitudes towards health-care robots in a retirement village. Australasian Journal on Ageing. 2012;31(2):115-120. DOI: 10.1111/j.1741-6612.2011.00551.x
  11. Zsiga K, Edelmayer G, Rumeau P, Péter O, Toth A, Fazekas G. Home care robot for socially supporting the elderly: Focus group studies in three European countries to screen user attitudes and requirements. International Journal of Rehabilitation Research. 2013;36(4):375-378. DOI: 10.1097/MRR.0b013e3283643d26
  12. The Tokyo Times. Japan Develops Robot Industry for the Elderly Care [Internet]. 2013. Available from: www.tokyotimes.com/japan-develops-robot-industry-for-the-elderly-care/ [Accessed: 13-02-2017]
  13. The Economist. Robot Nurses Are Big Business. From Surgery to Companionship, Japan Looks to Robots for Healthcare [Internet]. August 1, 2014. Available from: http://gelookahead.economist.com/robot-nurses-big-business/ [Accessed: 13-02-2017]
  14. Broese van Groenou MI, Jacobs MT, Zwart-Olde NE, Deeg DJH. Mixed care networks of community-dwelling older adults with physical health impairments in the Netherlands. Health and Social Care in the Community. 2016;24(1):95-104. DOI: 10.1111/hsc.12199
  15. Plaisier I, Broese van Groenou MI, Keuzenkamp S. Combining work and informal care: The importance of caring organisations. Human Resource Management Journal. 2015;25(2):267-280. DOI: 10.1111/1748-8583.12048
  16. Parast SM, Allaii B. The role of social work in healthcare system. Journal of Social Science for Policy Implications. 2014;2(2):59-68
  17. Jacobs MT, Broese van Groenou MI, De Boer AH, Deeg DJH. Individual determinants of task division in older adults’ mixed care networks. Health and Social Care in the Community. 2014;22(1):57-66. DOI: 10.1111/hsc.12061
  18. Hoffman G, Birnbaum GE, Vanunu K, Sass O, Reis HT. Robot responsiveness to human disclosure affects social impression and appeal. In: Sagerer G, Imai M, Belpaeme T, Thomaz A, editors. Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’14); 2014. New York, NY: ACM; 2014. pp. 1-8. DOI: 10.1145/2559636.2559660
  19. En LQ, Lan SS. Applying politeness maxims in social robotics polite dialogue. In: Yanco H, Steinfeld A, editors. Proceedings of the 2012 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’12); 2012. New York, NY: ACM; 2012. pp. 189-190
  20. Moyle W, Cooke M, Beattie E, Jones C, Klein B, et al. Exploring the effect of companion robots on emotional expression in older adults with dementia: A pilot randomized controlled trial. Journal of Gerontological Nursing. 2013;39(5):46-53
  21. Hoorn JF, Konijn EA, Germans DM, Burger S, Munneke A. The in-between machine: The unique value proposition of a robot or why we are modelling the wrong things. In: Loiseau S, Filipe J, Duval B, Van den Herik J, editors. Proceedings of the 7th International Conference on Agents and Artificial Intelligence (ICAART); January 10-12, 2015; Lisbon, Portugal. Lisbon, PT: SciTePress; 2015. pp. 464-469
  22. Doolaard J, Burger S. Alice Cares [Documentary]. The Netherlands: KeyDocs; 2014. Available from: https://vimeo.com/116760085 [Accessed: 12-02-2017]
  23. Konijn EA, Hoorn JF. Empathy with and projecting feelings onto robots from schemas about humans. International Journal of Psychology. 2016;51(S1):837
  24. Hoorn JF, Pontier MA, Siddiqui GF. Coppélius’ concoction: Similarity and complementarity among three affect-related agent models. Cognitive Systems Research. 2012;15-16:33-49. DOI: 10.1016/j.cogsys.2011.04.001
  25. Bhakta R, Savin-Baden M, Tombs G. Sharing secrets with robots? In: Viteli J, Leikomaa M, editors. Proceedings of EdMedia: World Conference on Educational Media and Technology; 2014. Waynesville, NC: Association for the Advancement of Computing in Education (AACE); 2014. pp. 2295-2301
  26. Stallman R. Personal communication. January 2-25, 2016
  27. Pontier MA, Widdershoven G, Hoorn JF. Moral Coppélia: Combining ratio with affect in ethical reasoning. In: Pavón J, et al., editors. Lecture Notes in Artificial Intelligence. Vol. 7637. Berlin-Heidelberg: Springer; 2012. pp. 442-451
  28. Tanaka K, Nakanishi H, Ishiguro H. Comparing video, avatar, and robot mediated communication: Pros and cons of embodiment. In: Yuizono T, Zurita G, Baloian N, Inoue T, Ogata H, editors. Collaboration Technologies and Social Computing. Communications in Computer and Information Science. Vol. 460. Berlin, Heidelberg: Springer; 2014. pp. 96-110
  29. Hoorn JF. The Ultimate Android [Internet]. February 11, 2016. Available from: http://www.robopop.nl/wp-content/uploads/2016/09/2016-02-11-JFHoorn-The-Ultimate-Android.pdf [Accessed: 15-02-2017]

Notes

  1. The MANSA protocol for care evaluation: http://isp.sagepub.com/content/45/1/7.short
