There is no difference between a deep learning humanoid robot and the finest of humans. They are simply our reflection.
Would you have boarded the Boeing 737 that has just landed? The robotic co-pilot Aircrew Labor In-Cockpit Automation System (ALIAS), shown in Figure 1, has been used in simulation mode to fully fly and land a Boeing 737. The project was developed by the Defense Advanced Research Projects Agency (DARPA).
All of us have boarded an airplane before. An airplane is a robot mimicking a bird. In the same way, a humanoid robot mimics a human.
As the available processing power constantly increases, we will increasingly be learning about and using advanced products with large memory modules and artificial intelligence (AI) capabilities. It is therefore highly likely that, in the future, we will also board a semi-autonomous (as in Figure 1) or even a fully autonomous airplane.
Highly intelligent products that continue learning while we use them exhibit both positive and negative features. Every manufacturer considers its products entirely positive; it is usually the users who, through use, discover the drawbacks of an AI product. This is the social impact aspect of using AI robotic products. Hence the need for a new, higher-level field of science that would evaluate the virtues of AI robotic products as early as possible in the design process. This recently developed field is called Robot Ethics, or Roboethics for short. Currently, the field of AI robotics is broadly considered unlegislated. However, regulatory bodies such as the EU are developing legal directives in order to proceed gradually towards the full legislation of AI robotic products.
The first step in solving a problem is to understand it. To appreciate the magnitude and significance of this problem, bear in mind that it is currently impossible, even for the manufacturers, to estimate the state of consciousness and knowledge of a deep learning AI robot that a customer operates.
2. Aim of the book and organisation
This book addresses the legal, ethical and social aspects of using AI robots. The collective effort of distinguished international researchers has been incorporated into one textbook suitable for the broader audience interested in the scientific field of Roboethics.
Chapters 1 and 2 represent the results of recent scientific work on Robotics.
Chapter 1 presents the emerging topic of automated risk assessment in a domestic scene. It focuses on safer interaction between humans and robots within a given environment: hazards are identified and the risk factor is quantified.
Chapter 2 presents a solution to the problem of reducing the trajectory tracking error using a fractional-order PID control law for a two-link robot manipulator. The results exhibit satisfactory performance of the fractional-order dynamical neural network with online learning.
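For readers unfamiliar with the terminology, a fractional-order PID controller generalises the classical PID law by allowing non-integer orders of integration and differentiation. A common form (the specific gains and orders used in the chapter are not reproduced here) is:

\[
u(t) = K_p\, e(t) + K_i\, D_t^{-\lambda} e(t) + K_d\, D_t^{\mu} e(t), \qquad \lambda, \mu > 0,
\]

where \(e(t)\) is the trajectory tracking error, \(K_p\), \(K_i\) and \(K_d\) are the controller gains, and \(D_t^{\alpha}\) denotes the fractional differintegral operator. Setting \(\lambda = \mu = 1\) recovers the classical PID controller.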
Chapters 3 and 4 report the significant progress achieved in the field of legislating the operation of AI Robotic systems.
Chapter 3 presents a system for distributing responsibility for damages caused by robots. Legal and ethical aspects of robotics are presented. The European Parliament adopted the Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) in February 2017. The liability level varies according to the robot's learning capability and the knowledge learned from its owner. A responsibility-setting matrix is proposed by the authors.
Chapter 4 discusses the emerging topic of cyber-security of robotics and autonomous systems in terms of privacy and safety. A survey of cyber-security attacks associated with service robots is presented, together with a taxonomy that classifies the risks users face when using robots. Finally, a robot software development phase for preventing unauthorised access to service robots is presented.
Chapter 5 discusses some of the ethical impacts related to the usage of service robots. A case study on cultural heritage is presented.
Chapters 6 and 7 analyse major socioeconomic impacts that arise from the usage of AI robotic systems.
Chapter 6 analyses the social impacts that arise through the usage of healthcare robots. Emphasis is given to the implementation of care robots as a direct solution to the global ageing problem. Immediate governmental support is required, both in allocating funds for R&D and in regulating the legal framework governing care robots.
Chapter 7 analyses the positive impacts on the healthcare system that can be achieved if reliable electronic prescribing and robotic dispensing techniques are applied in hospital pharmacies. The efficiency, safety, professionalism and reliability of hospital pharmacies would be greatly increased, and more patient-focused activities could therefore be developed.
Chapter 8 is the Epilogue of this book: looking into the mirror.
Chapter 8 presents an interesting, philosophical and controversial discussion of the topics covered so far, viewing them through the mirror. We have put so much effort into developing humanoid robots, i.e. AI robots that resemble and act like humans. Why not simulate and replicate human behaviour through computer simulations? If the results match human behaviour, we would better understand ourselves and the construction of human communities. Such simulated results of human robots and societies are presented, and some of the problems that human robots will pose to human beings are discussed.
We hope this book will increase the sensitivity of all community members involved with Roboethics. The significance of incorporating all aspects of Roboethics from the very beginning of the creation of a new deep learning AI robot is emphasised and analysed throughout the book. AI robotic systems offer an unprecedented set of virtues to society. However, the principles of Roboethical design and operation of deep learning AI robots must be strictly legislated, manufacturers should apply the laws, and the knowledge development of AI robots should be closely monitored after sale. This will minimise the drawbacks of implementing such intelligent technological solutions. These devices are a representation of ourselves and form communities like ours. Learning from them is a way to improve ourselves.