Legal Issues, Cybersecurity and Policy in AI Robotics

Interview with Prof. George Dekoulis

September 13, 2018

Ahead of the IROS Conference, which takes place on 1-5 October in Madrid, we sat down with Prof. George Dekoulis, who is hosting a workshop there on the legal, cybersecurity and policy implications of AI robotics. These topics are covered in a recently published open access book that he edited, and Prof. Dekoulis outlines the issues in the interview below.

Six IntechOpen authors will be participating in this workshop, and some of our team will be there. Contact us if you will be at the conference and want to meet up or attend the workshop.

How are robot designers and researchers taking societal implications into account when doing their work?

The first and largest group of robot designers are amateur hobbyists who design robots for fun or for a specific purpose that intrigues them. Some hold university qualifications in a relevant discipline, though not all do. They are aware of the societal implications of their designs, and they raise and publicise their concerns regarding the operation of advanced deep-learning AI robotic products. The way these people have been protesting against several robotic products has, I believe, been heard by both manufacturers and policymaking authorities. This is an indirect way of guiding legislative authorities towards human-oriented and ethical laws. It is the duty of every citizen in a democratic country, and it is fully acceptable.

The second category involves researchers and academics. These are skilled individuals who envision advanced research products for the public benefit. In my opinion, these researchers do not deviate from sound ethical standards, despite being focused more on the actual research and development of state-of-the-art robotic products.

Lastly, there is industry. The public sector, on a global scale, is constantly shrinking, so our discussion mainly concerns the private sector. To sustain a successful company you need unique products, ideally ones not available from competitors. These companies therefore rely on materialising brilliant ideas into unique, innovative products. And this is the point where the thin ethical line may be crossed. We have seen various robotic products that raised major societal concerns, and we will see more products in the future that directly affect the behaviour and everyday life of eager buyers.

It is the buyers who determine whether a robotic product is successful or not. As adults, we are constantly developing ethical standards through education and personal experience. I may personally consider a robotic product unethical, yet it may still be a best-seller.

It is the responsibility of the policymaking authorities to legislate strict measures against the development of unethical robotic products with the potential to cause societal harm. Such laws would put extra pressure on manufacturers to reconsider developing prospectively unethical products. This has to be implemented regardless of market demand and the tendencies of eager end-users.

Do you have any research that speaks to the policy or ethical issues in robotics?

I have been working as a professor in space engineering for years, active in the research and development of state-of-the-art reconfigurable space systems. Most of my publications are in the area of designing different types of space instrumentation. I wasn't aware of the significance of the term RoboEthics, because in my specialisation it is crystal clear that autonomous systems are developed to be deployed in places where no humans are supposed to be, i.e. space. In cases where astronauts visit space, all systems are double-checked for flawless operation, and all the systems I'm referring to have previously been space-proven. Safety comes first. Therefore, in my field, robotic equipment assists the workforce when astronauts are involved; otherwise, it is built to perform functions that no human is capable of performing.

In 2017, I was given the unique opportunity by IntechOpen to be the editor of a book called Robotics: Legal, Ethical and Socioeconomic Impacts.

What surprised me when reviewing the various book chapters was the lack of effective legislation at the European level to adequately regulate the development of deep-learning AI robotic products. This gives the impression that legislation follows technological evolution, with additional measures enforced only when necessary. It should be the other way round.

A team of experts at a pan-European level should first set strict standards, and industry should follow the guidelines and regulations. The permissible level of AI learning should be regulated. If a deep-learning AI robot continues learning after sale, it should be closely monitored, preferably by the manufacturer. These products behave like us and we behave like them, so all legal aspects of their operation should be regulated in advance.

I will praise the robotics community, as defined in the first question, for publicly raising the issue of cybersecurity in autonomous vehicles. We have all watched videos of how easy it is to hack and take control of an autonomous vehicle. Need I mention that the common communications interface buses on autonomous vehicles have to be modified in order to be secure? A great deal of human effort is required to guarantee cybersecurity.

All these points, and many more topics, are addressed in our new book on RoboEthics and will be discussed in the workshop in Madrid. It has been an honour to be involved in the process. I have changed my way of thinking about AI robotic products after working on this book project.

Prof. George Dekoulis

Head of Aeronautical and Space Engineering Department

Aerospace Engineering Institute (AEI), Nicosia, Cyprus