Mind the responsibility gap when creating policy for AI
30th October 2018
Addressing AI issues before they become realities is crucial, yet the community fears we risk pursuing an "act now, apologise later" approach.
Artificial intelligence (AI) and robotic systems offer increasing benefits to people and societies. Though progress is predominantly positive, the robotics community has expressed concern about the legal, cybersecurity and policy implications.
At the IROS conference in Madrid in October, a workshop took place in which international experts, Alejandro Zornoza, Eduard Fosch Villaronga, Fabio Bonsignorio, Geert De Cubber, Vicente Matellán Olivera and George Dekoulis addressed the legal frameworks applying to AI robotic products, cybersecurity challenges and how and where policymakers are implicated.
Co-organised by George Dekoulis, editor of the book Robotics-Legal, Ethical and Socioeconomic Impacts, and IntechOpen, the programme was structured around three topics: legal, cybersecurity and policy issues. Certain concerns, however, echoed across all the discussions, in particular wherever there was evidence of responsibility gaps.
Alejandro Zornoza asked who is responsible for the acts of robots, and whether more autonomous AI should bear more responsibility. Manufacturers should be held to account for damages caused by robots, but some robots are more autonomous than others; should more of the responsibility shift to the robot because of that autonomy? Along with colleagues, Zornoza has developed a robot liability matrix that distributes liability and helps determine where responsibilities lie.
Where humans and robots interact, for example in cloud robotics, there are also gaps in responsibility. Eduard Fosch Villaronga questioned whether a human can be guilty when the robot has learned something new and acted upon it. And where there is a risk of physical damage, whether in public spaces, private homes or care-giving assistance, Vicente Matellán Olivera led discussions on how to assess safety and classify risks.
Dekoulis himself took on questions around policymaking for brain-computer interfaces, again challenging notions of where human and robot responsibilities should lie in such a tightly integrated application.
Summing up the afternoon, Dekoulis said: "I would like to thank Alejandro, Eduard, Geert, Fabio and Vicente for their presentations and the discussions we had. It was interesting to combine law, cybersecurity, industry, philosophy and future robotics with current robotic developments."
If the fourth industrial revolution is happening, and robots are integral to it, these gaps must be addressed, and the answers regularly revisited, before chasms open up between legislation and the reality in which AI operates.