Open access peer-reviewed chapter

The RoCKIn Project

Written By

Pedro U. Lima

Submitted: 11 April 2017 Reviewed: 02 June 2017 Published: 09 August 2017

DOI: 10.5772/intechopen.70011

From the Monograph

RoCKIn - Benchmarking Through Robot Competitions

Abstract

The goal of the project “Robot Competitions Kick Innovation in Cognitive Systems and Robotics” (RoCKIn), funded by the European Commission under its 7th Framework Program, has been to speed up progress toward smarter robots through scientific competitions. Two challenges were selected for the competitions due to their high relevance and impact on Europe’s societal and industrial needs: domestic service robots (RoCKIn@Home) and innovative robot applications in industry (RoCKIn@Work). The RoCKIn project boosted scientific robot competitions in Europe by (i) specifying and designing open domain test beds for competitions targeting the two challenges; (ii) developing scoring and benchmarking methods that make it possible to assess both particular subsystems and the integrated system; and (iii) organizing camps to build up a community of new teams interested in participating in robot competitions. A significant number of dissemination activities on the relevance of robot competitions were carried out to promote robotics research and education among researchers and lay citizens alike. The lessons learned during RoCKIn paved the way for a step forward in the organization and research impact of robot competitions, helping Europe to become a world leader in robotics research, education, and technology transfer.

Keywords

  • robotics
  • robot competitions
  • benchmarking
  • domestic robots
  • industrial robots

1. Introduction

Robot competitions have proved to be an effective instrument to foster scientific research and push the state of the art in a given field [1–6]. Teams participating in a competition must identify best‐practice solutions covering a wide range of functionalities and integrate them into practical systems. These systems have to work in realistic settings, outside the usual laboratory conditions. The competition experience helps to transfer the applied methods and tools to successful and high‐impact real‐world applications. By participating in robot competitions, young students are attracted to science and engineering disciplines, and through competition events, the relevance of robotics research is demonstrated to citizens.

However, some limitations have emerged in the past as well‐established robot competitions matured:

  • the effort required to enter the competition grows and may present a barrier for the participation of new teams;

  • a gap between benchmarking complete systems in competitions and benchmarking subsystems in research may develop and limit the usefulness of the competition results to industry.

The goal of “Robot Competitions Kick Innovation in Cognitive Systems and Robotics” (RoCKIn) has been to speed up the progress toward smarter robots through scientific competitions. Two challenges have been selected for the competitions due to their high relevance and impact on Europe’s societal and industrial needs:

  • domestic service robots (RoCKIn@Home) and

  • innovative robot applications in industry (RoCKIn@Work).

Both challenges have been inspired by activities and their corresponding leagues in the RoboCup community [6–8], but RoCKIn extended them by introducing new and prevailing research topics, such as interaction with humans and networking mobile robots with sensors and actuators spread over the environment (home automation devices such as remotely controlled lamps, IP cameras, and motorized blinds in RoCKIn@Home; factory‐mockup devices such as a drilling machine and a conveyor belt in RoCKIn@Work), in addition to specifying scoring and benchmark criteria and methods to assess progress.

The RoCKIn project addressed the competition limitations identified above by (i) specifying and designing open domain test beds for competitions targeting the two challenges and usable by researchers worldwide; (ii) developing methods for scoring and benchmarking through competitions that make it possible to assess both particular subsystems and the integrated system; and (iii) organizing camps whose main objective was to build up a community of new teams interested in participating in robot competitions (Figure 1).

Figure 1.

A view of the RoCKIn 2014 venue, showing the arenas and a press visit.

All these aspects of the project and their main outcomes are summarized in this chapter. We start by describing in Section 2 the two competition events organized during the project lifetime (January 2013–December 2015), listing good practices for the organization of scientific competitions in general. Section 3 presents the three RoCKIn camps and explains how they reached their goal of building a community of teams involved in robot competitions. Dissemination of robotics and its positive societal impacts, from education to technology that helps humans, is a crucial activity often connected to robot competitions. Section 4 explains how RoCKIn handled dissemination. The major outcomes of the project are summarized in Section 5, and they include scoring methods and metrics to evaluate the performance of robot systems, benchmarks, test beds designed as a standard reference, and rulebooks following the best practices in scientific robot competitions. The chapter closes with Section 6 on the project impact and lessons learned, as assessed by the project team and by its external advisors; that section also outlines a work plan for future research triggered by robot competitions, including novel results on benchmarking, formal languages to describe competition rules objectively, dashboards for visualization of the robot state, and robot competitions as a cradle for open innovation.

2. RoCKIn competitions

Competition events were the core of RoCKIn. The test beds were developed to serve as a reference design to be used in all competitions for a given challenge, while the camps were organized to prepare new and existing teams to achieve good performances during competitions.

Within the project lifetime, two competition events took place, each of them based on the two challenges and their respective test beds:

  • RoCKIn Competition 2014, in La Cité de L’Espace, Toulouse, France, November 24–30, 2014: 10 teams (7 @Home, 3 @Work) and 79 participants from 6 countries.

  • RoCKIn Competition 2015, in the Portugal Pavilion, Lisbon, Portugal, November 17–23, 2015: 12 teams (9 @Home, 3 @Work) and 93 participants from 10 countries.

Organizing each of the competition events followed and improved established best practices for the organization of scientific competitions, which are listed here for future reference:

  1. setting up a Technical Committee (TC) per challenge, mostly composed of senior researchers experienced with competitions and the specific challenges, responsible for enforcing the competition rules laid out in the rulebook;

  2. setting up an Organizing Committee (OC) per challenge, composed of researchers familiar with all the technical requirements and implementation details of the specific challenges, responsible for preparing the infrastructure and the whole setup in ways compatible with the rulebook requirements, as well as for reporting on it so as to transfer information to the organizers of upcoming events;

  3. [TC + OC] issuing the call for participation, requiring teams to submit an application consisting of a four‐page paper (the Team Description Paper) describing the team’s research approach to the challenge, the hardware and software architectures of its robot system, and any evidence of performance (e.g., videos);

  4. [TC] selecting the qualified teams, from among the applicants, based on their scientific and technical merits and past competition performance;

  5. [TC] preparing/updating and delivering the final version of the rulebooks, scoring criteria, modules, and metrics for benchmarking about 4–5 months before the actual competition dates, after an open discussion period with past participants and the robotics community in general;

  6. [OC] building and setting up the competition infrastructure at the venue, including a vision‐based motion capture system (MCS) for ground‐truth data collection during benchmarking experiments, listing all data to be logged by the teams during the competitions for later benchmarking processing, and preparing USB pens to store the data of the actual runs of the teams’ robot systems;

  7. [OC] preparing several devices and software modules required by the competition rules (e.g., referee boxes, home automation devices and their network, factory‐mockup devices and their network, objects for perception and manipulation, visitors’ uniforms and mail packages, audio files, and a lexicon), and describing their characteristics and technical specifications on a wiki page where all teams can access the information, including a list of frequently asked questions and their answers, to ensure consistent replies to similar questions;

  8. [OC] establishing a schedule for the competition days and their different components (including team setup days and repeated runs of task benchmarks and functionality benchmarks);

  9. [OC] preparing human referees to follow the teams, handle referee boxes, record scores, and perform all the other required tasks;

  10. [OC] preparing the communication materials (brochure, leaflet, roller banners, banners, t‐shirts, merchandising, and schedule) for the media, general citizens, and stakeholder visitors (from academia and industry), and materials for teams (bags, badges, and schedule);

  11. [TC+OC] establishing the adequate number of teams awarded per competition category and preparing trophies for the competition awards;

  12. [OC] running the event, including the organization of visits from schools and the availability of communicators who explain to the audience what is happening, using a simplified version of technically correct descriptions.

Best practices such as those listed above foster scientific progress (through regular rule revisions that push the challenge forward, based on feedback from participants and end‐users of the challenge scenarios), enforce technically rich approaches by the teams (by selecting them based on a team description paper) and peer monitoring (by setting up a technical committee composed of participants and other experts in the field), and enable transferring information about the competition setup to subsequent events (through reports prepared by the OCs), while making sure that dissemination to the general public is highly valued.

The competition scoring system was deployed so as to favor performance consistency, by taking mean values of scores over several runs rather than picking the best run. This is more adequate for benchmarking purposes, since each run of a team is designed to have the same conditions as the other teams’ runs, and it simultaneously rewards the teams that can consistently achieve good performances over several runs. A possible drawback of this approach is that the ability of a robot system to adapt to unexpected situations may not be fully tested. On the other hand, teams tend to improve their performance over time as they fix problems from previous runs in new runs.
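
As an illustration of this design choice, consider the following minimal sketch (team names and scores are hypothetical, not RoCKIn data), which ranks teams by the mean of their run scores rather than by their single best run:

    # Rank teams by mean score over several runs (consistency) rather than by the best run.
    # Team names and scores are purely illustrative.
    runs = {
        "team_a": [70, 72, 71],   # consistent performance
        "team_b": [95, 40, 45],   # one excellent run, otherwise weak
    }

    by_mean = sorted(runs, key=lambda t: sum(runs[t]) / len(runs[t]), reverse=True)
    by_best = sorted(runs, key=lambda t: max(runs[t]), reverse=True)

    print("ranking by mean:", by_mean)  # ['team_a', 'team_b']
    print("ranking by best:", by_best)  # ['team_b', 'team_a']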

3. RoCKIn camps

Robot competitions need participating teams. Setting up a team to participate in a competition requires technical knowledge about the challenges, but also teamwork skills, experience in working and solving problems under pressure, and the ability to prepare the team’s participation well before the competition dates.

RoCKIn camps were planned to build a community of teams experienced with robot competitions, and in particular with the technical details of the RoCKIn rulebooks and test bed infrastructure (e.g., interfacing the networked devices, handling the referee boxes). Simultaneously, the camps acted as 1‐week schools where European experts trained European students on advanced robotics topics relevant for the RoCKIn challenges, such as object recognition and manipulation, and speech understanding.

Three camps were organized as follows:

  • RoCKIn Kick‐off Camp, in Eindhoven, the Netherlands, June 28 to July 1, 2013, during RoboCup2013: 12 participants. The camp consisted of several lectures by the partners, on RoCKIn challenges and activities, covering subjects such as principles for benchmarking robotics, raising awareness and disseminating robotics research, and discussion on developing robotics through scientific competitions like RoboCup. In addition to the lectures, attendees got first‐hand experience of demo challenges, tests, and hardware and software solutions during the RoboCup@Home and RoboCup@Work practical sessions.

  • RoCKIn Camp 2014, in Rome, Italy, January 26–30, 2014: 19 teams (11 @Home, 8 @Work), corresponding to a total of 63 students and researchers from 13 countries. This camp was designed to support the preparation of (preferably new) teams to participate in RoCKIn@Home and RoCKIn@Work competitions, and featured guest lectures by Michael Zillich, Norman Hendrich, and Matthew Walter on vision‐based pattern recognition, object and people detection, object grasping and manipulation, and Human‐Robot Interaction in natural language.

  • RoCKIn Field Exercise 2015, in Peccioli, Italy, at the ECHORD++ Robotics Innovation Facility, March 18–22, 2015: 42 participants divided into 9 teams (4 @Home, 5 @Work). The Field Exercise was designed as a follow‐up to the previous RoCKIn Camp 2014, where most of the best teams from RoCKIn Competition 2014 displayed their progress and all participants improved their interaction with the RoCKIn scoring and benchmarking infrastructure.

The selection of camp participants allowed two kinds of applications: team and individual. Teams had to submit a technical report on their existing or proposed technical approach to the RoCKIn challenges, while individuals had to submit a personal curriculum vitae. Selected individuals were assigned to teams.

Though the camps had different purposes, as can be understood from their summaries above, they were all structured similarly, i.e., including lectures by experts on particular topics and hands‐on experiments with the robots and the test beds. Mini‐competitions and awards for the best teams were created so as to encourage team commitment and performance during the hands‐on sessions.

The 2015 Field Exercise was particularly interesting because it took place at the ECHORD++ Robotics Innovation Facility (RIF) of Peccioli, Italy, funded by the European Commission. Teams gained access to the state‐of‐the‐art ECHORD++ domestic test bed and to the RoCKIn@Work industrial test bed, and had the chance to practice and improve their performance in the task and functionality benchmarks, thus showing the portability of the industrial test bed and the ability to set up different test beds all over Europe according to the RoCKIn rules. The domestic test bed was equipped with the RoCKIn ground‐truth system for data gathering and allowed teams to get detailed feedback on their performance. In this way, public funding is leading to a network of RIFs, comprising the RoCKIn test beds, the existing ECHORD++ RIFs, and other test beds recently certified by the RockEU2 project, within the frame of the new European Robotics League (ERL), where robotics researchers can go to benchmark their newly developed algorithms.

4. Disseminating robotics

Robot competitions have a crucial role in disseminating robotics research to the academic and industrial stakeholder communities, attracting young people to science and technology careers, and showing lay citizens the impact of robotics technology on societal developments. Thus, dissemination activities focusing on the relevance of robot competitions had an important role in RoCKIn. These activities can be organized into three major categories as follows:

  • Presence on the web and social media: a regularly updated web page; a Facebook page and a Twitter account, also regularly updated, especially during major project events such as the camps and the competitions; videos summarizing the RoCKIn Camp 2014, the RoCKIn Field Exercise 2015, and the RoCKIn Competitions 2014 and 2015, produced and made available online on the RoCKIn website and the RoCKIn YouTube channel; and videos describing the main goals of the benchmarks involved in the RoCKIn challenges, also produced and made available on the RoCKIn website.

  • Publications, presence in major robotics conferences and workshop organization:

    • one key paper about the scoring and benchmarking methods used and the project activities was published in the IEEE Robotics & Automation Magazine [9];

    • presence at several scientific conferences, exhibitions, and industrial fairs, such as RoboCup 2013 (Eindhoven), IEEE ICRA 2013 (Karlsruhe), IEEE/RSJ IROS 2013 (Tokyo), IEEE ICAR 2013 (Montevideo), ISR/ROBOTIK 2014 @AUTOMATICA 2014 (Munich), the EuRoC Challenge Design Workshop (Munich, 2014), IEEE ICRA 2014 (Hong Kong), IEEE/RSJ IROS 2014 (Chicago), INNOROBO (Lyon), IEEE/RSJ IROS 2015 (Hamburg), and ICT 2015 (Lisbon), where RoCKIn won the award for the best booth in the TRANSFORM area;

    • workshops on robot competitions, co‐organized with the euRathlon and the EuRoC projects, during the European Robotics Forums (ERFs) in Lyon (2013), Rovereto (2014), Vienna (2015), and Ljubljana (2016).

  • Event co‐location:

    • euRobotics AISBL decided to move, for the first time, the communication center of the European Robotics Week to La Cité de L’Espace, in Toulouse, during RoCKIn Competition 2014;

    • RoCKIn Competition 2014 satellite events: Les Journées Nationales de la Robotique Interactive—organized by LAAS/CNRS (French academic conference); Friendliness made in Midi‐Pyrénées (academia/industry networking event); Robotics EU Regions: Tell Me Who You Are (workshop on EU Robotics clusters and regions); Meetings of euRobotics Technology Topic Groups;

    • RoCKIn Competition 2015 satellite events: ROBOT2015—2nd Iberian Robotics Conference; EU Robotics Clusters Workshop (for Portuguese companies)—leading later to the setup of the Lisboa Robotics Cluster.

As part of the technical dissemination outputs of the project, two test beds were designed and built according to the open‐source specifications in the rulebooks and are available for research visits by groups worldwide interested in benchmarking their approaches:

  • RoCKIn@Home test bed at the Institute for Systems and Robotics of Instituto Superior Técnico, U. Lisboa, Portugal.

  • RoCKIn@Work test bed at Bonn‐Rhein‐Sieg University of Applied Sciences, Sankt Augustin, Germany.

Test bed details are provided in Chapters 2 and 3 of this book.

5. Major outcomes

An estimated 100 participants took part in the different activities (camps and competitions) organized within RoCKIn’s frame. Many of them were new to robot competitions. Thus, one of RoCKIn’s top contributions was to build a larger community of robotics researchers interested in competitions in Europe.

Novel scientific and technological results are among RoCKIn’s major outcomes:

  • Scoring methods and metrics to evaluate and compare performance of different robot systems designed to solve given challenges, both at the task and functionality levels.

  • Benchmarking methods and metrics to study the impact of functionality performance on task performance.

  • Open source design specifications for the test beds and rulebooks of each challenge, which take into consideration the scoring and benchmarking requirements, together with problems whose solution requires pushing the state of the art in robotics research.

Before we highlight the main contributions under the above three topics, a brief description of RoCKIn’s benchmarking and scoring systems is in order (detailed later in Chapter 4 of this book).

RoCKIn’s approach to benchmarking experiments is based on the definition of two separate, but interconnected, types of benchmarks:

  • Functionality Benchmarks, which evaluate the performance of hardware/software modules dedicated to single, specific functionalities in the context of experiments focused on such functionalities.

  • Task Benchmarks, which assess the performance of integrated robot systems facing complex tasks that usually require the interaction of different functionalities.

Of the two types, Functionality Benchmarks are certainly the closest to a scientific experiment, owing to their much more controlled setting and execution. On the other hand, these specific aspects of Functionality Benchmarks limit their capability to capture all the important aspects of the overall robot performance in a systemic way. More specifically, emerging system‐level properties, such as the quality of integration between modules, cannot be assessed with Functionality Benchmarks alone. For this reason, RoCKIn complements them with Task Benchmarks.

In particular, evaluating only the performance of the integrated system is interesting from the application standpoint, but it allows neither evaluating the single modules that contribute to the global performance nor pointing out the aspects that need attention to push their development forward. On the other hand, the good performance of a module does not necessarily mean that it will perform well in the integrated system. For this reason, RoCKIn benchmarking targets both aspects and enables a deeper analysis of a robot system by combining system‐level and module‐level benchmarking.

System‐level and module‐level tests do not investigate the same properties of a robot. The module‐level test has the benefit of focusing only on the specific functionality that a module is devoted to, removing interference due to the performance of other modules, which are intrinsically connected at the system level. For instance, if the grasping performance of a mobile manipulator is tested by having it autonomously navigate to the grasping position, visually identify the item to be picked up, and finally grasp it, the effectiveness of the grasping functionality is affected by the actual position where the navigation module stopped the robot and by the precision of the vision module in retrieving the pose and shape of the item. On the other hand, if the grasping benchmark is executed by placing the robot in a predefined known position and by feeding it with precise information about the item to be picked up, the final result will be almost exclusively due to the performance of the grasping module itself. The first benchmark can be considered a “system‐level” benchmark, because it involves more than one functionality of the robot, and thus has limited worth as a benchmark of the grasping functionality. On the contrary, the latter test can assess the performance of the grasping module with minimal interference from other modules and with high repeatability: it can be classified as a “module‐level” benchmark.

5.1. Scoring methods and metrics to evaluate robot systems performance

The scoring framework for performance evaluation of robot systems in the Task Benchmarks of the RoCKIn@Home and RoCKIn@Work competitions is the same for all Task Benchmarks, and it is based on the concept of performance classes used for the ranking of robot performance in a specific task.

The performance class to which a robot is assigned is determined by the number of achievements (or goals) that the robot reaches during its execution of the task. Within each class (i.e., a performance equivalence class), ranking is defined according to the number of penalties assigned to the robot. Penalties are assigned to robots that, in the process of executing the assigned task, make one or more of the errors defined in a task‐specific list associated with the Task Benchmark.

Performance classes and penalties for a Task Benchmark are task‐specific, but they are grouped for all tasks according to three sets as follows:

  • set of disqualifying behaviors, i.e., things that the robot must not do;

  • set of achievements (also called goals), i.e., things that the robot should do;

  • set of penalizing behaviors, i.e., things that the robot should not do.

One key property of this scoring system is that a robot that executes the required task completely will always be placed into a higher performance class than a robot that executes the task partially. In fact, penalties do not change the performance class assigned to a robot and only influence intra‐class ranking.
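
A minimal sketch of this ranking logic (in Python, with hypothetical robot names and counts; not the actual RoCKIn scoring software) sorts robots first by performance class, i.e., by the number of achievements, and only within each class by the number of penalties, with disqualified robots ranked last:

    # Illustrative ranking by performance class (achievements) and intra-class penalties.
    # Robot names and counts are hypothetical.
    robots = [
        {"name": "robot_1", "achievements": 5, "penalties": 2, "disqualified": False},
        {"name": "robot_2", "achievements": 5, "penalties": 0, "disqualified": False},
        {"name": "robot_3", "achievements": 3, "penalties": 0, "disqualified": False},
        {"name": "robot_4", "achievements": 6, "penalties": 1, "disqualified": True},
    ]

    def rank_key(r):
        # Disqualified robots go last; otherwise higher class first, fewer penalties first.
        return (r["disqualified"], -r["achievements"], r["penalties"])

    for position, robot in enumerate(sorted(robots, key=rank_key), start=1):
        print(position, robot["name"])
    # 1 robot_2, 2 robot_1, 3 robot_3, 4 robot_4

Note that, as stated above, penalties only reorder robots within the same performance class; they never demote a robot below another robot that achieved less.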

It is not possible to define a single scoring framework for all Functionality Benchmarks, as was done for Task Benchmarks. These are specialized benchmarks, tightly focused on a single functionality, assessing how it operates and not (or not only) the final result of its operation. As a consequence, scoring mechanisms for Functionality Benchmarks cannot ignore how the functionality operates, and metrics are strictly connected to the features of the functionality. For this reason, unlike for Task Benchmarks, scoring methodologies and metrics are defined separately for each Functionality Benchmark of a competition.

In RoCKIn, Functionality Benchmarks are defined by four elements, as listed below (a minimal data‐structure sketch follows the list):

  • Description: a high‐level, general description of the functionality.

  • Input/output: the information available to the module implementing the functionality when executed, and the expected outcome.

  • Benchmarking data: the data needed to evaluate the performance of the functional module.

  • Metrics: algorithms to process benchmarking data in an objective way.
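
As an illustration only, the four elements above could be captured in a data structure such as the following sketch (field names and the example benchmark are assumptions, not the RoCKIn specification):

    from dataclasses import dataclass, field
    from typing import Any, Callable, Dict, List

    @dataclass
    class FunctionalityBenchmark:
        """Illustrative container for the four defining elements of a Functionality Benchmark."""
        description: str                 # high-level description of the functionality
        inputs: Dict[str, Any]           # information available to the module under test
        expected_output: str             # expected outcome of the module
        benchmarking_data: List[str] = field(default_factory=list)   # data to log for evaluation
        metric: Callable[[Dict[str, Any]], float] = lambda data: 0.0  # algorithm scoring the logged data

    # Hypothetical example: an object-recognition benchmark scored by classification accuracy.
    object_recognition = FunctionalityBenchmark(
        description="Recognize household objects placed on a table",
        inputs={"rgbd_frames": "stream", "object_catalogue": "list of known object classes"},
        expected_output="class, instance, and pose for each presented object",
        benchmarking_data=["ground-truth labels", "robot classifications", "estimated poses"],
        metric=lambda data: sum(data["correct"]) / len(data["correct"]),
    )
    print(object_recognition.metric({"correct": [True, True, False, True]}))  # 0.75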

5.2. Benchmarking methods

The availability of both task and functionality rankings opens the way for the quantitative analysis of the importance of single functionalities in performing complex tasks. This is an innovative aspect triggered by the RoCKIn approach to competitions.

To quantify the importance of a functionality in performing a given task, RoCKIn borrows the concept of Shapley value from game theory. Let us assume that a coalition of players (functionalities, in the RoCKIn context) cooperates and obtains a certain overall gain from that cooperation (the Task Benchmark score, in the RoCKIn context). Since some players may contribute more to the coalition than others or may possess different bargaining power (e.g., threatening to destroy the whole surplus), the goal is to adequately quantify how important each functionality is for reaching a given performance in a Task Benchmark.
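
A minimal sketch of this computation (with a hypothetical characteristic function assigning a Task Benchmark score to each coalition of functionalities; the numbers are illustrative, not RoCKIn data):

    from itertools import combinations
    from math import factorial

    # Hypothetical characteristic function: Task Benchmark score obtained by each
    # coalition of functionalities. Values are illustrative only.
    functionalities = ["navigation", "perception", "grasping"]
    score = {
        frozenset(): 0.0,
        frozenset({"navigation"}): 2.0,
        frozenset({"perception"}): 1.0,
        frozenset({"grasping"}): 0.0,
        frozenset({"navigation", "perception"}): 5.0,
        frozenset({"navigation", "grasping"}): 3.0,
        frozenset({"perception", "grasping"}): 2.0,
        frozenset({"navigation", "perception", "grasping"}): 8.0,
    }

    def shapley_value(player, players, v):
        """Average marginal contribution of `player` over all coalitions not containing it."""
        n = len(players)
        others = [p for p in players if p != player]
        value = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                value += weight * (v[s | {player}] - v[s])
        return value

    for f in functionalities:
        print(f, round(shapley_value(f, functionalities, score), 3))
    # The three values sum to v(all functionalities) - v(empty coalition) = 8.0

In the RoCKIn setting, the characteristic function would be estimated from the Task and Functionality Benchmark scores observed during the competitions rather than fixed a priori as in this sketch.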

5.3. Rulebooks, test beds, and datasets

The RoCKIn@Home test bed (see Figure 2) consists of the environment in which the competitions took place, including all the objects and artifacts in the environment, and the equipment brought into the environment for benchmarking purposes. An aspect that is comparatively new in robot competitions is that RoCKIn@Home is, to the best of our knowledge, the first open competition targeting an environment with ambient intelligence, i.e., the environment is equipped with networked electronic devices (lamps, motorized blinds, and IP cams) with which the robot can communicate and interact, enabling the robot to exert control over certain environment artifacts.

Figure 2.

RoCKIn@Home test bed, including the trusses for the MCS cameras on the right.

The RoCKIn@Home rulebook specifies in detail:

  • The environment structure and properties (e.g., spatial arrangement, dimensions, and walls).

  • Task‐relevant objects in the environment, divided into three classes:

    • Navigation‐relevant objects: objects that have extent in physical space and do (or may) intersect (in 3D) with the robot’s navigation space, and which must be avoided by the robots.

    • Manipulation‐relevant objects: objects with which the robot may have manipulative interactions (e.g., touching, grasping, lifting, holding, pushing, and pulling).

    • Perception‐relevant objects: objects that the robot must be able to perceive (in the sense of detecting the object by classifying it into a class, e.g., a can; recognizing the object as a particular instance of that class, e.g., a 7up can; and localizing the object pose in a predetermined environment reference frame).

During the benchmark runs executed in the test bed, a human referee enforces the rules. This referee must have a way to transmit his or her decisions to the robot, and to receive some progress information, during the run and without directly interacting with the robot. To achieve this in a practical way, an assistant referee is seated at a computer and communicates verbally with the main referee. The assistant referee operates the Referee, Scoring and Benchmarking Box (RSBB). Besides basic starting and stopping functionality, the RSBB is also designed to receive scoring input and to provide fine‐grained benchmark control for the functionality benchmarks that require it. In the future, it will be extended to also provide information to the public and the teams about the evolution of the robot during the task.
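
Purely as an illustration of this kind of interface, a robot‐side client could subscribe to start/stop commands from the referee box and publish coarse progress messages over ROS topics; the topic names and message types below are assumptions, not the actual RSBB protocol:

    #!/usr/bin/env python
    # Illustrative robot-side client for a referee box, using plain ROS topics.
    # Topic names and message types are assumptions, not the actual RSBB interface.
    import rospy
    from std_msgs.msg import String

    class RefereeBoxClient:
        def __init__(self):
            self.running = False
            # Publish coarse progress information back to the referee box.
            self.progress_pub = rospy.Publisher("/rsbb/progress", String, queue_size=10)
            # Listen for start/stop commands issued by the referees.
            rospy.Subscriber("/rsbb/command", String, self.on_command)

        def on_command(self, msg):
            if msg.data == "start":
                self.running = True
            elif msg.data == "stop":
                self.running = False

        def report(self, text):
            if self.running:
                self.progress_pub.publish(String(data=text))

    if __name__ == "__main__":
        rospy.init_node("rsbb_client_example")
        client = RefereeBoxClient()
        rospy.spin()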

The RoCKIn@Work test bed (see Figure 3) consists of the environment in which the competitions took place (the RoCKIn’N’RoLLIn medium‐sized factory, specialized in the production of small‐ to medium‐sized lots of mechanical parts and assembled mechatronic products, which integrates incoming shipments of damaged or unwanted products and raw material into its production line), including all the objects and artifacts in the environment, and the equipment brought into the environment for benchmarking purposes. An aspect that is comparatively new in robot competitions is that RoCKIn@Work is, to the best of our knowledge, the first industry‐oriented robot competition targeting an environment with ambient intelligence, i.e., the environment is equipped with networked electronic devices (e.g., a drilling machine, a conveyor belt, a force‐fitting machine, and a quality control camera) with which the robot can communicate and interact, allowing the robot to exert control over certain environment artifacts such as conveyor belts or machines.

Figure 3.

RoCKIn@Work test bed, including the trusses for the MCS cameras on the right.

The RoCKIn@Work rulebook specifies in detail:

  • The environment structure and properties (e.g., spatial arrangement, dimensions, and walls).

  • Typical factory objects in the environment to manipulate and to recognize.

The main idea of the RoCKIn@Work test bed software infrastructure is to have a central server‐like hub (the RoCKIn@Work Central Factory Hub (CFH), equivalent to the RoCKIn@Home RSBB) that provides all the services needed to execute and score tasks and to successfully run the competition. This hub is derived from software systems well known in industrial business (e.g., SAP). It provides the robots with information regarding the specific tasks and tracks the production process as well as stock and logistics information of the RoCKIn’N’RoLLIn factory. It is a plug‐in‐driven software system: each plug‐in is responsible for a specific task, functionality, or other benchmarking module.
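
To illustrate the plug‐in idea (class and plug‐in names here are hypothetical and not taken from the actual CFH code), a minimal plug‐in registry could look as follows:

    # Illustrative plug-in registry in the spirit of a hub where each plug-in handles
    # one task, functionality, or benchmarking module. Names are hypothetical.
    class HubPlugin:
        name = "base"

        def handle(self, request):
            raise NotImplementedError

    class DrillingMachinePlugin(HubPlugin):
        name = "drilling_machine"

        def handle(self, request):
            return {"status": "drilling", "order_id": request.get("order_id")}

    class StockInfoPlugin(HubPlugin):
        name = "stock_info"

        def handle(self, request):
            return {"status": "ok", "in_stock": {"bearing": 12, "motor": 3}}

    class CentralFactoryHub:
        def __init__(self):
            self.plugins = {}

        def register(self, plugin):
            self.plugins[plugin.name] = plugin

        def dispatch(self, plugin_name, request):
            return self.plugins[plugin_name].handle(request)

    hub = CentralFactoryHub()
    hub.register(DrillingMachinePlugin())
    hub.register(StockInfoPlugin())
    print(hub.dispatch("drilling_machine", {"order_id": 42}))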

Both RoCKIn test beds include benchmarking equipment. RoCKIn benchmarking is based on the processing of data collected in two ways:

  • internal benchmarking data, collected by the robot system under test;

  • external benchmarking data, collected by the equipment embedded into the test bed.

External benchmarking data are generated by the RoCKIn test bed with a multitude of methods, depending on their nature. One of the types of external benchmarking data used by RoCKIn is pose data about robots and/or their constituent parts. To acquire these, RoCKIn uses a camera‐based commercial motion capture system (MCS), composed of dedicated hardware and software. Benchmarking data have the form of a time series of poses of rigid elements of the robot (such as the base or the wrist). Once generated by the MCS, pose data are acquired and logged by a customized external software system based on the Robot Operating System (ROS).

Pose data are especially significant because they are used for multiple benchmarks. There are other types of external benchmarking data that RoCKIn acquires; however, these are usually collected using devices that are specific to the benchmark.
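
A minimal sketch of such an external pose logger, written as a ROS node that subscribes to a pose topic and appends timestamped poses to a CSV file (the topic name and output path are assumptions, not the RoCKIn logging software):

    #!/usr/bin/env python
    # Illustrative ROS logger for externally measured poses (e.g., from a motion capture
    # system republished as PoseStamped messages). Topic name and output path are assumptions.
    import csv
    import rospy
    from geometry_msgs.msg import PoseStamped

    class PoseLogger:
        def __init__(self, topic="/mcs/robot_base_pose", path="robot_base_poses.csv"):
            self.writer = csv.writer(open(path, "w"))
            self.writer.writerow(["stamp", "x", "y", "z", "qx", "qy", "qz", "qw"])
            rospy.Subscriber(topic, PoseStamped, self.on_pose)

        def on_pose(self, msg):
            p, q = msg.pose.position, msg.pose.orientation
            self.writer.writerow([msg.header.stamp.to_sec(), p.x, p.y, p.z, q.x, q.y, q.z, q.w])

    if __name__ == "__main__":
        rospy.init_node("pose_logger_example")
        PoseLogger()
        rospy.spin()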

Finally, the equipment to collect external benchmarking data includes any server that is part of the test bed and that the robot subjected to a benchmark has to access as part of the benchmark. Communication between such servers and the robot is performed via the test bed’s own wireless network.

6. Impact and future expectations

RoCKIn’s impact in the upcoming years is expected to be supported mostly by the scoring and benchmarking methods, as well as the test bed specifications, developed during the project lifetime, as they progressively (fully or partially) migrate to new European (the European Robotics League [10]) and worldwide (RoboCup [6]) robot competitions. Research on benchmarking robot systems is also expected to be boosted by the introduced methods, as well as by exploiting RoCKIn’s major outcomes: the RoCKIn rulebooks, test beds, and datasets.

We have asked members of our Advisory Board about the potential impact of RoCKIn in several directions. The following are some of their important remarks, based on each advisor’s background, experience, and/or role in the scientific/industrial community:

  • Prospects are bright for the preparation of a proposal for a European Robotics League, aimed at designing robot competitions as benchmarking experiments, where scoring methods encourage reproducibility and repeatability of experiments and provide methods to measure performance (e.g., the error between actual outputs and ground truth), while keeping the excitement of addressing a challenge and of competing with other teams to achieve the best solution to a common problem. Rules should foster developing functionalities that can be used and combined in the tasks. An algorithm‐ and code‐sharing repository of software modules per challenge should be developed.1 Datasets from the competitions should be made available, and teams should be encouraged to log their runs and provide their datasets.

  • RoCKIn@Work is very interesting for promoting the evolution from traditional industrial robotics to an emerging service robotics in manufacturing scenarios, where robots move and share space and tasks with humans. This scenario has a huge potential in SMEs (thousands of companies all around Europe), but two aspects need to be addressed: cost and flexibility (e.g., regarding setup, easy programming methods, and adaptability to changes in the product or the production). The use of a perception system to detect objects, including their location, is very interesting as a key functionality to achieve flexibility in manufacturing with robots. Perception is also a powerful tool to compensate for the lack of accuracy of less specific robots and grippers.

  • There is a fundamental gap between academic robotics research and robotics applications. A high technology readiness level can only be achieved for the latter if complete robotic system architectures are developed and evaluated in a strongly system‐oriented manner. However, the efforts involved are often left out in academic research, although they are a prerequisite for transferring research results to real robotics applications. Robot competitions can fill this gap, as they reward participants’ efforts in systems development with good results. RoCKIn’s methodologies for robot system integration, approached from a scientific viewpoint, enable more systematic benchmarking in competitions for intelligent robots and help move a rather “hands‐on” way of organizing robot competitions toward a more system‐oriented research methodology. This pushes academic research toward methodologies for integration from a scientific standpoint.

  • RoCKIn has also laid the basis for an aspect of future intelligent personal robots in industrial and home environments that is not yet well addressed: standardization and certification. By meeting such standards with their robots, European robot manufacturers will be able to promote their high‐tech robot developments much better in competition with other vendors on the international market.

The European Robotics League (ERL), whose foundations were laid during discussions that took place during RoCKIn, started in early 2016 and aims to become a sustainable distributed format (i.e., not a single big event) similar to that of the European football Champions League, where the role of the national leagues is played by existing local test beds (e.g., the RoCKIn test beds, but also the ECHORD++ RIFs), used as meeting points for “matches” where one or more teams visit the home team for a small tournament. This format also exploits arenas temporarily available during major competitions in Europe, allowing the realization of larger events with more teams. According to this new format, teams get “performance points” in a given challenge for each tournament they participate in and are ranked based on the points accumulated over the year. Teams are encouraged to arrive 1–2 weeks before the actual competition/event so as to participate in integration weeks, where the hosting institution provides technical support on using the local infrastructure (referee boxes, data acquisition and logging facilities, etc.). Local tournaments take place in currently available test beds, while major tournaments are part of RoboCup and other similar events. The ERL has established a certification process to assess new candidate test beds as RIFs for both challenges, based on the rulebook specifications and the implementation of the proper benchmarking and scoring procedures. This will enable the creation of a network of European robotics test beds with the specific purpose of benchmarking domestic robots, innovative industrial robotics applications, and factory‐of‐the‐future scenarios.

The pool of ideas to extend and exploit RoCKIn’s scientific and technological results is large and exciting. We list here the most relevant ones that came out of the 3 years of the RoCKIn experience, from the consortium members, the Experts Board members, and from robotics researchers in general:

  • The benchmarking infrastructure, both software and hardware, was an impressive and distinctive feature of RoCKIn with respect to other existing robot competitions and challenges. A standardized, and preferably low‐cost, hardware infrastructure with open‐source software for automated measurements and dataset dissemination, together with guidelines for equipment and software setup, to be used as a reference for other competitions and research laboratories, is the way forward.

  • Assess robustness in performance scoring: among other examples, the ability to deal with WLAN failures (or reduced bandwidth, or high latency) should be one of the aspects tested, since this is essential to real autonomy and deployability (namely in home scenarios), possibly penalizing excessive use of bandwidth.

  • Advance toward the introduction of the semantic level, using semantic tags, i.e., all data should be accompanied by semantic metadata that describe the intention of the robot actions, as well as the progress that the robot is making toward this intention, at least according to what its own executor process assesses as progress, including the logging of the associated tolerances regarding the error of what the robot accomplishes with respect to the desired goal(s).

  • In the scoring system, trace steps toward a better balance between human judgment and satisfaction as the ultimate goal and indicators that can be objectively measured, possibly including additional user‐oriented metrics such as acceptability, usability, or perceived utility.

  • Develop (graphical) user interfaces and provide real‐time data to fill the slots of a dashboard displaying information to the attending public, e.g., information about the state of the robot actions, such as grasping an object and whether the robot thinks it has actually grasped it successfully; this will force the teams to monitor and diagnose the performance of their robot systems and not only produce and store data.

  • Use the RoCKIn approach as a playground for open innovation, where several teams contribute components that need to be integrated in a “standardized” manner to build up a successful “mixed team”; domotics companies, Internet of Things research groups, and care technology providers should be targeted and challenged to provide infrastructure and/or components.

  • Start a community effort to develop a formal language to describe robotic scenarios, robotic tasks, and robotic benchmarks, with the goal of reducing the size and increasing the objectivity of the rulebooks, so as to describe domains and tasks in a compact but nonambiguous way.

  • Provide the challenge rules with different levels of difficulty, so as to enable teams with different expertise levels to enter the competitions, e.g., encouraging undergraduate as well as PhD students and researchers from companies.

  • Enforce the usage of computer vision and computer graphics (which are emerging and trending topics in industry) in some parts of the rules, e.g., favoring visual localization and mapping.

  • Bring into play issues such as safety and privacy protection for robots working with aged people at home.

Project information and contacts

The RoCKIn consortium is composed of the following partners:

  • Instituto Superior Técnico, project coordinator

  • Università di Roma “La Sapienza”

  • Hochschule Bonn‐Rhein‐Sieg

  • KUKA Roboter GmbH

  • Politecnico di Milano

  • InnoCentive

Advisory Board Members:

  • Adam Jacoff, NIST, USA

  • Bill Smart, Oregon State University, USA

  • Bruno Siciliano, University of Naples Federico II, Italy

  • Jon Agirre Ibarbia, Tecnalia, Spain

  • Manuela Veloso, Carnegie‐Mellon University, USA

  • Oskar von Stryk, Technical University of Darmstadt, Germany

  • XiaoPing Chen, University of Science and Technology of China, China

Experts board (reports on the competition events):

  • Alessandro Saffiotti, Örebro University, Sweden

  • Herman Bruyninckx, University of Leuven, Belgium

  • Tijn van der Zant, University of Groningen, The Netherlands

RoCKIn contacts:

References

  1. RoCKIn Project. Project Website [Internet]. 2014. Available from: http://rockinrobotchallenge.eu/ [Accessed: May 26, 2017]
  2. DARPA. DARPA Robotics Challenge [Internet]. 2015. Available from: http://archive.darpa.mil/roboticschallenge/ [Accessed: May 26, 2017]
  3. Behnke S. Robot competitions—Ideal benchmarks for robotics research. In: Proceedings of the IROS Workshop on Benchmarks in Robotics Research; 9-15 October 2006; Beijing, China. 2006
  4. Bonasso P, Dean T. A retrospective of the AAAI robot competitions. AI Magazine. 1997;18(1):11-23
  5. Bräunl T. Research relevance of mobile robot competitions. IEEE Robotics and Automation Magazine. 1999;6(4):32-37
  6. The RoboCup Federation. RoboCup [Internet]. 2016. Available from: http://www.robocup.org [Accessed: May 26, 2017]
  7. Holz D, Iocchi L, van der Zant T. Benchmarking intelligent service robots through scientific competitions: The RoboCup@Home approach. In: Proceedings of the AAAI Spring Symposium on Designing Intelligent Robots: Reintegrating AI II; 25-27 March 2013; Palo Alto, California, USA. 2013. pp. 27-32
  8. Kraetzschmar G, Hochgeschwender N, Nowak W, Hegger F, Schneider S, Dwiputra R, Berghofer J, Bischoff R. RoboCup@Work: Competing for the factory of the future. In: Proceedings of the RoboCup 2014 Symposium; 25 July 2014; João Pessoa, Brazil. Springer; 2015
  9. Amigoni F, Bastianelli E, Berghofer J, Bonarini A, Fontana G, Hochgeschwender N, Iocchi L, Kraetzschmar G, Lima P, Matteucci M, Miraldo P, Nardi D, Schiaffonati V. Competitions for benchmarking: Task and functionality scoring complete performance assessment. IEEE Robotics & Automation Magazine. 2015;22(3):53-61. DOI: 10.1109/MRA.2015.2448871
  10. euRobotics. European Robotics League [Internet]. 2016. Available from: https://eu‐robotics.net/robotics_league/ [Accessed: May 26, 2017]

Notes

  • Teams are requested to make their code public on a voluntary basis, since there may be situations (e.g., if they are using third‐party software or protected arts) where this may not be adequate.
