Open access peer-reviewed chapter

System Level Design and Conception of a System-on-a-Chip (SoC) for Cognitive Robotics

By Diego Stéfano Fonseca Ferreira, Augusto Loureiro da Costa, Wagner Luiz Alves De Oliveira and Alejandro Rafael Garcia Ramirez

Submitted: December 4th, 2020. Reviewed: May 31st, 2021. Published: July 16th, 2021.

DOI: 10.5772/intechopen.98643

Abstract

In this work, a system level design and conception of a System-on-a-Chip (SoC) for the execution of cognitive agents in robotics will be presented. The cognitive model of the Concurrent Autonomous Agent (CAA), which was already successfully applied in several robotics applications, is used as a reference for the development of the hardware architecture. This cognitive model comprises three levels that run concurrently, namely the reactive level (a perception-action cycle that executes predefined behaviours), the instinctive level (which receives goals from the cognitive level and uses a knowledge based system for selecting behaviours in the reactive level) and the cognitive level (planning). For the development of such a system level hardware model, the C++ library SystemC with Transaction Level Modelling (TLM) 2.0 will be used. A system model of a module that executes a knowledge based system is presented, followed by a system level description of a processor dedicated to the execution of the Graphplan planning algorithm. The buses interconnecting these modules are modelled by the TLM generic payload. Results from simulated experiments with complex knowledge bases for solving planning problems in different robotics contexts demonstrate the correctness of the proposed architecture. Finally, a discussion on performance gains is presented.

Keywords

  • Autonomous Agents
  • Robotics
  • Hardware Design
  • Knowledge Based Systems
  • Transaction Level Modelling

1. Introduction

Behaviour-based robotics is a branch of robotics that studies techniques for the interaction of robotic agents with the environment using the perception-action cycle in a coordinated fashion. With the addition of cognition, these agents may use knowledge about the environment to perform more complex tasks [1, 2, 3]. In the context of artificial intelligence, the internal structure of those agents, i.e., their cognitive architecture, dictates how problem solving will take place [4].

An example of a cognitive architecture with successful applications in robotics is the Concurrent Autonomous Agent (CAA), an autonomous agent architecture for mobile robots that has already proven to be very powerful [5, 6, 7]. This agent possesses a three-layer architecture in which each layer is responsible for a different task: the reactive layer runs behaviours with a perception-action cycle; the instinctive layer coordinates reactive behaviour selection; and the cognitive layer does the high-level planning.

In this work, a system level hardware model of a System-on-a-Chip (SoC) for cognitive agents will be presented. This model was inspired by the cognitive architecture of the CAA. Therefore, the CAA will be described in Section 2. In Sections 3 and 4 the Rete and Graphplan algorithms are described, respectively, since they are at the core of the CAA. The SystemC and TLM 2.0 standards, the tools used to construct the models, are presented in Section 5, followed by the presentation of the proposed architecture in Section 6. Results of experiments are shown in Section 7 and some final thoughts and conclusions are presented in Section 8.

2. The concurrent autonomous agent (CAA)

The Concurrent Autonomous Agent (CAA) is a cognitive architecture whose taxonomy is based on the generic model for cognitive agents, which is composed of the reactive, the instinctive and the cognitive levels [8]. The CAA levels are presented in Figure 1 [5, 6], where the message passing between the levels is shown. The cognitive level generates plans that are executed by the instinctive level through the selection of reactive behaviours in the reactive level [6].

Figure 1.

Concurrent autonomous agent architecture.

Both the cognitive and instinctive levels apply a Knowledge Based System (KBS) for knowledge representation and inference. The KBS is composed of a facts base, a rules base and an inference engine, as shown in Figure 2 [6].

Figure 2.

KBS used by the CAA.

The facts base contains atomic logical elements that represent what is currently known about the state of the environment. The rules base contains a set of rules in the format if PREMISE then CONSEQUENT. The premise consists of a conjunction of ungrounded fact patterns that use variables to increase expressiveness. The consequent, in turn, holds instructions on how to modify the facts base and which message should be sent to the other levels, if any. The KBS then goes through the following cycle [9] (a minimal code sketch of this cycle is given after the list):

  • Recognition: identify which rules can be activated by checking whether their premises match the facts in the facts base;

  • Conflict Resolution: among the activated rules (conflict resolution set), decide which should be executed; and

  • Execution: the chosen rule in the conflict resolution phase has its consequent executed.
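
For concreteness, the following is a minimal C++ sketch of this recognition/conflict-resolution/execution cycle. The Fact and Rule representations, and the trivial conflict-resolution policy, are assumptions made only for this example; they are not the data structures or the policy used by the CAA.

#include <functional>
#include <set>
#include <string>
#include <vector>

using Fact = std::string;           // e.g. "(arm at reset)" -- illustrative encoding
using FactsBase = std::set<Fact>;

struct Rule {
  // Premise: true when the rule matches the current facts base.
  std::function<bool(const FactsBase&)> premise;
  // Consequent: modifies the facts base (and would also emit messages to other levels).
  std::function<void(FactsBase&)> consequent;
};

// One recognition / conflict-resolution / execution cycle.
void inference_cycle(FactsBase& facts, const std::vector<Rule>& rules) {
  // Recognition: collect the activated rules (the conflict resolution set).
  std::vector<const Rule*> conflict_set;
  for (const Rule& r : rules)
    if (r.premise(facts)) conflict_set.push_back(&r);
  if (conflict_set.empty()) return;

  // Conflict resolution: here, trivially pick the first activated rule.
  const Rule* chosen = conflict_set.front();

  // Execution: fire the chosen rule's consequent.
  chosen->consequent(facts);
}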

The Rete matching algorithm is applied in the recognition phase to generate the conflict resolution set. The instinctive level uses its KBS to select the appropriate reactive behaviour given the current world state. The cognitive level, in turn, uses its KBS inside the Graphplan algorithm (described later in this chapter), in the state-space expansion stage [10].

3. The Rete algorithm

As mentioned earlier in this chapter, the Rete matching algorithm is employed in the recognition stage of the KBSs used by the CAA. It was proposed in [11] and is named after the Latin word for “network”.

The algorithm builds a graph out of the rules base of the KBS where each node has a special purpose. In the end, it avoids running through the entire facts base for each rule premise, every time a new fact arrives, by saving information about partial matches in some of its nodes [11].

The constructed graph has two portions: the alpha network, responsible for comparing the constants in the premises with the corresponding fields in the incoming fact; and the beta network, which checks variable assignment consistency and maintains the partial matches [10].

The nodes that compose the alpha network are the following [10]:

  • Root Node: entry point for new facts;

  • Constant Test Nodes (CTN): compare constant fields of premises with the corresponding fields in the incoming fact; and

  • Alpha Memories (AM): store facts that successfully passed the tests in the CTNs.

The beta network is composed of the following nodes [10] (a data-structure sketch of both networks is given after this list):

  • Join Nodes (JN): perform tests that ensure variable assignment consistency inside a premise instance (partial match);

  • Beta Memories (BM): store partial matches produced by the JNs; and

  • Production Nodes: terminal nodes for full rule matches.
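
The following C++ sketch shows one possible set of data structures for these nodes. The types and fields are assumptions made for illustration; they are not the representation used inside the Rete processor described later in this chapter.

#include <cstddef>
#include <string>
#include <vector>

// Working memory element (a fact) as a flat list of fields, e.g. {"block", "red", "left"}.
using WME = std::vector<std::string>;
// A token is a partial match: one matched fact per premise joined so far.
using Token = std::vector<const WME*>;

struct AlphaMemory {                    // stores facts that passed the constant tests
  std::vector<const WME*> items;
};

struct ConstantTestNode {               // compares one constant field of a premise
  std::size_t field;                    // index of the fact field to test
  std::string constant;                 // expected constant value
  std::vector<ConstantTestNode*> children;
  AlphaMemory* output = nullptr;        // alpha memory fed when the whole test chain passes
  bool test(const WME& w) const { return w.size() > field && w[field] == constant; }
};

struct BetaMemory {                     // stores partial matches (tokens) produced by join nodes
  std::vector<Token> tokens;
};

struct JoinNode {                       // checks variable-binding consistency between
  BetaMemory* left;                     //   the partial matches accumulated so far and
  AlphaMemory* right;                   //   the facts admitted for the next premise
};

struct ProductionNode {                 // terminal node: a complete token here is a full rule match
  std::string rule_name;
  std::vector<Token> matches;
};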

4. The Graphplan algorithm

The cognitive level uses the Graphplan algorithm to generate the plans that the other levels should execute. Originally, the algorithm used a propositional knowledge representation, so this will be adopted here for the algorithm description. The rest of this section uses [12, 13] as references.

Mathematically, a planning problem may be stated as $P = (\Sigma, s_0, g)$, where $\Sigma = (S, A, \gamma)$ is the problem domain (which comprises the set of states $S$, the set of actions $A$ and a state transformation function $\gamma : S \times A \to S$), $s_0$ is the initial state and $g$ is the goal state.

Each action $a \in A$ has a set $\mathrm{precond}(a)$ of precondition propositions and a set $\mathrm{effects}(a) = \mathrm{effects}^+(a) \cup \mathrm{effects}^-(a)$ of effects. The effects, in turn, may be broken down into two subsets: $\mathrm{effects}^+(a)$, the set of positive propositions (propositions to be added), and $\mathrm{effects}^-(a)$, the set of negative propositions (propositions to be deleted). The applicability condition for an action $a$ in a given state $s$ may be written as $\mathrm{precond}(a) \subseteq s$. The new state produced by the application of $a$ would be $\gamma(s, a) = (s \setminus \mathrm{effects}^-(a)) \cup \mathrm{effects}^+(a)$.
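
Restated in code, the applicability test and the state transformation function can be sketched as below, representing a state as a set of propositions (the types are illustrative, not taken from the chapter):

#include <algorithm>
#include <iterator>
#include <set>
#include <string>

using Proposition = std::string;
using State = std::set<Proposition>;

struct Action {
  State precond;         // precond(a)
  State effects_plus;    // effects+(a): propositions to be added
  State effects_minus;   // effects-(a): propositions to be deleted
};

// Applicability: precond(a) is a subset of s.
bool applicable(const Action& a, const State& s) {
  return std::includes(s.begin(), s.end(), a.precond.begin(), a.precond.end());
}

// State transformation: gamma(s, a) = (s \ effects-(a)) U effects+(a).
State gamma(const State& s, const Action& a) {
  State next;
  std::set_difference(s.begin(), s.end(),
                      a.effects_minus.begin(), a.effects_minus.end(),
                      std::inserter(next, next.end()));
  next.insert(a.effects_plus.begin(), a.effects_plus.end());
  return next;
}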

Consider an action layer $A_j$ and the propositional layer $P_{j-1}$ preceding it. $A_j$ contains all actions $a$ such that $\mathrm{precond}(a) \subseteq P_{j-1}$, and the next propositional layer $P_j$ contains all propositions $p$ such that $p \in P_{j-1}$ or $p \in \mathrm{effects}^+(a)$ for some $a \in A_j$. The so-called planning graph is then built by connecting elements in $P_{j-1}$ to elements in $A_j$ by edges:

  • edges connecting a proposition $p \in P_{j-1}$ to an action $a \in A_j$, such that $p \in \mathrm{precond}(a)$;

  • edges connecting an action $a \in A_j$ to a proposition $p \in P_j$, such that $p \in \mathrm{effects}^+(a)$ (positive arc); and

  • edges connecting an action $a \in A_j$ to a proposition $p \in P_j$, such that $p \in \mathrm{effects}^-(a)$ (negative arc).

If two actions $a_1, a_2 \in A_j$ obey $\mathrm{effects}^-(a_1) \cap (\mathrm{precond}(a_2) \cup \mathrm{effects}^+(a_2)) = \emptyset$ and $\mathrm{effects}^-(a_2) \cap (\mathrm{precond}(a_1) \cup \mathrm{effects}^+(a_1)) = \emptyset$, they are said to be independent; if not, they are dependent, or mutually exclusive (mutex).

Propositions can also be mutex: $p$ and $q$ are mutex if every action in $A_j$ that adds $p$ is mutex with every action in $A_j$ that produces $q$, and there is no action in $A_j$ that adds both $p$ and $q$. Also, if a precondition of an action is mutex with a precondition of another action, the actions are mutex.
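
The independence condition between two actions translates directly into code. The sketch below repeats the illustrative State/Action types from the previous snippet:

#include <algorithm>
#include <iterator>
#include <set>
#include <string>

using Proposition = std::string;
using State = std::set<Proposition>;
struct Action { State precond, effects_plus, effects_minus; };

// True when the two proposition sets share no element.
static bool disjoint(const State& x, const State& y) {
  State common;
  std::set_intersection(x.begin(), x.end(), y.begin(), y.end(),
                        std::inserter(common, common.end()));
  return common.empty();
}

// Independent(a1, a2): effects-(a1) does not touch precond(a2) U effects+(a2), and vice versa.
bool independent(const Action& a1, const Action& a2) {
  State pe2 = a2.precond; pe2.insert(a2.effects_plus.begin(), a2.effects_plus.end());
  State pe1 = a1.precond; pe1.insert(a1.effects_plus.begin(), a1.effects_plus.end());
  return disjoint(a1.effects_minus, pe2) && disjoint(a2.effects_minus, pe1);
}

// Dependent (mutually exclusive, mutex) actions are simply the negation.
bool dependent(const Action& a1, const Action& a2) { return !independent(a1, a2); }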

The algorithm begins by expanding the graph. The pseudo-code for the expansion step is given in Algorithm 1.

Algorithm 1. Planning graph expansion

1: procedure Expand($s_i$) ▷ $s_i$: i-th state layer
2: $A_{i+1} \leftarrow \mathrm{KBS.InferenceCycle}(s_i)$ ▷ $A$: action profiles
3: $s_{i+1} \leftarrow A_{i+1}.\mathrm{effects}^+$
4: $\mu A_{i+1} \leftarrow \{(a,b) \in A_{i+1}^2 \mid a \neq b \wedge (\mathrm{Dependent}(a,b) \vee \exists (p,q) \in \mu s_i : p \in \mathrm{preconds}(a) \wedge q \in \mathrm{preconds}(b))\}$
5: $\mu s_{i+1} \leftarrow \{(p,q) \in s_{i+1}^2 \mid p \neq q \wedge \forall (a,b) \in A_{i+1}^2 : (p \in \mathrm{effects}^+(a) \wedge q \in \mathrm{effects}^+(b)) \Rightarrow (a,b) \in \mu A_{i+1}\}$
6: end procedure

The expansion stops when the goal state $g$ is detected in the state layer $s_i$. This triggers a recursive search for non-mutex actions in all the preceding action layers that could have produced the goal state found in $s_i$. This procedure is composed of the functions Search (Algorithm 2) and Extract (Algorithm 3).

Algorithm 2. Search for non-mutex actions.

1: procedure Search($g$, $\pi_i$, $i$)
2: if $g = \emptyset$ then
3: $\Pi \leftarrow \mathrm{Extract}\big(\bigcup \{\mathrm{preconds}(a) \mid a \in \pi_i\},\ i-1\big)$
4: if $\Pi = \mathrm{Failure}$ then
5: return Failure
6: end if
7: return $\Pi . \pi_i$
8: else
9: select any $p \in g$
10: $\mathit{resolvers} \leftarrow \{a \in A_i \mid p \in \mathrm{effects}^+(a) \wedge \forall b \in \pi_i : (a,b) \notin \mu A_i\}$
11: if $\mathit{resolvers} = \emptyset$ then
12: return Failure
13: end if
14: nondeterministically choose $a \in \mathit{resolvers}$
15: return Search($g \setminus \mathrm{effects}^+(a)$, $\pi_i \cup \{a\}$, $i$)
16: end if
17: end procedure

Algorithm 3. Extract a plan.

1: procedure Extract($g$, $i$)
2: if $i = 0$ then
3: return $\langle\rangle$
4: end if
5: $\pi_i \leftarrow$ Search($g$, $\emptyset$, $i$)
6: if $\pi_i \neq \mathrm{Failure}$ then
7: return $\pi_i$
8: end if
9: return Failure
10: end procedure
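
Putting the three procedures together, the sketch below shows the overall control flow only. Expand and Extract are stubbed placeholders standing for Algorithms 1 and 3 (Extract in turn calls Search), and the graph and plan types are placeholders; it is a structural sketch, not the authors' implementation.

#include <optional>
#include <string>
#include <vector>

using Plan = std::vector<std::string>;           // placeholder plan representation

struct PlanningGraph {                           // placeholder planning-graph representation
  int last_layer = 0;
  bool goal_reached_non_mutex = false;           // goal present and mutex-free in the last state layer
};

void expand(PlanningGraph& g) { ++g.last_layer; /* Algorithm 1 body omitted */ }
std::optional<Plan> extract(const PlanningGraph&, int) { return std::nullopt; /* Algorithms 2-3 omitted */ }

// Top-level loop: expand until the goal appears mutex-free, then try to extract a plan;
// on failure keep expanding (a levelled-off / termination test is omitted in this sketch).
std::optional<Plan> graphplan(PlanningGraph& g, int max_layers) {
  for (int i = 0; i < max_layers; ++i) {
    expand(g);                                            // build the next action and state layers
    if (g.goal_reached_non_mutex)
      if (auto plan = extract(g, g.last_layer)) return plan;   // backward search succeeded
  }
  return std::nullopt;                                    // no plan within the layer bound
}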

5. SystemC and transaction level modelling

This work uses SystemC and TLM 2.0 as modelling and simulation tools, so this section describes them.

5.1 SystemC

In the design of complex digital systems, obtaining a high-level executable specification of the project in the early stages of the design process is useful for detecting errors and validating functionality prior to implementation. This is one of the main advantages of SystemC, a C++ class library for hardware design at various abstraction levels, from system level down to Register Transfer Level (RTL). Figure 3 shows the typical design flow for SystemC projects [14].

Figure 3.

SystemC hardware design flow [14].

The SystemC library contains elements that facilitate the representation of the parallelism of hardware systems. Hardware models in SystemC are represented by modules that may run in parallel, interconnected by ports and channels (Figure 4). In this way, the initial model may contain a few modules representing system level functionality and, as the model gets refined, those initial high-level modules are further divided into more specific interconnected modules, until the RTL is reached [14].

Figure 4.

Typical SystemC RTL module [14].
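
As a small illustration of the module/port style shown in Figure 4, the following is a minimal SystemC module; the module itself (an 8-bit adder) is invented for the example.

#include <systemc.h>

// Minimal RTL-style SystemC module: two data inputs, one registered output.
SC_MODULE(Adder) {
  sc_in<bool>         clk;    // clock port
  sc_in<sc_uint<8> >  a, b;   // input ports
  sc_out<sc_uint<8> > sum;    // output port

  void compute() {            // process triggered on the rising clock edge
    sum.write(a.read() + b.read());
  }

  SC_CTOR(Adder) {
    SC_METHOD(compute);
    sensitive << clk.pos();
  }
};

Modules like this are instantiated in sc_main and interconnected through sc_signal channels bound to their ports; refining a design means replacing one such module by several finer-grained ones.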

5.2 Transaction level modelling

In hardware models at higher levels of abstraction, executing all modules at each time step may produce unnecessary overhead. Thinking of a digital system as components connected by a bus, reading from and writing to it, it would be more efficient to execute modules only when they have messages to send or receive. This is the rationale behind Transaction Level Modelling (TLM), each message exchange being called a transaction [15].

SystemC provides an implementation of TLM called TLM 2.0. It inherits all the SystemC capabilities, mainly the module concept, extending it with sockets, transactions and payloads (Figure 5).

Figure 5.

TLM basic elements [15].
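
To make the socket, transaction and payload terminology concrete, the sketch below shows a minimal blocking TLM 2.0 transaction between an invented initiator and an invented memory-like target, carried by the generic payload (the same mechanism used in this work to model the buses between modules):

#include <cstring>
#include <systemc.h>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Target: small memory-like module answering blocking transport calls.
struct Target : sc_module {
  tlm_utils::simple_target_socket<Target> socket;
  unsigned char mem[64] = {0};

  SC_CTOR(Target) : socket("socket") {
    socket.register_b_transport(this, &Target::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
    unsigned char* data = trans.get_data_ptr();
    sc_dt::uint64 addr = trans.get_address();
    if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
      std::memcpy(&mem[addr], data, trans.get_data_length());
    else
      std::memcpy(data, &mem[addr], trans.get_data_length());
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// Initiator: issues one write transaction through the generic payload.
struct Initiator : sc_module {
  tlm_utils::simple_initiator_socket<Initiator> socket;

  SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }

  void run() {
    tlm::tlm_generic_payload trans;   // the transaction object
    unsigned char value = 42;
    sc_time delay = SC_ZERO_TIME;
    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x10);
    trans.set_data_ptr(&value);
    trans.set_data_length(1);
    socket->b_transport(trans, delay);
  }
};

int sc_main(int, char*[]) {
  Initiator initiator("initiator");
  Target    target("target");
  initiator.socket.bind(target.socket);
  sc_start();
  return 0;
}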

6. Proposed architecture

The hardware architecture proposed in this work takes full advantage of the SystemC and TLM 2.0 capability of developing executable specifications from system level down to RTL. In this sense, the approach employed was to obtain a high-level model and validate its functionality through experiments in a robotics context.

The proposed TLM model is shown in Figure 6. It consists of a modified SystemC model of the Rete processor presented in the authors' previous work [10]. As can be seen in Figure 6, the instinctive module now implements a detailed Rete processor that uses two Content Addressable Memories (CAMs) to implement the knowledge base and an auxiliary stage for test execution.

Figure 6.

TLM model of the SoC.

The Instruction Set Architecture (ISA) of the Rete processor described in [10] is still employed in this model, but now some tasks related to the join nodes in the Rete network are performed separately in the Join Node Module.
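
The chapter does not detail the transaction-level interface of these modules, but purely as a hypothetical sketch of how such a module can be wrapped as a TLM target, the fragment below accepts an encoded fact on a write and returns an encoded behaviour identifier on a read. The 32-bit encoding and the run_rete_cycle hook are placeholders invented for illustration; they are not the processor's actual ISA.

#include <cstdint>
#include <cstring>
#include <systemc.h>
#include <tlm.h>
#include <tlm_utils/simple_target_socket.h>

// Hypothetical TLM wrapper for an instinctive-level module (illustrative only).
struct InstinctiveModule : sc_module {
  tlm_utils::simple_target_socket<InstinctiveModule> socket;
  uint32_t selected_behaviour = 0;

  SC_CTOR(InstinctiveModule) : socket("socket") {
    socket.register_b_transport(this, &InstinctiveModule::b_transport);
  }

  void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
    if (trans.get_command() == tlm::TLM_WRITE_COMMAND) {
      uint32_t fact = 0;                                   // encoded fact taken from the payload
      std::memcpy(&fact, trans.get_data_ptr(), sizeof(fact));
      selected_behaviour = run_rete_cycle(fact);           // placeholder for the Rete machinery
    } else {
      std::memcpy(trans.get_data_ptr(), &selected_behaviour, sizeof(selected_behaviour));
    }
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }

  uint32_t run_rete_cycle(uint32_t /*fact*/) { return 0; } // stub
};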

7. Results

7.1 Problem domain and simulation environment

The experiments were performed using the Webots R2021a robotics simulator. In the context of the CAA, the reactive level of the agent was implemented inside this simulator, in the form of behaviours and controllers that interface with the environment. The planning problem domain, a simplified version of the blocks domain, was also constructed in the simulator. The simulation consisted of three coloured boxes (red, green and blue) disposed in a given order around a KUKA Youbot robot, a mobile robot with a robotic arm and a plate. The simulation environment and the robot in the initial state are shown in Figure 7.

Figure 7.

Simulation environment in the start state.

The planning problem consisted of reordering the blocks from the initial positions shown in Figure 7 so that the red block ends up on the left side of the arm, the blue block on the right and the green block in front.
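
As a small illustration, the start state and the goal of this experiment can be written as proposition sets. The at(colour, side) predicate syntax is an assumption made for this example only, and the start positions are the ones implied by the plan reported in Section 7.2.

#include <set>
#include <string>

using State = std::set<std::string>;

// Illustrative encoding of the experiment (not the chapter's actual representation).
const State start = { "at(green,right)", "at(blue,left)", "at(red,back)" };
const State goal  = { "at(red,left)", "at(blue,right)", "at(green,front)" };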

7.2 Cognitive module results

The operation of this module will be presented through a sequence diagram. The first part of this diagram, shown in Figure 8, shows how this level expanded the planning graph up to what was labelled as the last expansion.

Figure 8.

Sequence diagram for graph expansion.

The Rete and Expansion TLM modules together expand the planning graph: the current state is given as an input to the Rete module, which returns the next action layer. This is done three times, until action layer A2 is reached. The Expansion Module then processes the consequences of the newly added actions, updating the state layer. This time, however, the goal state is present in the state layer, so a transaction is sent to the Search Module to backtrack from the goal state, checking whether the actions that produced it are mutex with any other. If no mutex relation is found, those actions form the plan. As shown in Figure 9, this plan is indeed found in the first backtrack attempt.

Figure 9.

Sequence diagram for finding a plan (continuation ofFigure 8).

As can be seen in Figure 9, during the search for a solution the expansion continues to take place, but it is interrupted when the Search Module reports the solution. The plan found for the given problem was composed of the actions move(green, right, front), move(blue, left, right) and move(red, back, left).

7.3 Instinctive module results

The reactive behaviours for the robotic arm were defined as: going to a reset position; moving left, right, front or back; gripping; and releasing. In order to execute the actions produced by the cognitive level, a knowledge base was created and compiled for the Rete processor using its application-specific ISA. The rules in this knowledge base were “grab” and “put”. Both have as a precondition that the arm is in the reset position, and both use variables to specify the side to grab from and the side to put the block on. The sequence of reactive behaviours activated by the instinctive level during the execution of the first action of the plan (move(green, right, front)) is shown in Figure 10.

Figure 10.

Sequence of arm configurations and the reactive behaviours executed between them.

8. Conclusions

This chapter presented a SoC for cognitive agents that can perform symbolic computations at the hardware level. The cognitive model of the CAA was used as a reference for the development of the hardware system-level model, mapping its instinctive level to a module with an application-specific processor that executes the Rete matching algorithm, and its cognitive level to a module specifically designed for running the Graphplan planning algorithm (also making use of the Rete processor). SystemC and TLM were used to build executable specifications whose functionality could be validated in a robotics context. This version of the model was presented in a unified fashion, using SystemC/TLM modules and threads to generate the executable specification.

The results show that the planning problem was solved by the Cognitive Module of the proposed architecture and successfully executed by its Instinctive Module, which consists of a Rete processor. By using a parallel architecture, the Cognitive Module broke the planning task into concurrent tasks in such a way that the backtrack search for the plan could take place while the graph was still expanding, as shown in Figure 9. In a complex planning problem this is advantageous because the solution usually does not come from the first backtrack search; thus, by not stalling the graph expansion, performance is gained.

In future works, tests with more complex knowledge bases and planning domains will be performed. Also, further refinements should be made in the architecture aiming at synthesis.

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
