
Mobile Robot Feature-Based SLAM Behavior Learning, and Navigation in Complex Spaces

Written By

Ebrahim A. Mattar

Submitted: 15 February 2018 Reviewed: 28 August 2018 Published: 05 November 2018

DOI: 10.5772/intechopen.81195

From the Edited Volume

Applications of Mobile Robots

Edited by Efren Gorrostieta Hurtado


Abstract

Learning a mobile robot's space and navigation behavior is an essential requirement for improved navigation, in addition to gaining a deeper understanding of the navigation maps. This chapter presents mobile robot feature-based SLAM behavior learning and navigation in complex spaces. Mobile intelligence has been based on blending a number of functionalities related to navigation, including learning the main features of SLAM maps. To achieve this, the mobile system was built on diverse levels of intelligence, including principal component analysis (PCA), a neuro-fuzzy (NF) learning system as a classifier, and a fuzzy rule-based decision system (FRD).

Keywords

  • SLAM
  • PCA
  • NF classification
  • fuzzy rule-based decision
  • navigation

1. Introduction

1.1. Study background

Interactive mobile robotics systems have been introduced by researchers worldwide. The main focus of such research directions is how to let a mobile robotic system navigate in an unstructured environment while learning its features. To meet these objectives, mobile robot platforms have to be equipped with AI tools. In particular, Janglová [1] describes an approach for solving the motion-planning problem in mobile robot control using a neural-network-based technique. The proposed system consists of a head artificial neural network, which was used to determine the free space using ultrasound range finder data. In terms of map building with visual mobile robot capabilities, a remote controlled, vision guided mobile robot system was introduced by Raymond et al. [2]. The drive of the work was to describe exploratory research on designing remote controlled emergency stop and vision systems for an autonomous mobile robot. Camera modeling and distortion calibration for mobile robot vision was also introduced by Gang et al. [3]. In their paper, they presented an essential camera calibration technique for mobile robots, based on the PIONEER II experimental platform. Bonin-Font et al. [4] presented map-based and mapless navigation, subdividing the former into metric map-based navigation and topological map-based navigation. Abdul et al. [5] introduced a hybrid approach for vision-based self-localization of autonomous mobile robots. They presented a hybrid approach towards self-localization of tiny autonomous mobile robots in a known but highly dynamic environment; a Kalman filter was used for tracking the globally estimated position. In [8], Filliata and Meyer presented a three-level hierarchy of localization strategies: direct position inference, single-hypothesis tracking, and multiple-hypothesis tracking.

They stated the advantages and drawbacks of these strategies. In [6], Araujo presented a prune-able fuzzy ART neural architecture for robot map learning and navigation, with the proposed methods integrated into a navigation architecture. Intelligence-based navigation was further discussed in [9, 10, 11]. In [12], Vlassis et al. mentioned a "method for building robot maps by using a Kohonen's self-organizing artificial neural network, and describe how path planning can be subsequently performed on such a map". The ANN-related SOM they built is shown in Figure 1. A stereo vision-based autonomous mobile robot was also given by Changhan et al. [13]. In their research, they proposed a technique to give more autonomy to a mobile robot by providing vision sensors. In [14], Thrun reported an approach that integrates two paradigms: grid-based and topological. Intelligent control of a mobile robot based on image processing was also given by Nima et al. [15]. In terms of learning intelligent navigation, intelligent robot control using an adaptive critic with a task control center and dynamic database was introduced by Hall et al. [16]. This involved the development and simulation of a real-time controller for an intelligent, vision-guided robot. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. Stereo vision-based self-localization of autonomous mobile robots was furthermore introduced by Abdul et al. [17], who presented vision-based self-localization of tiny autonomous mobile robots in a known but highly dynamic environment. Learning mobile robots were discussed by Hall et al. [18], who presented recent technical advances in learning for intelligent mobile robots. A novel application of a laser range finder with a vision system for a wheeled mobile robot was presented by Chun et al. [19], whose research presents a trajectory planning strategy for a wheeled mobile robot in an obstructed environment. Vision-based intelligent path following control of a four-wheel differentially driven skid-steer mobile robot was given by Nazari and Naraghi [20]; in this work, a Fuzzy Logic Controller (FLC) for path following of such a robot is presented. A survey of color learning and illumination invariance on mobile robots was given by Mohan et al. [21]; a major challenge to the widespread deployment of mobile robots is the ability to function autonomously. A vision-equipped apelike robot, with two arms and two legs, based on the remote-brained approach and aimed at studying a variety of vision-based behaviors, was given by Masayuki et al. [22]. A localization algorithm for a mobile robot based on single vision and laser radar was presented by Xiaoning [23]; in order to increase the localization precision of the mobile robot, a self-localization algorithm based on odometry, single vision, and laser radar was proposed, with the data fused by means of an Extended Kalman Filter (EKF). Mobile robot self-localization in complex indoor environments using monocular vision and a 3D model was moreover presented by Andreja et al. [24], who considered the problem of mobile robot pose estimation using only visual information from a single camera and odometry readings. Human-observation-based mobile robot navigation in intelligent space was given by Takeshi and Hashimoto [25], who investigated a mobile robot navigation system that can localize the mobile robot correctly and navigate based on observation of human walking. Similar work was also given by Manoj and Ernest [26].

Figure 1.

ANN self-organizing maps for learning mobile robot navigation [1, 6, 7, 8].

1.2. Research objectives

Given the previous background, the work presented here focuses on learning navigation maps with intelligent capabilities. The system is based on a PCA representation of large navigation maps, a neuro-fuzzy classifier, and a fuzzy decision-based system. For the bulky amount of visual and non-visual mobile data measurements (odometry and observations), the approach followed is to reduce the dimensionality of the mobile robot observations using principal component analysis (PCA), thus generating a reduced representation of the navigation map (SLAM); refer to Figure 2 for details. A learning system was used to learn the details of navigation maps, hence to classify the representations (in terms of observation features). The learned system was employed for navigating maps and other mobile robot routing applications.

Figure 2.

(a) A sample of SLAM details for learning; picture source: robot cartography [27]. (b) Mobile robot training patterns generation. A space is represented by the map space's basis.


2. Building navigation maps

2.1. Simultaneous localization and mapping (SLAM)

SLAM is a routine that estimates the pose of a mobile robot while mapping the environment at the same time. SLAM is computationally intensive, since mapping and localization are interdependent: an accurate pose estimate is needed for mapping, and an accurate map is needed for localization. Navigation intelligence during path planning is created by learning spaces once the robot is in motion; this is based on learning the path and navigation behavior. There are four important stages in building SLAM: localization, map building and updates, searching for an optimal path, and planning. Optimal path search is done by A* over occupancy-grid maps. The mobile robot control inputs are given as a set of controls $U_{1:k} = \{u_1, u_2, \ldots, u_k\}$, with mobile parameter measurements (mobile observations) as $Z_{1:k} = \{z_1, z_2, \ldots, z_k\}$. For odometry, refer to Figure 3.

$$x_k = \begin{bmatrix} x_r & m_1 & m_2 & \cdots & m_n \end{bmatrix}^{T} \quad \text{and} \quad C_k = \begin{bmatrix} C_r & C_{r M_n} \\ C_{M_n r} & C_{M_n} \end{bmatrix} \tag{1}$$

Figure 3.

Odometry: generation of navigation patterns for spaces (the maps).

In reference to Figure 3, the mobile robot starts to move in the space, with a target to reach a predefined final position. While the robot is in motion, a SLAM map is built, and measurements are recorded from the mobile observations as $Z_{1:k} = \{z_1, z_2, \ldots, z_k\}$. All recorded observations are considered inputs to the PCA; hence they are tabulated into a predefined format for later processing by the PCA algorithm.
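To make this tabulation step concrete, the following is a minimal sketch (in Python, with hypothetical field names 'ranges' and 'pose') of how recorded observations from several runs might be stacked into the fixed-format matrix consumed by the PCA stage; it is an illustration under stated assumptions, not the chapter's actual implementation (the original coding was done in Matlab):

```python
import numpy as np

def tabulate_observations(runs):
    """Stack per-step observation vectors z_k (range readings + odometry pose)
    from several navigation runs into one matrix X, one row per step.
    `runs` is a list of runs; each run is a list of dicts with hypothetical
    keys 'ranges' (laser/sonar readings) and 'pose' ([x, y, theta])."""
    rows = []
    for run in runs:
        for step in run:
            z_k = np.concatenate([step['ranges'], step['pose']])
            rows.append(z_k)
    X = np.vstack(rows)            # shape: (total_steps, num_features)
    return X - X.mean(axis=0)      # zero-mean columns, as PCA requires
```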

2.2. Monte-Carlo (MC) localization

Monte-Carlo localization is a well-known technique in the literature, and is still being used for estimating localization parameters. In sampling-based methods, one represents the density by a set of $\eta$ random samples, or particles. The goal is then to recursively compute, at each time step $k$, the set of samples $S_k$ that is drawn from the density $p(x_k \mid Z^{k})$. A particularly elegant algorithm to accomplish this has been suggested independently by various authors. In analogy with the formal filtering problem, the algorithm proceeds in two phases. In the first phase, we start from the set of particles $S_{k-1}$ computed in the previous iteration, and apply the motion model to each particle $S_{k-1}^{i}$ by sampling from the density $p(x_k \mid S_{k-1}^{i}, u_{k-1})$: for each particle $S_{k-1}^{i}$, draw one sample $S_k^{i}$ from $p(x_k \mid S_{k-1}^{i}, u_{k-1})$. We have used a motion model and the set of particles $S_{k-1}^{i}$ to build an empirical predictive density function of:

$$p' = f(x, y, \theta, s_r, s_l), \qquad p' = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \dfrac{s_r + s_l}{2}\cos\!\left(\theta + \dfrac{s_r - s_l}{2b}\right) \\ \dfrac{s_r + s_l}{2}\sin\!\left(\theta + \dfrac{s_r - s_l}{2b}\right) \\ \dfrac{s_r - s_l}{b} \end{bmatrix} \tag{2}$$

Eq. (2) describes a blended density approximation to $p(x_k \mid Z^{k-1})$, where $s_r$ and $s_l$ are the right and left wheel displacements and $b$ is the wheelbase. The environment, or the mobile robot space, is highly redundant once used to describe maps.
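As an illustration of the prediction phase, the following minimal Python sketch applies the odometry model of Eq. (2) to a set of particles; the wheel-noise standard deviation and the array shapes are assumptions of this example, not values from the chapter:

```python
import numpy as np

def motion_update(particles, s_r, s_l, b, noise_std=0.01):
    """One MC-localization prediction step: sample each particle from
    p(x_k | S_{k-1}^i, u_{k-1}) using the odometry model of Eq. (2).
    particles: (N, 3) array of [x, y, theta]; s_r, s_l: wheel travel;
    b: wheelbase. The wheel noise std is an assumed tuning value."""
    N = particles.shape[0]
    # Perturb the wheel displacements to realize the sampling step.
    sr = s_r + np.random.normal(0.0, noise_std, N)
    sl = s_l + np.random.normal(0.0, noise_std, N)
    ds = (sr + sl) / 2.0          # mean travel of the robot center
    dth = (sr - sl) / b           # heading change, Eq. (2) third row
    th = particles[:, 2]
    particles[:, 0] += ds * np.cos(th + dth / 2.0)
    particles[:, 1] += ds * np.sin(th + dth / 2.0)
    particles[:, 2] += dth
    return particles
```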


3. Principal component analysis (PCA)

3.1. PCA-based statistical analysis and dimensionality reduction

While in navigation, each traveled path, region, zone, etc. is characterized by a diverse behavior (i.e., features), Figure 4. If $X$ is the matrix representing distances and measurements at each location during navigation, the covariance matrix for the set of maps is considered highly non-diagonal. Mathematically, the previous notation is:

$$\rho = XX^{T} = \begin{bmatrix} \sigma_{1,1}(X) & \cdots & \sigma_{1,\,w\times h}(X) \\ \vdots & \ddots & \vdots \\ \sigma_{w\times h,\,1}(X) & \cdots & \sigma_{w\times h,\,w\times h}(X) \end{bmatrix} \tag{3}$$

Figure 4.

Robot navigation spaces. A representation of navigation segments, zones, and areas (S1, S2, S3, …, Sn; Z1, Z2, Z3, …, Zm; A1, A2, A3, …, An).

$\sigma_{ij}(X)$ represents the covariance between distances for location ($w$) and location ($h$). There is a relation between covariance coefficients and correlation coefficients. The covariance matrix $\rho$ is expressed as:

$$\rho = \frac{1}{j}\sum_{n=1}^{j} \lambda_n \lambda_n^{T} \tag{4}$$

Since principal components are calculated linearly, let $P$ be a transformation matrix:

$$Y = P^{T} X \quad \text{and} \quad X = P\,Y$$

In fact, $P^{T} = P^{-1}$, since the columns of $P$ are orthonormal to each other: $P^{T} P = I$. Now, the question is: what is the value of $P$, given the condition that $S_y$ must be a diagonal matrix, i.e.

$$S_y = Y Y^{T} = P^{T} X X^{T} P \quad\Rightarrow\quad S_y = P^{T} S_x P$$

in such a way that $S_y$ is a rotation of $S_x$ by $P$. Choosing $P$ as the matrix containing the eigenvectors of $S_x$:

$$S_x P = P \Lambda$$

where $\Lambda$ is a diagonal matrix containing the eigenvalues of $S_x$. In this regard,

$$S_y = P^{T} S_x P = P^{T} P \Lambda = \Lambda$$

and $S_y$ is a diagonal matrix containing the eigenvalues of $S_x$. Since the diagonal elements of $S_y$ are the variances of the components of the training patterns in the navigation space, the eigenvalues of $S_x$ are those variances.

This is further expanded into:

$$\rho = \begin{bmatrix} \beta_{1,1}-\bar{\beta}_1 & \cdots & \beta_{1,k}-\bar{\beta}_k \\ \vdots & \ddots & \vdots \\ \beta_{j,1}-\bar{\beta}_1 & \cdots & \beta_{j,k}-\bar{\beta}_k \end{bmatrix} \begin{bmatrix} \beta_{1,1}-\bar{\beta}_1 & \cdots & \beta_{j,1}-\bar{\beta}_1 \\ \vdots & \ddots & \vdots \\ \beta_{1,k}-\bar{\beta}_k & \cdots & \beta_{j,k}-\bar{\beta}_k \end{bmatrix} \tag{5}$$

Finally, covariance matrix ρ is further expressed by:

$$\rho = \begin{bmatrix} \operatorname{cov}(\beta_1,\beta_1) & \cdots & \operatorname{cov}(\beta_1,\beta_k) \\ \vdots & \ddots & \vdots \\ \operatorname{cov}(\beta_k,\beta_1) & \cdots & \operatorname{cov}(\beta_k,\beta_k) \end{bmatrix} \tag{6}$$

The matrix $\rho$ is symmetric (around the main diagonal) and belongs to $R^{k\times k}$. Each diagonal element represents the covariance of a variable with itself, i.e., its variance. To recognize normalized dataset patterns, the covariance $\rho$ of Eq. (6) plays an important role. This can be achieved by computing the eigenvectors of the covariance matrix $\rho$ of Eq. (6). Given this background, we therefore need to compute eigenvalues and eigenvectors using a numerical approach. For a $R^{k\times k}$ matrix $\rho$, we search for a vector $X \in R^{k\times 1}$ that, when multiplied by $\rho$, gives the same vector $X$ scaled by an eigenvalue $\lambda$. The matrix $\rho$ thus transforms its eigenvector $X$ by a scaling amount equal to $\lambda$:

$$\rho X = \lambda X \tag{7}$$

In reference to Eq. (7), for a $R^{k\times k}$ matrix $\rho$, we shall compute ($k$) eigenvalues. The ($k$) eigenvalues ($\lambda$) are then used for scaling the corresponding ($k$) eigenvectors. Individual eigenvalues ($\lambda$) are found by solving the identity expressed by Eq. (8):

$$\left(\rho - \lambda I\right) X = 0 \tag{8}$$

where $I$ is an identity matrix. We compute the determinant of Eq. (8), i.e., $\left|\rho - \lambda I\right| = 0$, while solving for the eigenvalues $\lambda$. Substituting each $\lambda$ into Eq. (8) and solving for $X$ results in the corresponding eigenvector $X$, once the $\lambda$ satisfy the following:

$$\left|\rho - \lambda I\right| = 0 \tag{9}$$

Following Eq. (5) to Eq. (9), computing the eigenvalues and reordering them in descending order represents a major step in building a PCA-based recognition system for the mobile robot dataset generated by the navigation system.
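The following minimal Python sketch summarizes this PCA pipeline of Eqs. (3)-(9): forming the covariance matrix, solving the eigenproblem, ordering the eigenvalues in descending order, and projecting the data onto the leading eigenvectors. Function and variable names are illustrative only:

```python
import numpy as np

def pca_features(X, m):
    """PCA of zero-mean data X (rows = navigation samples), following
    Eqs. (3)-(9): build the covariance rho, solve rho X = lambda X,
    order eigenvalues descending, keep the m leading eigenvectors,
    and project the data (per-sample y = P^T x)."""
    rho = np.cov(X, rowvar=False)      # covariance matrix, Eq. (6)
    lam, P = np.linalg.eigh(rho)       # eigenpairs of symmetric rho, Eq. (8)
    order = np.argsort(lam)[::-1]      # descending eigenvalue order
    P_m = P[:, order[:m]]              # m principal directions
    Y = X @ P_m                        # reduced representation
    return Y, lam[order], P_m
```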


4. Learning system: learning mobile robot navigation maps

4.1. Feature-based SLAM learning

Having stated in Section 3 the steps for PCA computation, in this section we focus on building a learning system for mobile navigation. In reference to Figure 4, the mobile robot generates a navigation dataset at different locations during its motion. The navigation dataset involves sensory measurements, odometry, and locality information (i.e., zones, areas, segments, …); refer to Figure 4. An important part of the dataset is also the part generated by the SLAM, as already described in Section 2. It is not feasible to encompass the entire dataset, as it is massive. Instead, we rely on features of the mobile dataset, i.e., the PCA-based features of the navigation dataset (the SLAM features). The navigation segments, zones, and areas are designated as (S1, S2, S3, …, Sn; Z1, Z2, Z3, …, Zm; A1, A2, A3, …, An). Each of these segments, zones, and areas has features associated with it, and those features are used during navigation for further processing. This processing includes a five-layer feature-learning NF architecture (classifier) and a fuzzy decision system (Figure 5).

Figure 5.

The recognition system: PCA for SLAM feature computation, a five-layer neuro-fuzzy classifier, and finally a fuzzy decision-based system.

4.2. Neuro-fuzzy features classifier (NFC) architecture

For the case of a fuzzy decision-making system, it is essential to incorporate a priori knowledge about the mobile robot's movements in the space. In this respect, many conventional approaches rely on in-depth physical knowledge describing the system. An issue with fuzzy decision making is that such knowledge is mathematically impervious: there is no formal mathematical representation of the system's behavior. This prevents the application of conventional empirical modeling techniques to fuzzy systems, making knowledge validation and comparison hard to perform. A benchmark measure of performance is therefore created by a minimum-distance classifier.

The decision rule adopted by such a system is to assign $X$ to the class whose mean feature vector is closest (in Euclidean distance) to $X$. The decision is given by:

$$\left\| d - \bar{d}_1 \right\| \le \left\| d - \bar{d}_2 \right\| \;\Rightarrow\; d \in h_1, \quad \text{else } d \in h_2 \tag{10}$$
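A hedged sketch of this benchmark classifier (Python, illustrative names) follows; it simply returns the index of the closest class mean in Euclidean distance, as in Eq. (10):

```python
import numpy as np

def min_distance_class(x, class_means):
    """Benchmark minimum-distance classifier of Eq. (10): assign the
    feature vector x to the class whose mean feature vector is closest
    in Euclidean distance. class_means: (K, d) per-class mean vectors."""
    d = np.linalg.norm(class_means - x, axis=1)   # distance to each mean
    return int(np.argmin(d))                      # index of closest class
```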

The rule-based structure of fuzzy knowledge allows heuristic knowledge to be integrated with information obtained from process measurements. The global operation of the system is divided into several local operating conditions. Within each region $R_i$, a representation is:

$$R_i: \quad \hat{y}_i(k) = \sum_{j=1}^{o} \chi_{ij}\, y(k-j) + \sum_{j=1}^{i} \psi_{ij}\, u(k-j), \qquad i = 1, 2, \ldots, h \tag{11}$$

In reference to Figure 6, for Eq. (11), $\hat{y}_i$ is the computed fuzzy output in the $i$th operating region, $u$ is the system input, and ($h$) is the number of fuzzy operating regions. In addition, ($i$) and ($o$) represent the time lags in the input and the output, respectively, and $\mu_i$ is the membership function. Finally, $\chi_{ij}$ and $\psi_{ij}$ are the model parameters. The membership function for the inputs can be constructed in a number of ways.

Figure 6.

NF classifier architecture. The architecture is used to classify features of navigation maps.

The fuzzy knowledge system (Neuro-fuzzy) illustrated in Figure 6 is a special network topology that combines the advantages of fuzzy reasoning and classical neural networks. In its broader sense, an architecture rule $r_i$ expresses a relation between the input map feature space and the named classes. This is expressed as follows:

$$\text{Rule } r_i: \quad \text{if } \chi_{s1} \text{ is } A_{i1} \text{ and } \ldots \text{ and } \chi_{sj} \text{ is } A_{ij} \text{ and } \ldots \text{ and } \chi_{sn} \text{ is } A_{in}, \text{ then the class name is } C_k \tag{12}$$

In Eq. (12), the Gaussian membership function is defined as:

$$\mu_{ij}\left(\chi_{sj}\right) = \exp\left(-\frac{\left(\chi_{sj} - c_{ij}\right)^2}{2\sigma_{ij}^2}\right)$$

$\mu_{ij}(\chi_{sj})$ is the membership grade of the $i$th rule and the $j$th feature. That is, the (if) parts of the rules are the same as in ordinary fuzzy (if-then) rules, while the (then) parts are combinations of the input variables. Each $i$th node in this layer is a square node with a node function:

$$\alpha_{is} = \prod_{j=1}^{n} \mu_{ij}\left(\nu_{sj}\right) \tag{13}$$

where $\nu_{sj}$ is the input to the $i$th node, given as the linguistic label (small, large, etc.) associated with this node function, and $n$ is the number of features. The membership is a bell-shaped type, ranging between 0 and 1.

$$O_{sk} = \frac{\beta_{sk}}{\sum_{l=1}^{K} \beta_{sl}} \tag{14}$$

As the values of these parameters change, the membership functions vary accordingly, thus exhibiting various forms of membership functions on the linguistic label $A_i$. Every node in this layer is a circle node, which multiplies the incoming signals and sends their product out. The stages of the adopted Neuro-fuzzy classifier are shown in Figure 6.

$$\chi_i = \mu_{A_i}\left(x_{k1}\right) \times \mu_{B_i}\left(y_{k2}\right), \qquad i = 1, 2 \tag{15}$$

The output node computes the system output as the summation of the incoming signals, i.e.:

$$X_i^{o} = \sum_i \bar{Y}_i f_i, \qquad X_i^{o} = \frac{\sum_i \bar{Y}_i f_i}{\sum_i \bar{Y}_i} \tag{16}$$

More precisely, the class label for the $s$th sample is obtained from the maximum $O_{sk}$ value as follows:

$$C_s = \max_{k = 1, 2, \ldots, K} O_{sk} \tag{17}$$
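To make the layer-by-layer computation concrete, the following Python sketch implements one forward pass of such a classifier, following Eqs. (12)-(17): Gaussian membership grades, product-based rule firing strengths, normalized class scores, and the maximum-score class label. All shapes and parameter names are assumptions for illustration, not the chapter's exact ones:

```python
import numpy as np

def nf_classify(x, c, sigma, w):
    """One forward pass of the neuro-fuzzy classifier, Eqs. (12)-(17).
    x: (n,) feature vector; c, sigma: (R, n) Gaussian membership centers
    and widths (one row per rule); w: (R, K) rule-to-class weights."""
    # Layers 1-2: Gaussian membership grades mu_ij (Gaussian MF above)
    mu = np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))
    # Layer 3: rule firing strengths as products over features, Eq. (13)
    alpha = mu.prod(axis=1)            # shape (R,)
    # Layer 4: class scores, then normalization, Eq. (14)
    beta = alpha @ w                   # shape (K,)
    O = beta / beta.sum()
    # Output: class with the maximum normalized score, Eq. (17)
    return int(np.argmax(O)), O
```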

The consequent parameters thus identified are optimal (in the consequent parameter space) under the condition that the premise parameters are fixed. The knowledge system's weights are conventionally identified by performing maximum likelihood estimation. Given a training dataset $Z^n = \{(y_k, x_k)\}_{k=1}^{n}$, the task is to find a weight vector which minimizes the following cost function:

$$J_n(w) = \frac{1}{n} \sum_{k=1}^{n} \left( y_k - \hat{y}(x_k, w) \right)^2 \tag{18}$$

Since the knowledge-based system output $\hat{y}(x_k, w)$ is nonlinear with respect to the weights, linear optimization techniques cannot be applied. The adopted Neuro-fuzzy system has a number of inputs ($n$), representing the features, and ($m$) outputs, representing the classes of features. In reference to Figure 7, there is a large dataset about the mobile robot's area and zone of navigation, due to the large amount of information coming from the visual system. Here comes the potential of employing PCA to reduce the dimensionality of the input spaces.
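Since gradient-based numerical optimization is the usual recourse in this situation, the following minimal sketch minimizes the cost $J_n(w)$ of Eq. (18) by gradient descent with central-difference numerical gradients; the learning rate, iteration count, and the `predict` callable are assumptions of this illustration, not the chapter's implementation:

```python
import numpy as np

def fit_weights(w0, X, y, predict, lr=0.01, iters=500, eps=1e-6):
    """Minimize J_n(w) of Eq. (18) numerically, since y_hat(x, w) is
    nonlinear in w. predict(X, w) -> (n,) predictions for all samples."""
    w = w0.astype(float).copy()

    def J(w):
        r = y - predict(X, w)
        return np.mean(r ** 2)        # mean squared error, Eq. (18)

    for _ in range(iters):
        g = np.zeros_like(w)
        for i in range(w.size):       # central-difference gradient
            dw = np.zeros_like(w)
            dw[i] = eps
            g[i] = (J(w + dw) - J(w - dw)) / (2 * eps)
        w -= lr * g                   # gradient descent step
    return w
```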

Figure 7.

Data from navigation spaces in different zones and areas (z1, z2, z3, …, zm; a1, a2, a3, …, an); they represent inputs to the PCA.

4.3. Fuzzy decision based system

The last stage of the mobile robot map learning system is the fuzzy decision system. Within this stage, the hard decisions are undertaken by the mobile robot during a course of navigation. A fuzzy system is typically constructed from the following rules:

$$D = G \cap C, \qquad \mu_D(a) = \min\left(\mu_G(a), \mu_C(a)\right), \; a \in A, \qquad a^{*} = \arg\max_{a \in A} \min\left(\mu_G(a), \mu_C(a)\right) \tag{19}$$

The rules' (if) parts are identical to those of ordinary fuzzy IF-THEN rules. Given fuzzy inputs $u = (u_1, u_2, \ldots, u_n)^{T}$, the output $\hat{y}(k)$ of a fuzzy system is computed as the weighted average of the $y^{l}$, that is:

$$\hat{y}(k) = \frac{\sum_{l=1}^{m} y^{l} w_l}{\sum_{l=1}^{m} w_l} \tag{20}$$

where the weights $w_l$ are computed as:

$$w_l = \prod_{i=1}^{n} \mu_{C_i^{l}}\left(x_i\right) \tag{21}$$

A dynamic TSK fuzzy system is constructed from the following rules:

$$\text{if } y(k) \text{ is } A_1^{p} \text{ and } \ldots \text{ and } y(k-n+1) \text{ is } A_n^{p} \text{ and } u(k) \text{ is } B^{p}, \text{ then } y^{p}(k+1) = a_1^{p}\, y(k) + \cdots + a_n^{p}\, y(k-n+1) + b^{p}\, u(k) \tag{22}$$

where $A_i^{p}$ and $B^{p}$ are fuzzy sets, $a_i^{p}$ and $b^{p}$ are constants, $p = 1, 2, \ldots, n$, $u(k)$ is the input to the system, and $x(k) = (x_1(k), x_2(k), \ldots, x_{n+1}(k))$ is the fuzzy system knowledge vector. Typically, the output of the fuzzy decision-based system is computed as:

$$x(k+1) = \frac{\sum_{p=1}^{n} x^{p}(k+1)\, v^{p}}{\sum_{p=1}^{n} v^{p}} \tag{23}$$

where $x^{p}(k+1)$ is given in Eq. (22) and:

$$v^{p} = \prod_{i=1}^{n} \mu_{A_i^{p}}\left(x_i(k)\right) \cdot \mu_{B^{p}}\left(u(k)\right) \tag{24}$$
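A minimal Python sketch of one inference step of such a dynamic TSK system, following Eqs. (22)-(24), is given below; Gaussian antecedent memberships are assumed, and all array shapes and names are illustrative rather than the chapter's own:

```python
import numpy as np

def tsk_output(x, u, A_c, A_s, B_c, B_s, a, b):
    """One dynamic TSK inference step, Eqs. (22)-(24). For each rule p,
    the firing degree v_p multiplies Gaussian memberships of the state
    history x = [y(k), ..., y(k-n+1)] and the input u(k); the rule
    output is the linear consequent a_p . x + b_p u of Eq. (22).
    Shapes: A_c, A_s: (P, n) antecedent centers/widths; B_c, B_s: (P,);
    a: (P, n); b: (P,); u: scalar."""
    mu_x = np.exp(-((x - A_c) ** 2) / (2 * A_s ** 2))   # (P, n) memberships
    mu_u = np.exp(-((u - B_c) ** 2) / (2 * B_s ** 2))   # (P,) input memberships
    v = mu_x.prod(axis=1) * mu_u                        # firing degrees, Eq. (24)
    y_p = a @ x + b * u                                 # rule consequents, Eq. (22)
    return float((y_p * v).sum() / v.sum())             # weighted average, Eq. (23)
```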

The mobile robot training datasets consist of four inputs and four output parameters. This is summarized in Tables 1 and 2.

     Inputs                          Outputs
1    x    Robot Zone                 O1    Behavior1
2    y    Robot Area                 O2    Behavior2
3    z    Robot Segment              O3    Behavior3
4    w    Delicate Observations      O4    Behavior4

Table 1.

Fuzzy decision based system input-outputs representation.

  1. First mobile stimulus. Input: identification of the zone of navigation (zone identification, and obstacles in zones z1, z2, z3, z4, z5, …, zm). Output: first mobile behavior, Behavior1 (rotate around, move robot forward, move robot backward, rotate right, rotate left, …).

  2. Second mobile stimulus. Input: identification of the area of locality (area identification, and obstacles in areas; obstacles in a1: area floor; obstacles in a2: Out_Corridor; obstacles in a3: building entry; obstacles in a4:). Output: second mobile behavior, Behavior2 (image focus, image capture, …, image processing of a scene).

  3. Third mobile stimulus. Input: identification of the segment of navigation (obstacles at different segments within an area). Output: third mobile behavior, Behavior3 (video recording, zooming with video capture).

  4. Fourth mobile stimulus. Input: mobile robot delicate sensory observations (rotate around, move robot forward, move robot backward, rotate right, rotate left, …). Output: fourth mobile behavior, Behavior4 (delicate mobile action).

Table 2.

Neuro-fuzzy classifier, input-outputs representation.


5. The experimentation

Within this section, we discuss a few experimental results. In order to implement the proposed navigation methodology, the (914 PC BOT) was re-engineered in such a way as to allow more control commands and observations to be communicated through it. The main high-level coding was achieved using Matlab. Matlab toolboxes were integrated to allow PCA computation, Neuro-fuzzy learning capabilities, and fuzzy decision-making routines. This is further indicated in Figure 8.

Figure 8.

Implementation system hierarchy.

5.1. Behaviour knowledge building

For building the mobile robot behaviour at localities, the mobile system was maneuvered over a space in the laboratory for several trials. Typical physical readings from the robot odometry and sensory observations were recorded, as shown in Figure 9. Different mobile behaviors (for learning) were also recorded, besides the odometry and sensory observations.

Figure 9.

Mobile robot real sensory observations, from experimentation. Datasets were collected through a number of runs for the PCA.

5.2. Navigation intelligence

Building the mobile robot navigation intelligence is the next phase. This phase requires blending all the previous inputs (readings, situations, and behaviors), which helps in taking the most appropriate actions. The designated learning and decision-making architecture is the Neuro-fuzzy system. Typical information constituting the Neuro-fuzzy classifier inputs is:

The classifier inputs:

  1. ZONES of navigation. This represents the typical zones where the mobile robot is located.

  2. AREAS of navigation. This represents the typical areas where the mobile robot is moving.

  3. SEGMENT of navigation. This represents the typical segment where the mobile robot is moving.

  4. OBSERVATIONS. This represents the typical observations the mobile robot is experiencing at a locality.

The classifier outputs:

  1. First mobile behavior. Behavior1: rotate around, move robot forward, move robot backward, rotate right, rotate left, ….

  2. Second mobile behavior. Behavior2: image processing of a scene, ….

  3. Third mobile behavior. Behavior3: video recording.

  4. Fourth mobile behavior. Behavior4: delicate mobile action.

With such inputs and system outputs, a good mixture of mobile behaviors can therefore be created.

The implementation system hierarchy is shown in Figure 8. An adequate level of mobile intelligence was created for mobile navigation within hazardous environments. Inputs to the Neuro-fuzzy decision-based system come from the PCA network.

5.3. Fuzzy if-then decision system

In addition, the four system inputs and outputs represent the actions the mobile robot should undertake in any particular situation. Relying on fuzzy (if-then) statements, we are able to make the final decision to be undertaken by the mobile robot. In this sense, we are able to build (if-then) statements as follows:

Typical Fuzzy Rules are:

  1. If (Input_#1 is ….. and Input_#2 is ….) then (Output_#1 is …. and Output_2 is ….) .. ...

  2. If (Input_#3 is ….. and Input_#2 is ….) then (Output_#4 is …. and Output_2 is ….) .. ...

  3. If (Mobile is in zone1 ….. and in area1 ….) then (do image FOCUS).

  4. If (Mobile is in zone1, ….. and in segment2 and, Mobile special task) then (set an ALARM and GAZE).

  5. If (Mobile is in zone3 and in area5 and segment3, and Image Capture), … then (do image analysis).

  6. If (Mobile is in zone41 and in area2 and, … and Special task) then (move back).
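A minimal sketch of how such if-then rules might be encoded and evaluated over the classifier's crisp outputs is given below; the dictionary keys, behavior names, and matching scheme are illustrative assumptions (an actual implementation would use the Matlab fuzzy toolboxes mentioned above):

```python
# Rule contents mirror the examples above; the evaluation is illustrative.
RULES = [
    ({'zone': 'zone1', 'area': 'area1'},                'image_focus'),
    ({'zone': 'zone1', 'segment': 'segment2',
      'task': 'special'},                               'alarm_and_gaze'),
    ({'zone': 'zone3', 'area': 'area5',
      'segment': 'segment3', 'event': 'image_capture'}, 'image_analysis'),
    ({'zone': 'zone41', 'area': 'area2',
      'task': 'special'},                               'move_back'),
]

def decide(state):
    """Return the behaviors of every rule whose antecedents all match
    the current navigation state (a dict of classifier outputs)."""
    return [action for cond, action in RULES
            if all(state.get(k) == v for k, v in cond.items())]

# Example: decide({'zone': 'zone1', 'area': 'area1'}) -> ['image_focus']
```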

Given the defined navigation strategy, the mobile robot is able to undertake much more detailed navigation and behavior tasks.


6. Conclusions

Learning mobile robot navigation behavior is an essential feature for improved navigation. In addition, it helps in gaining further understanding of the spaces of navigation. In this study, navigation map details have been created by relying on datasets collected by SLAM routines. Due to the enormous sensory and environmental observations to be analyzed during navigation, we have reduced the dimensionality and size of the environmental and sensory observation information with the PCA technique. The reduced environment information (i.e., the features) is then used as learning input to a neuro-fuzzy classifier. Examples of neuro-fuzzy feature inputs are navigation locations, areas, …, and behaviors related to particular localities. The final stage of mobile robot map building is a fuzzy decision-based system, within which mobile robot navigation decisions are undertaken. With multiple levels of mobile robot sensory and navigation observation datasets, we have designed a learning system for mobile robot map learning with navigation capabilities.

References

  1. Janglová D. Neural networks in mobile robot motion. International Journal of Advanced Robotic Systems. 2004;1(1):15-23. ISSN 1729-8806
  2. Raymond A, Tayib S, Hall L. Remote controlled, vision guided, mobile robot system. In: Intelligent Robots and Computer Vision XVI: Algorithms, Techniques, Active Vision, and Materials Handling, vol. 3208. Proceedings of SPIE. The International Society for Optical Engineering; 1997. pp. 126-132
  3. Gang P, Xinhan H, Jian G, Cheng L. Camera modeling and distortion calibration for mobile robot vision. In: Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA), WCICA'08. 2008. pp. 1657-1662
  4. Bonin-Font F, Ortiz A, Oliver G. Visual navigation for mobile robots: A survey. Journal of Intelligent and Robotic Systems. 2008;53:263-296. DOI: 10.1007/s10846-008-9235-4
  5. Abdul B, Robert S, Yahya K, Gloom M. A hybrid approach towards vision based self-localization of autonomous mobile robots. In: Proceedings of the International Conference on Machine Vision, ICMV-2007. 2007. pp. 1-6
  6. Araujo R. Prune-able fuzzy ART neural architecture for robot map learning and navigation in dynamic environments. IEEE Transactions on Neural Networks. 2006;17(4)
  7. Vlassis N, Papakonstantinou G, Tsanakas P. Robot map building by Kohonen's self-organizing neural networks. In: Proceedings of the 1st Mobinet Symposium; Athens, Greece. 1997
  8. Filliata D, Meyer J. Map-based navigation in mobile robots: I. A review of localization strategies. Cognitive Systems Research. 2003;4:243-282
  9. Miftahur R, Rahman M, Rizvi H, Haque Abul L, Towhidul IM. Architecture of the vision system of a line following mobile robot operating in static environment. In: Pakistan Section Multitopic Conference (INMIC 2005). 2005
  10. Tomohiro S, Yoshio M, Taichi K, Masayuki I, Hirochika I. Development and integration of generic components for a teachable vision-based mobile robot. IEEE/ASME Transactions on Mechatronics. 1996;1(3):230-236
  11. Ryosuke M, Fumiaki T, Fumio M. Skill acquisition of a ball lifting task using a mobile robot with a monocular vision system. In: IEEE International Conference on Intelligent Robots and Systems. 2006. pp. 5141-5146
  12. Vlassis N, Papakonstantinou G, Tsanakas P. Robot map building by Kohonen's self-organizing neural networks. In: Proceedings of the 1st Mobinet Symposium; Greece. 1997
  13. Changhan P, Sooin K, Joonki P. Stereo vision-based autonomous mobile robot. In: Intelligent Robots and Computer Vision XXIII: Algorithms, Techniques, and Active Vision, vol. 6/6. Proceedings of SPIE. The International Society for Optical Engineering; 2005
  14. Thrun S. Learning metric-topological maps for indoor mobile robot navigation. Artificial Intelligence Journal. 1998;99:21-71
  15. Nima F, Mohammad T, Sadaf S. Intelligent real time control of mobile robot based on image processing. IEEE Proceedings Intelligent Vehicles. 2007;IV:410-415
  16. Hall EL, Ghaffari M, Liao X, Alhaj ASM. Intelligent robot control using an adaptive critic with a task control center and dynamic database. In: Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision, vol. 6384. Proceedings of SPIE. The International Society for Optical Engineering; 2006
  17. Abdul B, Robert S, Jason G, Yahya K, Usman M, Muhammad H, et al. Stereo vision based self-localization of autonomous mobile robots. In: Robot Vision: Second International Workshop, RobVis, vol. 493-1 LNCS. 2008. pp. 367-380
  18. Hall L, Liao X, Alhaj AM. Learning for intelligent mobile robots. In: Proceedings of SPIE, vol. 5267. The International Society for Optical Engineering; 2003. pp. 12-25
  19. Ya-Chun C, Hidemasa K, Yoshio Y. Novel application of a laser range finder with vision system for wheeled mobile robot. In: Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM. 2008. pp. 280-285
  20. Nazari V, Naraghi M. A vision-based intelligent path following control of a four-wheel differentially driven skid steer mobile robot. In: Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision, ICARCV-2008. 2008. pp. 378-383
  21. Mohan S, Peter S. Color learning and illumination invariance on mobile robots: A survey. Robotics and Autonomous Systems. 2009;57(6-7):629-644
  22. Masayuki I, Fumio K, Satoshi K, Hirochika I. Vision-equipped apelike robot based on the remote-brained approach. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2. 1995. pp. 2193-2198
  23. Xiaoning C, Yuqing H, Gang L, Jin G. Localization algorithm of mobile robot based on single vision and laser radar. In: Proceedings of the 7th World Congress on Intelligent Control and Automation, WCICA'08. 2008. pp. 7661-7666
  24. Andreja K, Sanjin B, Ivan P. Mobile robot self-localization in complex indoor environments using monocular vision and 3D model. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM. 2007
  25. Takeshi S, Hashimoto H. Human observation based mobile robot navigation in intelligent space. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2006. 2006. pp. 1044-1049
  26. Manoj K, Ernest LH. Intelligent robot control using omnidirectional vision. In: Proceedings of SPIE, vol. 1825. The International Society for Optical Engineering; 1993. pp. 573-584
  27. Robot Cartography: ROS + SLAM. http://www.pirobot.org/blog/0015/
