Abstract
This chapter presents a trajectory-tracking model for a four-wheel robot, using an extended Kalman filter (EKF) estimator for visual odometry with a divergent trinocular visual sensor. The trinocular sensor is homemade, and a specific observer model was developed to measure 3D key-points by combining the multi-view cameras. The observer follows a geometric model, and the key-points are used as references for estimating the robot's displacement, which is computed by triangulating multiple pairs of environmental 3D key-points. The four-wheel drive (4WD) robot's inverse/direct kinematic control law is combined with the visual observer, the visual odometry model, and the EKF. The control law is used both to produce experimental locomotion statistical variances and as the prediction model in the EKF. The proposed dead-reckoning approach models the four asynchronous drives and the four damping suspensions. This chapter presents the deduction and validation of the models and formulations, as well as experimental results on posture state estimation comparing the four-wheel dead-reckoning model, the visual observer, and the EKF against an external global positioning reference.
Keywords
- 4WD
- visual odometry
- trinocular sensor
- EKF
- visual observer
- trajectory estimation
1. Introduction
Autonomous robots obtain precise information about their surroundings by deploying their sensing devices and developing perceptual tasks to accomplish useful missions. Intelligent robots need to concurrently execute multiple functions such as path planning, collision avoidance, self-localization, task scheduling, trajectory control, map building, environment recognition, kinematic/dynamic control, and so forth. Autonomous robots depend on multisensor fusion, the process of combining data from the physical sensors into a homogeneous data space.
This chapter presents the robot's visual odometry using sensor data obtained from a homemade radial multi-view device (Figure 1a). In this case, the trinocular sensing is divergent; hence, an inherent problem is the different perspective seen by each camera. In addition, the partial overlap between adjacent cameras allows sharing approximately
Parallel trinocular stereo systems have been deployed either to detect the ground [1] or to estimate motion [2]. There are reported works on motion estimation with binocular divergent systems [3], trinocular divergence for visual odometry [4], and divergent visual simultaneous localization and mapping (SLAM) [5]. Unlike active sensing modalities for localization [6] and concurrent localization and mapping with parallel multi-view [7], this chapter intends to estimate the posture of a rolling vehicle by exploiting the feedback of the rich data fusion that a divergent trinocular sensor provides. Numerous visual odometry algorithms have been reported, using stereo cameras [8], matching multi-frame features [9], and 3D point clouds [10]. Some outdoor visual odometry approaches for urban environments [11] estimate motion tracking by extracting visual feature points. There are numerous works combining the benefits of visual SLAM algorithms [12, 13, 14] with visual odometry [15], detecting geometrical features [16].
This chapter is organized into the following sections. Section 2 deduces the sensor fusion observer modeling the trinocular system geometry. Section 3 models the 4WD direct/inverse kinematic solutions. Section 4 deduces the visual odometry formulation and the EKF-based state estimation and control. Finally, conclusions are provided in Section 5.
2. Trinocular sensing model
This section describes the divergent multi-view geometric model, which basically combines the data of pairs of radially arranged cameras. In addition, this section presents an algebraic analysis of the lateral cameras' alignment and correction w.r.t. the central camera. The fundamental geometrical relationship of the system's divergence was experimentally studied by deploying a homemade prototype onboard a mobile robot; see Figure 2a. Cameras with homogeneous intrinsic parameters are assumed, and the cameras are mechanically fixed in an epipolar arrangement. The sensor model's purpose is to determine the depth information of a point in the scene
To calculate the Cartesian coordinates of
The complementary angles
In the triangle
Thus, for the other cameras’ pair, similar expressions are stated
Hence, the model to express depth information is given by
where,
In addition, the range between camera
Using the depth models
and being
Furthermore, the algebraic deduction along the
thus the following term is stated as
Therefore, the geometry vector model for camera
and the same model is enhanced for camera
Hence, the arbitrary point
The four points seen by the three cameras illustrate their transformation, experimentally evaluated at a 1-m distance between the robot and the marks.
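Although the chapter's exact depth expressions are elided above, the underlying two-ray triangulation that such a camera pair performs can be sketched as follows. This is a minimal illustration, assuming pinhole cameras at a known baseline with bearing angles measured from the baseline; the function name and frame convention are illustrative, not taken from the chapter.

```python
import math

def triangulate(b, theta_l, theta_r):
    """Intersect two bearing rays cast from camera centers at (0, 0)
    and (b, 0); theta_l and theta_r are the ray angles (radians)
    measured from the +x axis (the baseline).  Returns the (x, z)
    crossing point, i.e., the 3D key-point projected onto the plane."""
    denom = math.sin(theta_r - theta_l)
    if abs(denom) < 1e-12:
        raise ValueError("parallel rays: no finite depth")
    t = b * math.sin(theta_r) / denom   # range along the left ray
    return (t * math.cos(theta_l), t * math.sin(theta_l))
```

For the symmetric (equilateral) configuration, both bearings differ by 60 degrees from the baseline and the recovered point sits at half the baseline laterally, which is a convenient sanity check for the divergence geometry.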
3. 4WD dead-reckoning controller
Since the visual trinocular approach uses an exteroceptive sensor, we decided to challenge its detection and tracking capabilities with a robot having high holonomic properties. A 4WD robot's locomotion is prone to frequent swift turns that result in numerous slippages. Thus, a 4WD robot has to depend more on exteroceptive than on inner measurements; comparatively, inner 4WD odometry differs greatly from external visual measurement when inferring posture. The proposed dead-reckoning system obtains speed measurements by deploying odometer readings of the four asynchronous drives (Figure 2). A 4WD system is considerably different from a conventional differential dual-drive approach. Moreover, four passive mass-spring-damper suspensions are included in this system (Figure 2b), which vary the inter-wheel distances over time. In particular, the robot's 4WD and passive suspensions make the posture observations challenging.
The robot’s dead-reckoning model is fundamental to sense and control position used as feedback, providing motion description as a kinematic reference to match the visual observations when estimating the robot’s motion. The positioning and trajectory control [17], as well as the type of kinematic analysis [18] and the dynamic suspension [19] in this type of robot have been previously reported. The robot’s instantaneous speed
where the wheel’s angular speed
Further, the differential velocity
This model describes that the rotary motion of
The previous equation expresses the conservation of angular motion, and the wheel’s contact point turns w.r.t. the robot’s center,
where for each length
as well as
Thus, substituting
The longitudinal contact point’s distance
where
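As a rough illustration of the dead-reckoning idea (not the chapter's full model, which also accounts for the suspension-varying inter-wheel distances), a skid-steer approximation that integrates the four encoder rates into a planar pose could look like this; all names and the constant track width are assumptions of the sketch.

```python
import math

def integrate_pose(pose, wheel_omegas, r, W, dt):
    """One dead-reckoning step for a 4WD skid-steer base.

    pose: (x, y, theta); wheel_omegas: (fl, fr, rl, rr) in rad/s;
    r: wheel radius; W: track width (assumed constant here, although
    the chapter lets it vary with the suspension deflection)."""
    fl, fr, rl, rr = wheel_omegas
    v_l = r * (fl + rl) / 2.0          # mean left-side linear speed
    v_r = r * (fr + rr) / 2.0          # mean right-side linear speed
    v = (v_l + v_r) / 2.0              # body forward speed
    w = (v_r - v_l) / W                # yaw rate
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)
```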
From Figure 2a, the vertical motion
where the elastic spring restitution coefficient is
hence
with integration constant
Substituting the previous expression in (22),
and by algebraically simplifying, the characteristic equation is
and its analytic solution is
As we assume a critically damped system,
Therefore, the damping motion is analytically solved by
where
and
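The critically damped solution can be checked numerically. The sketch below assumes zero initial vertical velocity and writes the natural frequency as wn = sqrt(k/m); the symbols are chosen here for illustration and are not the chapter's notation.

```python
import math

def suspension_response(z0, wn, t):
    """Free response of a critically damped mass-spring-damper.

    z0: initial compression; wn: natural frequency sqrt(k/m).
    With zero initial velocity, the double root of the characteristic
    equation gives z(t) = (z0 + wn*z0*t) * exp(-wn*t)."""
    return (z0 + wn * z0 * t) * math.exp(-wn * t)
```

The response starts at z0, has zero initial slope, and decays without oscillation, which is the behavior expected from a critically damped suspension.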
There is a maximal allowable Z-turn displacement speed
In addition, to inversely solve this matrix system, the analytical solution represents the vector of independent control rotary variables
and
The matrix form of the inverse analytical solution for all wheels’ speed under damping variations is stated as
where
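A reduced form of the inverse solution (ignoring the damping-induced variation of the track width W, which the chapter's matrix form includes) maps a desired body twist to the four wheel rates; the interface below is a sketch, not the chapter's matrix equation.

```python
def wheel_speeds(v, w, r, W):
    """Inverse kinematic sketch: body twist (v, w) -> four wheel rates.

    v: forward speed (m/s); w: yaw rate (rad/s); r: wheel radius;
    W: track width.  Returns (fl, fr, rl, rr) in rad/s, with both
    wheels on a side sharing one rate (rigid skid-steer assumption)."""
    wl = (v - w * W / 2.0) / r
    wr = (v + w * W / 2.0) / r
    return (wl, wr, wl, wr)
```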
4. State estimation and feedback position control
This section formulates a deterministic geometric model for visual odometry and the state estimation by an EKF. The proposed model combines pairs of key-points at times
In Figure 3a, the instantaneous angle
and such key-points’ distance
The angle
However, at time
with the key-point’s distance
which is used to obtain the angle
Further, the differential angle
The triangulation angle
and the orientation angle for each reference
which is required to know the
as well as the
When obtaining numerous key-point pairs simultaneously, the total robot’s displacement is an averaged value of the displacements yielded by all key-point pairs,
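The averaging step can be sketched as follows, assuming each key-point pair is the same static landmark expressed in the robot frame at two consecutive times; this simplifies the chapter's triangulation-based displacement, and the names are illustrative.

```python
import math

def displacement_from_pairs(pairs):
    """Average the planar displacement implied by key-point pairs.

    pairs: list of ((x0, y0), (x1, y1)) -- one static landmark in the
    robot frame at times t and t+1.  For a static landmark, p0 - p1 is
    the robot's own displacement; averaging over all pairs damps the
    triangulation noise of any individual key-point."""
    n = len(pairs)
    dx = sum(p0[0] - p1[0] for p0, p1 in pairs) / n
    dy = sum(p0[1] - p1[1] for p0, p1 in pairs) / n
    return dx, dy, math.hypot(dx, dy)
```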
Therefore, without loss of generality, for state estimation, let us assume a nonlinear robot’s model state vector
where the state vector is
by developing the dot product from previous expression, we obtain
The measurement model requires the displacements that were inferred through key-point triangulation
where
Thus, the linearized models of the process and measurement are defined next in (54) and (55), such that
and
In addition, the EKF’s prediction models (56) and the correction models (57) are formulated and linearized as
and
Moreover, the recursive Kalman gain for system convergence is
and the state vector of the system is described by
with covariance matrix of the system
Thus, hereafter, the vector and matrix models describing the proposed robot’s system are formulated and incorporated into the conventional EKF. Let us define the robot’s pose vector
Therefore, from the displacement equation (46), which arises from exteroceptive observations, the robot’s Cartesian displacements are
and
as well as
By substituting an averaged Cartesian displacement, one considers
Thus, a measurement is a 3D point arising from either divergent pair Eq. (10) or (11) and deployed by Proposition 1.1. Thus, the robot’s measurement vector model
The process noise covariance matrix
Let us define the nonstationary covariance matrix
the matrix diagonal variances are experimental measurements that describe the trend of the robot motion’s error.
The robot's motion covariance matrix was obtained experimentally through 500 tests (straight motion, right turns, left turns, clockwise and counterclockwise rotations), with
and
as well as the robot’s yaw statistical measurement model
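The diagonal variances can be estimated from the trial errors in the usual way; a minimal sketch is given below (the numbers in the example are placeholders, not the chapter's measured values).

```python
def sample_variance(errors):
    """Unbiased sample variance of per-trial positioning errors,
    used to fill one diagonal entry of the motion covariance matrix."""
    n = len(errors)
    mean = sum(errors) / n
    return sum((e - mean) ** 2 for e in errors) / (n - 1)
```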
Furthermore, the measurement noise covariance matrix is
and the matrix
The matrix
Let us summarize the 3D points
4.1. State feedback position control
This section describes, in six general steps, the combined use of the visual observers and the EKF geometric odometer as recursive feedback for the robot's positioning control. The robot's deterministic kinematic model conveys predictions about the robot's geometry of motion and its observations. Therefore, the deterministic model is used to infer the robot's motion observations implicitly through the trinocular sensor. The following formulation illustrates how the EKF and the visual odometry model feed back into the 4WD kinematics.
4.1.1. Kalman gain
The initial estimate of Kalman gain is
4.1.2. Observation
From Proposition 1.1, the visual observers provide
The angle of each key-point
and the local angle of the robot w.r.t. the robot’s previous position is
thus the inferred displacement is
Therefore, the observation vector with Gauss noise
4.1.3. Update estimate
The update estimate is obtained by
4.1.4. Update error covariance
The covariance matrix error dispersion of the system is updated
4.1.5. Deterministic control model
Therefore, the prediction is firstly obtained through the robot’s inverse position control model, from the inverse kinematics equation, Eq. (38)
where, in the previous expression,
This step converges until
4.1.6. State prediction
It follows that state prediction is
and the error dispersion covariance matrix is also predicted at
From the previous step, the estimation process repeats again, going to step one. The previous Kalman process is performed until the robot reaches the goal and the estimation error converges by numerical approximation according to
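The six steps above condense into one recursive cycle. The scalar skeleton below (identity process and measurement models, F = H = 1) is only a sketch of the loop's shape; the chapter's filter runs the same cycle over the full pose vector with the linearized models of (54) and (55).

```python
def ekf_step(x, P, z, u, Q, R):
    """One cycle of the six-step loop for a scalar state.

    x, P: prior estimate and its variance; z: observation; u: control
    displacement; Q, R: process and measurement noise variances."""
    # 4.1.1 Kalman gain
    K = P / (P + R)
    # 4.1.2-4.1.3 observation and update estimate
    x = x + K * (z - x)
    # 4.1.4 update error covariance
    P = (1.0 - K) * P
    # 4.1.5-4.1.6 deterministic control model and state prediction
    x = x + u
    P = P + Q
    return x, P
```

Iterating the cycle with a fixed observation drives the estimate toward it while the variance settles at the fixed point of the scalar Riccati recursion, mirroring the bounded-error behavior reported for the full filter.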
Therefore, Figure 4a shows the robot's trajectory obtained by the different comparative approaches conducted in this study. The postures measured by an external visual global reference system are the main references for comparison. The EKF estimation was obtained by the use of Theorem 1, Proposition 1.1, and Eqs. (71)-(77). In addition, the trinocular key-points used as inputs of the visual odometry model inferred the robot's displacements, which are also shown in Figure 4a. Furthermore, the dead-reckoning robot system was deployed to infer the robot's postures, likewise shown in Figure 4a. Raw odometry refers to the robot's dead-reckoning kinematic model used as a means for direct posture observation through the direct kinematics (33) and the inverse kinematics (38), but using direct encoder readings via (14).
Figure 4b shows the dead-reckoning and EKF Cartesian absolute errors, taking the visual global reference system as the main reference. For the direct dead-reckoning measurements, the absolute error grows exponentially, and the position observation starts diverging before the robot reaches the third turn. For the EKF model, the Cartesian error w.r.t. the global reference does not diverge but preserves bounded error magnitudes.
Figure 4c and d show the EKF's Cartesian and angular absolute errors w.r.t. the global visual tracker. In Figure 4d, the local minima and maxima determine the Cartesian regions where the robot performed its turns.
Finally, Figure 5a shows the covariance error behavior obtained at each control loop during the EKF recursive calculations. Figure 5b is a mapping of the measured key-points registered using the state vector (posture) of a robot’s turn to illustrate the map’s divergence.
5. Conclusion
This chapter presented a visual odometry scheme for a divergent trinocular visual system, combined with an EKF for motion estimation. The proposed trinocular geometric model observer geometrically combined adjacent radial views. About
The robot's trajectory was obtained by the different comparative approaches conducted in this study. The postures were measured by an external visual global reference system, which served as the main reference for comparison. The robotic platform's kinematics was modeled with a dead-reckoning approach. The direct and inverse solutions were combined to produce a recursive linearized control model, which was used as the prediction model for the EKF estimator. The dead-reckoning robot system was deployed to infer the robot's postures directly from the four encoders' readings, with good results obtained only for very short paths. From a comparative perspective, using only the 4WD dead-reckoning system, the posture diverged exponentially.
We found bounded Cartesian error for this 4WD robot by deploying the EKF. The trinocular 3D key-points were used as inputs of the visual odometry model that inferred the robot’s displacements by geometrical triangulations.
References
- 1. Milella A, Reina G. Towards autonomous agriculture: Automatic ground detection using trinocular stereovision. Sensors. 2012;12(9):12405-12423
- 2. Gwo-Long L, Chi-Cheng C. Acquisition of translational motion by the parallel trinocular. Information Sciences. 2008;178:137-151
- 3. Tae Choi B, Kim J-H, Jin Chung M. Recursive estimation of motion and a scene model with a two-camera system of divergent view. Pattern Recognition. 2010;43(6):2265-2280
- 4. Jaeheon J, Correll N, Mulligan J. Trinocular visual odometry for divergent views with minimal overlap. In: IEEE Workshop on Robot Vision; 2013. pp. 229-236
- 5. Dellaert F, Kaess M. Probabilistic structure matching for visual SLAM with a multi-camera rig. Computer Vision and Image Understanding. 2010;114(2):286-296
- 6. Lee C-H, Kwon S, Lee J-h, Lim Y-C, Lee M. Improvement of stereo vision-based position and velocity estimation and tracking using a stripe-based disparity estimation and inverse perspective map-based extended Kalman filter. Optics and Lasers in Engineering. 2010;48(9):859-886
- 7. Harmat A, Wang DWL, Sharf I, Waslander SL, Tribou MJ. Multi-camera parallel tracking and mapping with non-overlapping fields of view. The International Journal of Robotics Research. 2015;34(12):1480-1500
- 8. Seet G, Wei M, Han W. Efficient visual odometry estimation using stereo camera. In: 11th IEEE International Conference on Control and Automation; 2014. pp. 1399-1403
- 9. Kanade T, Badino H, Yamamoto A. Visual odometry by multi-frame feature integration. In: IEEE International Conference on Computer Vision Workshops; December 2013. pp. 222-229
- 10. Liang Z, Xu X. Improved visual odometry method for matching 3D point cloud data. In: 33rd Chinese Control Conference; July 2014. pp. 8474-8478
- 11. Llorca DF, Parra I, Sotelo MA, Ocaña M. Robust visual odometry for vehicle localization in urban environments. Robotica. 2010;28:441-452
- 12. Rendon-Mancha JM, Fuentes-Pacheco J, Ruiz-Ascencio J. Visual simultaneous localization and mapping: A survey. Artificial Intelligence Review. 2015;43(1):5581
- 13. Neira J, Cisneros R, Lavion JE, Hernandez E, Ibarra JM. Visual SLAM with oriented landmarks and partial odometry. In: 21st International Conference on Electrical Communications and Computers; February-March 2011. pp. 39-45
- 14. Hong S, Ye C. A pose graph based visual SLAM algorithm for robot pose estimation. In: World Automation Congress; August 2014. pp. 917-922
- 15. Williams B, Reid I. On combining visual SLAM and visual odometry. In: IEEE International Conference on Robotics and Automation; May 2010. pp. 3494-3500
- 16. Huitl R, Schroth G, Kranz M, Steinbach E, Hilsenbeck S, Moller A. Scale-preserving long-term visual odometry for indoor navigation. In: International Conference on Indoor Positioning and Indoor Navigation; November 2012. p. 110
- 17. Martínez-García EA, Lerin E, Torres-Cordoba R. A multi-configuration kinematic model for active drive/steer four-wheel robot structures. Robotica. 2015;33:2309-2329. Cambridge University Press
- 18. Martinez-Garcia E, Mar A, Torres-Cordoba T. Dead-reckoning inverse and direct kinematic solution of a 4W independent driven rover. In: IEEE ANDESCON; 15-17 September 2010; Bogota, Colombia
- 19. Martinez-Garcia EA, Torres-Cordoba T. 4WD skid-steer trajectory control of a rover with spring-based suspension analysis. In: International Conference on Intelligent Robotics and Applications (ICIRA 2010); 10-12 November 2010; Shanghai, China