Open access peer-reviewed chapter

Accurate and Robust Localization in Harsh Environments Based on V2I Communication

By Javier Prieto, Santiago Mazuelas, Alfonso Bahillo, Patricia Fernández, Rubén M. Lorenzo and Evaristo J. Abril

Submitted: April 24th, 2012. Reviewed: November 27th, 2012. Published: February 13th, 2013.

DOI: 10.5772/55263


1. Introduction

With the arrival of global navigation satellite systems (GNSS), in-car navigation has increasingly become an essential tool for the automotive industry. However, the performance of GNSS is compromised in harsh environments where there is not a line of sight (LOS) to satellites, e.g., tunnels, covered parking areas and dense urban canyons [1]. Hence, in-car navigation requires a localization technology that operates with robustness in such circumstances. The development of vehicular ad-hoc networks (VANETs) provides a promising platform to fulfill this requirement [2].

In VANETs, an on-board unit (OBU) inside the vehicle communicates with other OBUs or with stationary roadside units (RSUs), in vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, respectively [3]. Cooperation between OBUs can provide good position estimates in V2V communication [4-5]. However, the quick topology changes required by V2V approaches make V2I communication the preferred option for in-car navigation in harsh environments [6]. In V2I communication, the position of an OBU (the target) can be estimated from range-related measurements taken on the radio-frequency signals transmitted to and from the RSUs (the anchors) [7]. However, the changeable and unpredictable characteristics of the wireless channel in harsh environments make multipath and non-line-of-sight (NLOS) propagation conditions predominant [8-9]. Therefore, conventional positioning systems designed for tractable and static signal behavior cannot guarantee an adequate performance.

The position information extracted from the radio-frequency signals varies according to the type of measurement taken. Techniques based on time of arrival (TOA) [9-10] or received signal strength (RSS) [11-12] measurements obtain range-related information, whereas techniques based on angle of arrival (AOA) or time difference of arrival (TDOA) measurements extract information related to directions or differences of distances, respectively [13-14]. AOA and TDOA measurements entail significant costs of antenna-array integration or synchronizing devices. In this chapter, we focus on RSS and TOA measurements, which can provide accurate localization with an appropriate complexity.

Range or position estimation is an inference problem where the observations are the RSS and TOA measurements [15-16]. From a Bayesian perspective, determining the posterior distribution of ranges or positions from observations is the optimal approach [17-25]. Then, ranges or positions can be obtained by means of the maximum a posteriori (MAP) or the minimum mean square error (MMSE) estimators.

The optimality of the above-mentioned methods depends on the fit between the model assumed for the relationship between measurements and ranges or positions (i.e., the likelihood function) and the actual behavior of the measurements. Tractable and static models for the likelihoods based on Gaussian distributions accurately explain the behavior of measurements only in open areas [26-28]. For harsh environments, several techniques have been developed to address the complex behavior of wireless signal metrics. In the TOA case, the NLOS bias causes range overestimation. Thus, a common procedure is to detect and remove NLOS measurements [29]; other techniques utilize prior knowledge about this NLOS error to subtract it and adjust the measurements to their LOS values [27,30]. In the RSS case, the performance depends on the estimation of the parameters that characterize the propagation channel at each time [12,26]. Certain approaches deal with the dynamic nature of the RSS metric through fingerprinting or machine learning [11,21,31]; however, their accuracy is sensitive to fast environmental changes and they do not fuse different signal metrics.

Range and position estimation can be improved by exploiting the relationship among positions in time through Bayesian filtering. Kalman filtering techniques rely on Gaussian models that are not adequate for harsh environments. Different alternative methods based on variations of such filters, as well as on particle filters (PFs), have been proposed: low-complexity non-linear/non-parametric adaptive modeling is used for filtering of RSS fingerprints in [11,21]; recursive Bayesian estimation together with multipath and NLOS propagation effects are considered in [22-23]; TOA and RSS data fusion is performed in [32-34]; hybrid information is exploited by particle filtering in [24]; and RSS/TOA Bayesian fusion for multipath and NLOS mitigation is performed in [25]. However, these methods require prior information achieved by arduous training phases or rely on assumptions unrealistic for harsh environments, such as Gaussian and static models.

This chapter presents a framework for adaptive data fusion to handle the difficulties described above, based on non-parametric dynamic modeling of the likelihood. The subsequent usage of a PF leads to the adaptive likelihood particle (ALPA) filter. As we show, the estimation can be carried out without requiring any calibration stage, thus enabling localization capabilities in pre-existing wireless infrastructures, such as VANETs based on V2I communication. The main contributions of this chapter are as follows:

  • We present techniques for adaptive and systematic modeling of the relationship between measurements and positions, by means of a dynamic and empirical likelihood function.

  • We present a model for Bayesian fusion of TOA and RSS measurements, based on nonlinear and non-Gaussian Bayesian filtering and the likelihoods derived over time.

  • We show the suitability of the proposed techniques by experimentation performed using common wireless local area network (WLAN) devices.

  • We show the near-optimality of the method by comparing its performance to the posterior Cramér-Rao lower bound (CRLB).

Both empirical and simulation results show that the proposed methods significantly improve the accuracy of conventional approaches with an important reduction in the number of measurements needed.

The structure of the rest of this chapter is as follows: Section II defines the position estimation problem; Section III addresses this problem under a hidden Markov model (HMM) and defines the dynamic and measurements models; Section IV presents the adaptive data fusion technique for likelihood modeling and the recursive Bayesian approach for solving the resulting non-linear and non-Gaussian problem; Section V shows the experimental and simulation results; Section VI includes a discussion on complexity; and finally, Section VII draws the conclusions.

Notations: p_x denotes the probability density function (pdf) of the random variable x; f^(m) denotes the m-th derivative of a real function f evaluated in its argument; f[k], for k ∈ ℕ, denotes the value of the function f evaluated in t_k ∈ ℝ; X[k] denotes the set {x[i], i = 1, …, k}; if M is a positive integer, 𝕄^M denotes the M-th Cartesian power of {1, …, M}; finally, z̄ denotes the sample mean of the components of a vector z.

2. Problem statement

In the following, we consider a two-dimensional scenario where a mobile target (e.g., a car equipped with an OBU) moves freely. To determine its position, the target communicates with several anchors (the RSUs) with known positions. Since the localization system can get measurements at discrete times {t_k, k ∈ ℕ}, we are interested in estimating the sequence x[k], k ∈ ℕ, from a sequence of measurements z[k], k ∈ ℕ. The entries of vector x[k] can be the distances between the target and each anchor or the coordinates of the mobile target’s position. The entries of vector z[k] are RSS and TOA measurements.

The next section establishes the probabilistic relationship between vectors x[k] and z[k] by modeling this problem as an HMM, and defining the state vector, y[k], that consists of vector x[k] and several of its derivatives.

3. Hidden Markov model

In addition to the information conveyed by the measurements, the fact that the sequence x[k], k ∈ ℕ, is highly correlated in time can likewise be used as another source of information. The position of the target cannot change abruptly in a small lapse of time; hence, we can model the evolution in time of positions or Euclidean distances to each anchor as an analytic function. Let x(t) be a component of the position or the distance to an anchor; we can approximate its value in t_{k+1} by using the n-th-order Taylor expansion in t_k,

x[k+1] ≈ x[k] + x'[k] Δt + x''[k] Δt²/2 + ⋯ + x^(n)[k] Δt^n/n!     (1)

where Δt = t_{k+1} − t_k ∈ ℝ is the sampling interval. The error in this approximation is

x^(n+1)(ξ₀) Δt^(n+1)/(n+1)!

where ξ₀ ∈ ℝ is a point in the interval [t_k, t_{k+1}]. Therefore, the error in the approximation (1) depends directly on Δt, on the smoothness of x(t) (represented by the (n+1)-th derivative), and on the order of the approximation.
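As a concrete illustration, the Taylor-based prediction in (1) can be sketched in a few lines of Python; the numeric state values below (distance, velocity, acceleration) are purely hypothetical.

```python
import math

def taylor_predict(derivs, dt):
    """Predict x(t_{k+1}) from x(t_k) and its first n derivatives
    via the n-th order Taylor expansion in (1).
    derivs = [x, x', x'', ..., x^(n)] evaluated at t_k."""
    return sum(d * dt**m / math.factorial(m) for m, d in enumerate(derivs))

# Hypothetical range state: 10 m, closing at 2 m/s, accelerating at 0.5 m/s^2
print(taylor_predict([10.0, 2.0, 0.5], dt=1.0))  # 10 + 2*1 + 0.5/2 = 12.25
```

For a smooth trajectory and a short sampling interval, the truncated terms are small, which is exactly the error bound discussed above.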

The correlation in time expressed in (1) implies that x[k], k ∈ ℕ, is not a Markov chain, i.e., the current distance or position depends not only on the previous one. However, calling y[k] the positional-state consisting of the distance or the position and its n derivatives, we can assume that y[k], k ∈ ℕ, is a Markov chain. Moreover, we can likewise assume that, conditioned on y[k], k ∈ ℕ, z[k], k ∈ ℕ, is a sequence of independent random variables, i.e., given the current positional-state vector, the measurements z[k] are independent of all previous and future positional-states and measurements [18]. These assumptions allow us to build an HMM in which the positional-state vectors y[k], k ∈ ℕ, form a non-observable Markov chain, and what is available is the other stochastic process z[k], k ∈ ℕ, linked to the Markov chain in that y[k] governs the distribution of z[k] [35] (see Figure 1).

Figure 1.

Hidden Markov model for positional-states and measurements evolution. The relationship between y[k] and y[k−1] and the relationship between z[k] and y[k] are the only two kinds of dependence.

The conditional independence assumptions reflected in Figure 1 lead to two kinds of dependence between the random variables [36],

  • Dynamic model: establishes the relationship between the state vector in time t_k and the state vector in time t_{k−1}, i.e., p(y[k]|y[k−1]).

  • Measurements model: establishes the relationship between the measurements and the state vector in each time, i.e., p(z[k]|y[k]).

Then, the joint distribution of all the random variables involved in the process is given by,

p(Y[k], Z[k]) = p(y[1]) p(z[1]|y[1]) ∏_{i=2}^{k} p(y[i]|y[i−1]) p(z[i]|y[i]) = p(Y[k−1], Z[k−1]) p(y[k]|y[k−1]) p(z[k]|y[k])     (2)

The modeling as an HMM shown in (2) makes it possible to infer the posterior distribution p(y[k]|Z[k]) through a recursive process. In the specific case where the dynamic and measurements models are linear-Gaussian, the posterior distribution is also Gaussian, and the Bayesian inference can be optimally performed by the celebrated Kalman filter (KF) [19]. In the following, we describe these models for harsh propagation environments, showing that the dynamic model can be assumed linear with wide generality, whereas this assumption for the measurements model yields inaccurate performance.

3.1. Dynamic model

The dynamic model of the positional-state vector can be obtained from the evolution in time given by (1), and by approximating each m-th derivative, for m = 1, …, n, by its (n−m)-th-order Taylor expansion, as

y[k+1] = F[k] y[k] + n_d[k]     (3)

where

F[k] = ⎡ 1   Δt   Δt²/2   ⋯   Δt^n/n!           ⎤
       ⎢ 0   1    Δt      ⋯   Δt^(n−1)/(n−1)!   ⎥
       ⎢ ⋮         ⋱    ⋱         ⋮             ⎥
       ⎢ 0   0    ⋯     1        Δt             ⎥
       ⎣ 0   0    ⋯     0        1              ⎦     (4)

is the transition matrix, and n_d[k] is the error in the approximations. For example, in the case of estimating a one-dimensional parameter, x[k], the error n_d[k] is given by

n_d[k] = ( Δt^(n+1)/(n+1)! · x^(n+1)(ξ₀),  Δt^n/n! · x^(n+1)(ξ₁),  …,  Δt · x^(n+1)(ξₙ) )ᵀ     (5)

where ξ₀, …, ξₙ are values in the interval [t_k, t_{k+1}]. The values taken by the (n+1)-th derivative of x(t) in the unknown points ξ₀, …, ξₙ are modeled as realizations of a random variable that can be assumed to be a zero-mean Gaussian with standard deviation σ_d^(n+1) [18-19]. Then, we can model the evolution in time of x[k] as a random walk. Therefore, the dynamic model is a discrete Wiener process velocity (DWPV) model or a discrete Wiener process acceleration (DWPA) model if we use the second- or third-order Taylor expansion, respectively [18]. Hence, the dynamic model, p(y[k]|y[k−1]), can be assumed linear-Gaussian.
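The transition matrix in (4) and the resulting noiseless propagation of (3) can be sketched as follows; the state values are hypothetical.

```python
import math
import numpy as np

def transition_matrix(n, dt):
    """Upper-triangular Taylor transition matrix F[k] of (4) for a
    state holding a quantity and its first n derivatives."""
    F = np.eye(n + 1)
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            F[i, j] = dt ** (j - i) / math.factorial(j - i)
    return F

# DWPA-style state (n = 2): distance, velocity, acceleration (hypothetical)
F = transition_matrix(2, dt=1.0)
y = np.array([10.0, 2.0, 0.5])
y_next = F @ y    # noiseless part of (3); the error n_d[k] would be added here
```

With n = 2 and Δt = 1 s, each state component is propagated by its own truncated Taylor series, as in the DWPA model cited above.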

3.2. Measurements model

The second ingredient to characterize the HMM is the measurements model or likelihood, p(z[k]|y[k]). This probability distribution relates the measurements to the positional-state. In the case of range-related measurements, we have that p(z[k]|y[k]) = p(z[k]|d[k]), irrespective of the positional-state used. In the following, we describe realistic models for the relationship between distances and RSS/TOA measurements in concordance with previous studies [10,12].

3.2.1. RSS measurements

In a given instant and place, the RSS values are affected by the distance between emitter and receiver. The attenuation caused by the distance between two nodes is known as path loss and is proportional to this distance raised to a certain exponent, called the path-loss exponent [7,12,15,26]. However, the RSS values are likewise affected by a wide range of unpredictable factors, such as multipath propagation (fast fading) and shadowing (slow fading) [37]. By reflecting these factors in the Friis transmission equation for free space, the relationship between the received signal strength, P_r, and the distance, d[k], is given by [26],

P_r = P_t G_t G_r (λ/4π)² g² γ (d[k])^(−β_s)     (6)

where P_t is the transmitted power, λ is the wavelength, G_t and G_r are the transmitter and receiver gains, respectively, g and γ are the parameters of the Rayleigh/Rician and log-normal distributions that model the fast and slow fading, respectively, and β_s is the path-loss exponent corresponding to the specific propagation environment [37].

By following the procedure described in [26] and taking logarithmic units, we obtain the measurements model for RSS values,

z_s[k] = α_s − 10 β_s log₁₀ d[k] + n_s[k]     (7)

where z_s[k] ∈ ℝ is the measured RSS value and α_s is a constant that depends on P_t, G_t, G_r and the fast and slow fading [12,15,26]. Finally, n_s[k] is a noise term caused by shadowing that has zero mean in cases where the parameters α_s and β_s fit perfectly the current propagation conditions [12,15,26]. In practice, the value of α_s can be previously known [26]. However, in realistic scenarios, the path-loss exponent, β_s, used to relate RSS values to distances does not fit exactly the actual propagation conditions [12], and hence the noise term, n_s[k], will have a non-zero mean proportional to the logarithm of the distance.
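A minimal sketch of the log-distance model (7); the values chosen for α_s, β_s and the shadowing deviation are illustrative, not calibrated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def rss_measurement(d, alpha_s=-40.0, beta_s=3.0, sigma=4.0):
    """One RSS sample (dBm) from the log-distance model (7); alpha_s,
    beta_s and the shadowing deviation sigma are hypothetical values."""
    return alpha_s - 10.0 * beta_s * np.log10(d) + rng.normal(0.0, sigma)

def rss_to_distance(z, alpha_s=-40.0, beta_s=3.0):
    """Invert the noiseless part of (7) to get a crude range estimate."""
    return 10.0 ** ((alpha_s - z) / (10.0 * beta_s))
```

Note that any mismatch between the assumed β_s and the actual one biases the inverted range, which is precisely the non-zero-mean noise effect described above.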

3.2.2. TOA Measurements

The distance between emitter and receiver also affects the time taken by the signal to propagate from one node to the other. By assuming the signal speed known, we can infer this distance by means of a linear transformation of the TOA values. Due to the technical difficulty of synchronizing devices in a wireless network, techniques that use round-trip time estimation are the most attractive to estimate delays [10,28]. In this case, the processing time at the device that has to transmit the echo causes the relationship between TOA and distance to be affine linear (it has an intercept term). Then, we can model the relationship between the delay, z_τ[k], measured at time t_k, and the distance at that time, d[k], as,

z_τ[k] = α_τ + β_τ d[k] + n_τ[k]     (8)

where α_τ and β_τ are constants that can be estimated by a linear regression of measurements previously obtained [28,38-39]. The term n_τ[k] models the noise that is usually assumed to be zero-mean and Gaussian in the case of LOS propagation. However, in the case of NLOS propagation, it is currently not known how to accurately model such an error term, and several statistical distributions taking positive values, such as Exponential, Rayleigh, Weibull or Gamma, have been used in the literature [26-28].
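A sketch of how α_τ and β_τ could be estimated by the linear regression mentioned above; the calibration distances and delay values are made up for illustration.

```python
import numpy as np

# Hypothetical calibration set: known distances (m) and measured
# round-trip delays (clock cycles); the numbers are invented.
d_cal = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
z_cal = np.array([402.0, 411.0, 419.5, 430.0, 438.5])

# Least-squares fit of the affine model (8): z_tau = alpha_tau + beta_tau * d
beta_tau, alpha_tau = np.polyfit(d_cal, z_cal, 1)

def toa_to_distance(z):
    """Map a delay measurement back to a range estimate."""
    return (z - alpha_tau) / beta_tau
```

The intercept α_τ absorbs the echo-processing time at the responding device, so it must be estimated jointly with the slope.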

From the above discussion, we can notice that in all cases the expected value of the measurements is E{z} = f(d[k]) + b, where f is a linear or logarithmic function and b is a systematic error in the model. In addition, we can point out that in harsh environments:

  1. the relationship between range-related measurements and distances is nonlinear and non-Gaussian;

  2. such relationship highly depends on the propagation environment that can change rapidly.

These two factors render the linear-Gaussian assumption inadequate for the measurements model, p(z[k]|y[k]), in harsh environments. Therefore, common inference techniques that use naive and static models may obtain poor results in realistic dense cluttered scenarios.

4. Bayesian adaptive RSS/TOA fusion

Conventional non-Bayesian approaches for parameter estimation are based on maximum-likelihood (ML) estimation (in our case, the maximization of p(z[k]|y[k])). ML commonly assumes tractable models for the likelihood (e.g., Gaussian likelihoods yield a least squares problem), while more intricate models are usually solved by means of the expectation-maximization (EM) algorithm [40-41]. In the event that certain prior information about the parameter of interest is available, we can achieve a better estimator by adding this new information. If this prior information is the correlation in time of positional-states, it can be exploited through sequential Bayesian inference. In the following, we briefly describe such an estimation process and present the adaptive likelihood particle (ALPA) filter for Bayesian inference based on RSS and TOA non-parametric adaptive likelihoods.

4.1. Bayesian inference

In the above-mentioned context, the task is to determine the posterior distribution of positional-states given the measurements, Z[k], from the knowledge of the prior, p(y[k]), and the likelihood, p(z[k]|y[k]), by using Bayes’ rule [19,42]. The knowledge about the prior distribution, p(y[k]), can come from several avenues, e.g., from environmental knowledge. In this chapter, we use as prior knowledge the positional-states inferred in previous instants over the framework offered by the HMM explained above (see Figure 1). However, any other kind of prior information can be incorporated analogously.

In the case of modeling the positional-state and measurements evolution as an HMM, the expression (2) provides a way to determine the posterior distribution iteratively,

p(Y[1]|Z[1]) = p(y[1], z[1]) / p(z[1]) = p(y[1]) p(z[1]|y[1]) / p(z[1])

and for k>1,

p(Y[k]|Z[k]) = p(Y[k], Z[k]) / p(Z[k]) = p(z[k]|y[k]) p(y[k]|y[k−1]) p(Y[k−1], Z[k−1]) / p(Z[k]) = p(z[k]|y[k]) p(y[k]|y[k−1]) p(Y[k−1]|Z[k−1]) / p(z[k]|Z[k−1])     (9)

From the posterior distribution, p(Y[k]|Z[k]), we can estimate y[k] by,

p(y[k]|Z[k]) = ∫ p(Y[k]|Z[k]) dY[k−1]     (10)

leading to a process called filtering. By replacing (9) in (10) we obtain,

p(y[k]|Z[k]) = p(z[k]|y[k]) ∫ p(y[k]|y[k−1]) p(Y[k−1]|Z[k−1]) dY[k−1] / p(z[k]|Z[k−1])     (11)

By assuming known the posterior distribution at t_{k−1}, p(Y[k−1]|Z[k−1]), we can perform the filtering process in two steps [19],

  1. Prediction: from the dynamic model we obtain the prediction of the positional-state in time t_k, given the measurements until time t_{k−1},

p(y[k]|Z[k−1]) = ∫ p(y[k]|y[k−1]) p(Y[k−1]|Z[k−1]) dY[k−1]     (12)

  2. Update: from the measurements model we correct the prediction when a new set of measurements, z[k], is available in time t_k,

p(y[k]|Z[k]) = p(z[k]|y[k]) p(y[k]|Z[k−1]) / p(z[k]|Z[k−1])     (13)

and the normalization constant,

p(z[k]|Z[k−1]) = ∫ p(z[k]|y[k]) p(y[k]|Z[k−1]) dy[k]     (14)

Hence, the objective is to infer the hidden positional-state vector in each time, y[k], by using the information provided by the measurements and the relationship between the variables in time. The Bayesian recursive process given by (12) and (13) avoids the need to reprocess all the stored data, since the posterior distributions are obtained iteratively. Figure 2 graphically explains the evolution of the distributions involved in the filtering process, for the problem of estimating the range between the OBU and an RSU, and for the problem of estimating the position of the OBU when it communicates with three RSUs.
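To make the two-step recursion concrete, the following sketch runs one prediction-update cycle (12)-(13) for a one-dimensional range on a discrete grid; the Gaussian motion kernel and the Gaussian likelihood centered at 20 m are hypothetical stand-ins for the models developed in this chapter.

```python
import numpy as np

grid = np.linspace(0.0, 50.0, 501)                 # candidate ranges (m)
posterior = np.full(grid.size, 1.0 / grid.size)    # flat initial belief

def predict(posterior, sigma_d=1.0):
    """Prediction step (12): convolve the belief with a Gaussian
    motion kernel standing in for p(y[k]|y[k-1])."""
    dg = grid[1] - grid[0]
    offsets = np.arange(-50, 51) * dg
    kernel = np.exp(-0.5 * (offsets / sigma_d) ** 2)
    kernel /= kernel.sum()
    return np.convolve(posterior, kernel, mode="same")

def update(prior, likelihood):
    """Update step (13): pointwise product, then renormalization (14)."""
    post = prior * likelihood
    return post / post.sum()

# One cycle with a hypothetical Gaussian likelihood centered at 20 m
lik = np.exp(-0.5 * ((grid - 20.0) / 2.0) ** 2)
posterior = update(predict(posterior), lik)
```

Grid filters like this become intractable as the state dimension grows, which is one motivation for the particle filter adopted later in the chapter.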

In order to perform the described filtering process, we need the likelihood function of the measurements, p(z[k]|y[k]). This function is a priori unknown in harsh environments, since the distribution of the error term in the measurements model is highly environment-dependent and varies rapidly with time. In the RSS case, although the error term is usually assumed to be zero-mean Gaussian distributed, this assumption is too naive in realistic scenarios where, for example, only one estimation of the path-loss exponent is available [12,26]. For TOA measurements, this error term has been modeled with several parametric distributions such as Gaussian, Exponential, Gamma or Rayleigh [26-27,43] or by means of specific distributions obtained in each particular propagation environment [28,44]. In the following sections, we propose an adaptive likelihood function for data fusion that dynamically adjusts to the changing propagation conditions from the nature of the measurements collected in real time.

Figure 2.

Density functions involved in the filtering process for range and position estimation (darker zones have higher probability): (a) the target with the OBU moves in t_k with respect to its position in t_{k−1}; (b) the posterior density in t_{k−1} is known; (c) from the dynamic model we perform the prediction; (d) in t_k the target receives a new set of measurements; (e) from the likelihood we update the prediction to obtain the posterior density in t_k.

4.2. Adaptive likelihood for RSS/TOA fusion

The sets of RSS and TOA measurements obtained in each instant consist of samples from the random variables z_s[k] and z_τ[k], respectively. As we show below, it is possible to represent the likelihood function in each instant and environment by using the set of samples through a non-parametric representation based on kernels [11,45-46]. After the reception of M RSS or M TOA measurements {z[k]_i, i = 1, …, M}, we can approximate the pdf of z_s[k] or z_τ[k] as

p(z) ≈ (1/(M h)) ∑_{i=1}^{M} K((z − z[k]_i)/h)     (15)

where K(·) is the kernel function and h is a positive number called the bandwidth [11,45-46]. Several functions can be chosen for the kernel, the most common choice being the standard Gaussian kernel [47], i.e.,

K(x) = (1/√(2π)) e^(−x²/2)     (16)
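A minimal sketch of the kernel estimate (15) with the Gaussian kernel (16); the bimodal sample set stands in for multipath-affected measurements and is synthetic.

```python
import numpy as np

def kde(z_samples, h):
    """Kernel density estimate (15) with the Gaussian kernel (16);
    returns the estimated pdf as a callable."""
    z_samples = np.asarray(z_samples, dtype=float)
    M = z_samples.size
    def pdf(z):
        u = (np.asarray(z, dtype=float) - z_samples[:, None]) / h
        return np.exp(-0.5 * u ** 2).sum(axis=0) / (M * h * np.sqrt(2.0 * np.pi))
    return pdf

# Synthetic bimodal RSS-like samples, mimicking multipath conditions
rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(-60.0, 2.0, 50), rng.normal(-72.0, 2.0, 50)])
p = kde(samples, h=1.5)
```

Unlike a fitted Gaussian, this estimate preserves the two modes, which is the behavior illustrated later in Figures 3-4.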

By assuming that the distribution of the measurements z has the expression (15) in time t_k, we can obtain the likelihood relating distances to measurements in each instant k, as the following result shows.

Proposition 1. Let z[k] = {z[k]_i, i = 1, …, M} be a set of measurements (samples of z) related to the distance d[k] by a model E{z} = f(d[k]) + b. Then, assuming z follows the distribution given by (15), and calling ς_{i,j} = z[k]_j − z[k]_i + z̄[k], the likelihood function of the measurements is

p(z[k]|d[k]) = 1/((2π)^(M/2) M^M h^M) ∑_{(i_1,…,i_M) ∈ 𝕄^M} E_b{ exp( −(1/(2h²)) ∑_{j=1}^{M} (ς_{i_j,j} − f(d[k]) − b)² ) }     (17)

where the expectation E_b{·} is taken with respect to the systematic errors, b, in the model.

Proof: see [48].

Proposition 1 enables obtaining individual likelihoods from a set of measurements. Data fusion from different signal metrics (i.e., RSS and TOA) is carried out by combining these likelihoods. Let z_s[k] and z_τ[k] be the sets of RSS and TOA measurements, respectively, forming the set of measurements obtained in the instant k. Then, assuming that, given the real distance, d[k], z_s[k] and z_τ[k] are independent, we have that,

p(z[k]|d[k]) = p(z_s[k]|d[k]) p(z_τ[k]|d[k])     (18)

where the likelihood of each kind of measurement can be dynamically obtained from (17).
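The fusion rule (18) amounts to a pointwise product of the two likelihoods over candidate distances. The sketch below uses simple Gaussian stand-ins for the adaptive likelihoods of (17), evaluated on a hypothetical range grid:

```python
import numpy as np

d_grid = np.linspace(1.0, 40.0, 400)     # hypothetical candidate ranges (m)

# Gaussian stand-ins for the per-metric adaptive likelihoods of (17)
lik_rss = np.exp(-0.5 * ((d_grid - 14.0) / 4.0) ** 2)
lik_toa = np.exp(-0.5 * ((d_grid - 16.0) / 2.0) ** 2)

# (18): conditional independence given the true distance -> multiply
lik_fused = lik_rss * lik_toa
lik_fused /= lik_fused.sum()             # normalize over the grid

d_ml = d_grid[np.argmax(lik_fused)]      # fused ML range estimate
```

The fused likelihood is narrower than either individual one and its maximum lies between the two, weighted toward the more precise metric.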

In order to describe how the presented adaptive data fusion operates, Figures 3-4 show the histogram of 100 RSS and 100 TOA measurements taken at a fixed distance with the measuring systems described in [12] and [10], respectively. These figures also represent the corresponding Gaussian pdf and the adaptive pdf obtained by means of the kernel-based expression given by (15) and (16). From those figures, we can point out that, despite the fact that the true density is unknown, the presented adaptive pdf can express the dynamic behavior of RSS/TOA measurements in harsh environments with better accuracy than histogram and Gaussian density estimates [49-50].

Figure 3.

The adaptive density accurately approximates the complex randomness of RSS measurements in harsh environments.

Figure 4.

The adaptive density accurately approximates the complex randomness of TOA measurements in harsh environments.

In Figure 5 we illustrate the RSS/TOA data fusion process by representing the adaptive likelihood function obtained by means of expressions (17) and (18).

Figure 5.

The adaptive RSS/TOA data fusion, defined by Proposition 1 and (18), results, in this case, in an improvement of 0.5 meters in the ML estimator compared with the Gaussian case, which is equivalent to a reduction of 18% in the error.

From Figure 5, we can point out that the adaptive likelihood function provides more information about the distance than the Gaussian model, by combining the individual adaptive likelihoods obtained with RSS and TOA measurements. Moreover, the height of both functions reflects the more reliable information obtained by adaptive estimation. From that figure, we also observe the improvement achieved by means of data fusion with respect to the individual estimates. This likelihood function leads to the ALPA filter defined in the following section.

4.3. Adaptive likelihood particle filter

Within the framework provided by the HMM, if both dynamic and measurements models are linear-Gaussian, all the posterior distributions are also Gaussian. In this case, all the involved density functions are completely described by their mean vectors and covariance matrices, obtained by a KF [19]. In the case of interest in this chapter, the models in the HMM are neither linear nor Gaussian, and then the usage of KFs is suboptimal. In order to circumvent this drawback, the classical solution consists of using extended KFs (EKFs) [23,25]. However, better performance can be obtained with PFs, which allow the use of more general and flexible models [17,19], such as the adaptive likelihood described in the previous section.

A PF represents the posterior distribution through a discrete distribution, where the support points and their probabilities are called particles and weights, respectively. To estimate the posterior distribution, we need to iteratively obtain a certain number of samples (particles) and probabilities (weights) capable of representing the posterior distribution. These particles and weights can be obtained by a method known as sequential importance sampling (SIS) [19,51], where the weight of the different particles can be determined by evaluating the likelihood function pointwise. Therefore, more realistic models such as the presented adaptive likelihood function for data fusion can be used, leading to the ALPA filtering algorithm described in Table 1.

i. Initialization:
∙ Initial particles: draw N samples {y[1]_i, i = 1, …, N} from the known density function p(y[1]).
∙ Initial weights: ω_1^i = 1/N, i = 1, …, N.
ii. Recursive estimation: for k > 1,
∙ Particles in instant k from particles in instant k−1: draw N samples {y[k]_i, i = 1, …, N} from the proposal distribution q(y[k] | y[k−1]_i, z[k]).
∙ From RSS measurements and Proposition 1, evaluate the weight of each particle. For i = 1, …, N,
    ω̃_s^i = p(z_s[k] | y[k]_i)
∙ From TOA measurements and Proposition 1, evaluate the weight of each particle. For i = 1, …, N,
    ω̃_τ^i = p(z_τ[k] | y[k]_i)
∙ Evaluate, for i = 1, …, N,
    ω̃_k^i = ω_{k−1}^i ω̃_s^i ω̃_τ^i p(y[k]_i | y[k−1]_i) / q(y[k]_i | y[k−1]_i, z[k])
∙ Normalization: for i = 1, …, N, compute
    ω_k^i = ω̃_k^i / ∑_{j=1}^{N} ω̃_k^j

Table 1.

ALPA filtering.

To implement the algorithm detailed in Table 1, we have to choose a proposal distribution; the most popular choice is to use the transition prior given by the dynamic model, i.e., p(y[k]|y[k−1]) [19]. This choice leads to a rather simple expression for the weights,

ω̃_k^i = ω_{k−1}^i ω̃_s^i ω̃_τ^i     (19)

Therefore, in order to use this algorithm, we have to obtain samples from the transition prior and evaluate the adaptive likelihood function pointwise. Figure 6 summarizes how this filter works with the proposal distribution chosen. First, we generate particles from the proposal distribution, in this case the prior distribution, p(y[k]|y[k−1]), and then their weights are updated according to the likelihood function, p(z[k]|y[k]). If the support of the proposal distribution does not cover the support of the likelihood function, only a few particles will be in the region of importance; thus, the number of particles has to be increased in order to correctly approximate the posterior distribution.

Figure 6.

Transition prior and likelihood functions. Particles are obtained by sampling from the prior and weighting from the likelihood.

In this SIS algorithm, as k increases, the variance of the weights ω_k^i also increases, and therefore, after a certain number of steps, all but one particle will have negligible normalized weights. This problem is known as degeneracy [19]. To overcome this drawback, it is mandatory to perform a resampling step when severe degeneracy is detected. A measure of degeneracy is the effective sample size, N_eff, estimated as,

N̂_eff = 1 / ∑_{i=1}^{N} (ω_k^i)²     (20)

where a small N̂_eff indicates severe degeneracy. Therefore, when degeneracy is detected, N samples with uniform weights are drawn from the discrete representation of the posterior, given by the previous particles and weights, yielding a variant of the SIS algorithm called the sampling-importance-resampling (SIR) algorithm [19,52].
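The degeneracy check (20) and a resampling step can be sketched as follows; systematic resampling is used here as one common SIR variant, not necessarily the one used in the experiments.

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff estimate of (20) from the normalized weights."""
    return 1.0 / np.sum(weights ** 2)

def systematic_resample(particles, weights, rng):
    """Draw N equally weighted particles from the discrete posterior
    (systematic resampling; other SIR variants exist)."""
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(N, 1.0 / N)

# A typical rule is to resample only when N_eff drops below N/2
```

N̂_eff equals N for uniform weights and 1 when a single particle carries all the mass, so thresholding it at, e.g., N/2 triggers resampling exactly when the weight spread becomes severe.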

5. Results

The goal of this section is to quantify the performance of the methods presented in the above sections, leading to the ALPA filter. In order to do that, we obtained experimental data in a real indoor scenario by using the systems described in [10] and [12], and we ran numerous Monte Carlo simulations. In the following, we compare the performance of the introduced techniques with conventional approaches as well as with the CRLB.

We use the dynamic and measurements models described above together with the following state vector and prior information, depending on whether we estimate ranges or positions,

  • Range estimation: we use a state vector y[k] = (d[k], d'[k], d''[k]). The standard deviation σ_d^(3) is 1 m/s³, which is roughly 50% of the maximum [18]. Furthermore, we add prior information about the first and second derivatives of the distance, by considering them distributed as Gaussians N(0, σ_d') and N(0, σ_d''), respectively, where σ_d' = 0.5 m/s and σ_d'' = 0.5 m/s².

  • Position estimation: we use a state vector y[k] = (x[k], v[k], a[k]), where x[k] consists of the two-dimensional coordinates of the mobile target’s position, and v[k] and a[k] are the velocity and acceleration vectors. The same previous values for the deviations of the derivatives of the coordinates are used for dynamic and prior information.

For the experimental data, the target carried a laptop equipped with an IEEE 802.11b/g adapter and the measuring systems described in [10] and [12]. The anchors consisted of IEEE 802.11b/g access points (APs). In the RSS case, the anchors periodically sent beacon frames (at a frequency of MHz) and the RSS values were obtained from the RSS indicator at the target’s adapter [12]. In the TOA case, the mobile target periodically sent request-to-send frames to each anchor (at a frequency of MHz), and a counter connected to the WLAN adapter saved the clock cycles elapsed between the request and the reception of the corresponding clear-to-send frame [10]. For the results presented in this section, we refer to the combination of RSS and TOA data at every time step as fusion.

5.1. Experimental results

As mentioned above, in a realistic scenario, NLOS propagation together with multipath effects constitutes the major drawback of localization in harsh environments. This section illustrates the behavior of the proposed algorithm during a typical path followed by a mobile target in an indoor scenario. We carried out a measurement campaign inside an office building cluttered with clusters of objects and people moving freely in the area of the measurements. The propagation conditions were even harsher than those commonly found by an OBU placed within a car. Figure 8 shows the 65-meter trajectory as well as the positions of the 4 APs. It took 100 seconds to complete the whole trajectory, receiving a new set of measurements every second (Δt = 1 s) from all the APs. As reflected in Figure 8, NLOS was always present when measuring with respect to AP3 and AP4, and there was LOS between the target and anchors AP1 and AP2 only in a small percentage of positions.

In Table 2, we compare the error achieved with the proposed ALPA range estimation method in the presented scenario to the error obtained with conventional approaches [15,24]. We specify the results for the RSS-only and TOA-only cases, and for their fusion. Specifically, we call:

  • ML-RSS, ML-TOA, ML-Fusion: the range estimates obtained by means of the ML estimator. We use as likelihood function the convolution of the likelihood reported by the measurements (log-normal in the RSS case and Gaussian in the TOA case) with a Gaussian distribution corresponding to the bias. The likelihood for the fusion is computed from (18).

  • AML-RSS, AML-TOA, AML-Fusion: the range estimates obtained by maximizing the adaptive likelihood computed by means of Proposition 1, and from (18) in the fusion case.

  • EKF-RSS, KF-TOA, EKF-Fusion: the result of applying an EKF (for RSS) and a KF (for TOA), using the same bias distributions as in the ML case and the dynamic model given by (3).

  • ALPA-RSS, ALPA-TOA, ALPA-Fusion: the range estimates obtained by the ALPA filtering described in Table 1, with N = 10 000 particles.

We summarize, for all these methods, the quartiles of the absolute error in range estimates as well as the root mean squared error (RMSE), which incorporates both systematic (bias) and random errors. In order to study the influence of the number of measurements, M, on the final performance, all these statistics are shown for four different values.
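The reported statistics can be computed from the per-position absolute errors as in the following sketch, which uses NumPy's linear-interpolation percentiles:

```python
import numpy as np

def error_stats(errors):
    """Quartiles (Q1, median, Q3) and RMSE of absolute errors,
    i.e., the statistics reported in Tables 2-3."""
    e = np.abs(np.asarray(errors, dtype=float))
    q1, q2, q3 = np.percentile(e, [25, 50, 75])
    rmse = float(np.sqrt(np.mean(e**2)))
    return (q1, q2, q3), rmse

# Example on a toy error sample (meters):
quartiles, rmse = error_stats([1.0, 2.0, 3.0, 4.0])
```

Note that the RMSE is pulled up by large outliers more strongly than the quartiles, which is why the tables report both.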

Figure 7 depicts the pdf of the absolute error in range estimation after applying the AML-Fusion and ALPA-Fusion methods, taking 10 RSS and 10 TOA measurements at each position of the target with respect to the four APs. Figure 7 likewise includes the ML-Fusion and EKF-Fusion methods for comparison. Using only 10 measurements, ML-, AML-, EKF- and ALPA-Fusion obtain an error in range estimation lower than 3 meters for 55%, 65%, 73%, and 80% of the positions, respectively, which reflects the remarkable performance of the proposed algorithm.

| Method | Quartiles (M=5) | RMSE | Quartiles (M=10) | RMSE | Quartiles (M=50) | RMSE | Quartiles (M=100) | RMSE |
|---|---|---|---|---|---|---|---|---|
| ML-RSS | 1.64-3.12-5.45 | 7.01 | 1.28-2.94-4.96 | 5.32 | 1.36-2.72-4.68 | 4.34 | 1.27-2.74-4.74 | 4.58 |
| ML-TOA | 2.09-3.92-7.68 | 6.42 | 1.55-3.40-5.64 | 5.00 | 1.26-2.69-4.40 | 3.87 | 1.12-2.44-4.00 | 3.55 |
| ML-Fusion | 1.52-3.16-5.87 | 5.25 | 1.26-2.66-4.73 | 4.23 | 1.09-2.24-3.89 | 3.55 | 0.87-2.18-3.61 | 3.26 |
| AML-RSS | 1.69-3.25-5.27 | 5.64 | 1.44-2.92-5.06 | 4.71 | 1.32-2.74-4.64 | 4.27 | 1.31-2.70-4.50 | 4.20 |
| AML-TOA | 2.06-3.74-7.38 | 6.28 | 1.52-3.31-5.57 | 4.93 | 1.18-2.61-4.27 | 3.81 | 1.03-2.38-3.86 | 3.48 |
| AML-Fusion | 1.38-2.91-5.19 | 4.49 | 1.15-2.32-3.65 | 3.49 | 0.86-1.91-3.39 | 3.06 | 0.83-1.83-3.26 | 2.91 |
| EKF-RSS | 0.84-2.22-4.26 | 3.82 | 1.06-2.59-4.21 | 3.81 | 1.21-2.43-4.07 | 3.76 | 1.17-2.55-4.04 | 3.69 |
| KF-TOA | 1.11-2.37-3.95 | 3.60 | 1.10-2.06-3.63 | 3.04 | 0.81-1.76-2.97 | 2.53 | 0.86-1.63-2.95 | 2.36 |
| EKF-Fusion | 0.93-1.90-3.24 | 2.78 | 0.86-1.82-3.15 | 2.59 | 0.82-1.62-2.62 | 2.25 | 0.74-1.49-2.55 | 2.10 |
| ALPA-RSS | 0.82-2.33-4.63 | 3.88 | 1.17-2.58-4.30 | 3.79 | 1.20-2.48-4.18 | 3.75 | 1.21-2.64-4.17 | 3.78 |
| ALPA-TOA | 0.94-2.04-3.33 | 3.11 | 0.95-1.90-3.06 | 2.69 | 0.72-1.48-2.63 | 2.52 | 0.76-1.50-2.64 | 2.32 |
| ALPA-Fusion | 0.84-1.72-2.95 | 2.58 | 0.80-1.70-2.85 | 2.35 | 0.69-1.37-2.36 | 2.22 | 0.70-1.45-2.40 | 2.08 |

Table 2.

Range estimation error quartiles and RMSE obtained with different algorithms as a function of the number of measurements. All error values are in meters.

Analogously, in Figures 8-9 and Table 3, we summarize the results in position estimation. In this case, we call:

  • ML-RSS, ML-TOA, ML-Fusion: the positions obtained from the ML distances and a trilateration technique based on the radical axes of the circles drawn at each anchor’s position [10,12-13].

  • EKF-RSS, EKF-TOA, EKF-Fusion: the positions obtained by means of an EKF whose measurement model relates the measurements to the target’s position.

  • ALPA-RSS, ALPA-TOA, ALPA-Fusion: the result of applying the ALPA filter described in Table 1 to the positional states, with N = 10 000 particles.
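The radical-axis trilateration used for the ML methods can be sketched as a linear least-squares problem: subtracting the circle equation of the first anchor from the others cancels the quadratic terms, leaving one radical-axis (linear) equation per anchor pair. This is a generic illustration of the idea in [10,12-13], not the authors' exact implementation:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from anchor coordinates and range estimates.

    For circles |p - a_i|^2 = d_i^2, subtracting the first circle's
    equation yields the linear system 2(a_i - a_0) . p
    = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2 (the radical axes).
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example: three anchors, noiseless ranges to the point (1, 2).
pos = trilaterate([(0, 0), (4, 0), (0, 4)],
                  [5**0.5, 13**0.5, 5**0.5])
```

With noisy, biased ranges the circles generally do not intersect at one point; the least-squares solution then returns the point closest (in this linearized sense) to all radical axes.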

Figure 7.

The height and width of the pdf corresponding to the error achieved by the ALPA filter reflect its better performance in comparison to other conventional range estimation techniques. 10 RSS and 10 TOA measurements were taken with respect to each anchor.

| Method | Quartiles (M=5) | RMSE | Quartiles (M=10) | RMSE | Quartiles (M=50) | RMSE | Quartiles (M=100) | RMSE |
|---|---|---|---|---|---|---|---|---|
| ML-RSS | 3.83-5.91-8.49 | 12.99 | 3.32-5.21-7.49 | 8.91 | 3.35-4.94-6.98 | 6.64 | 3.26-5.00-6.60 | 7.43 |
| ML-TOA | 3.95-6.14-8.03 | 7.64 | 2.80-4.05-6.64 | 5.70 | 2.24-3.34-5.16 | 4.57 | 1.63-3.20-4.71 | 4.09 |
| ML-Fusion | 3.15-4.95-7.04 | 6.73 | 2.40-3.71-6.11 | 5.10 | 1.93-3.03-4.93 | 4.34 | 1.64-3.03-4.53 | 3.89 |
| EKF-RSS | 2.94-4.46-6.18 | 5.11 | 3.47-4.83-6.85 | 5.83 | 3.02-4.12-6.33 | 5.24 | 3.02-4.24-6.40 | 5.24 |
| EKF-TOA | 1.77-2.79-4.25 | 3.54 | 2.05-2.80-3.64 | 3.11 | 1.54-2.28-3.09 | 2.61 | 1.50-2.22-3.14 | 2.51 |
| EKF-Fusion | 2.20-3.24-4.30 | 3.50 | 2.08-2.99-3.90 | 3.25 | 1.76-2.32-3.00 | 2.57 | 1.71-2.13-2.98 | 2.41 |
| ALPA-RSS | 1.93-3.28-5.18 | 4.36 | 3.16-3.91-5.18 | 4.68 | 2.38-3.09-4.61 | 4.23 | 2.72-3.65-4.97 | 4.37 |
| ALPA-TOA | 1.90-2.59-3.76 | 3.37 | 1.63-2.54-3.63 | 2.98 | 1.08-1.98-3.25 | 2.66 | 1.35-2.18-3.05 | 2.63 |
| ALPA-Fusion | 1.77-2.86-3.46 | 3.14 | 1.92-2.61-3.34 | 2.82 | 1.23-1.85-3.15 | 2.49 | 1.28-2.00-2.64 | 2.40 |

Table 3.

Position estimation error quartiles and RMSE obtained with several algorithms as a function of the number of measurements. All error values are in meters.

Figure 8.

Trajectory followed by the target and position estimates for different positioning methods. 10 RSS and 10 TOA measurements were taken with respect to each anchor.

Figure 9 depicts the pdf of the error in position estimation for the three mentioned RSS/TOA fusion algorithms, taking 10 RSS and 10 TOA measurements at each position of the target with respect to the four APs. Using only 10 measurements, ML-, EKF- and ALPA-Fusion obtain an error in position estimation lower than 3 meters for 40%, 52%, and 63% of the positions, respectively.

Figure 9.

The proposed ALPA filter obtains the best performance with an error lower than 3 meters for more than 63% of the positions.

Figures 8-9 and Table 3 show the better performance of the proposed ALPA filter in all the analyzed scenarios, resulting, for example, in an RMSE of 2.82 meters when using only 10 RSS and 10 TOA measurements, while previous studies obtained RMSEs around 4 meters using hundreds of measurements [12,28].

5.2. Simulation results

The CRLB provides a lower bound on the minimum achievable mean squared error of any unbiased estimator. In what follows, we use this metric to assess how close the proposed ALPA filter comes to this lower bound.

The Bayesian version of the CRLB is known as the Van Trees CRLB [53], or posterior CRLB, since it is obtained from the posterior distribution of the random state vector [54]. In our case, for each time instant k, the CRLB is

\[ \mathbb{E}\left\{ \left(g(Z_k)-y_k\right)\left(g(Z_k)-y_k\right)^T \right\} \succeq J_k^{-1} \tag{21} \]

where g(Zk) is an unbiased estimator of yk and Jk is the Fisher information matrix (FIM), obtained as

\[ J_k = -\mathbb{E}\left\{ \nabla_{y_k} \left[ \nabla_{y_k} \log p(Z_k, y_k) \right]^T \right\} \tag{22} \]

Tichavský et al. proposed a recursive formula to compute the FIM [55]. For the particular case of the linear-Gaussian dynamic model in (3), with Qk the process-noise covariance matrix of this model, the FIM is given by the recursion [19]

\[ J_{k+1} = J_{k+1}^{z} + \left( Q_k + F_k J_k^{-1} F_k^T \right)^{-1} \tag{23} \]

and

\[ J_{k+1}^{z} = -\mathbb{E}\left\{ \nabla_{y_{k+1}} \left[ \nabla_{y_{k+1}} \log p(z_{k+1} \mid y_{k+1}) \right]^T \right\} \tag{24} \]

To start this recursion, we assume a Gaussian initial density; the initial FIM then coincides with the inverse of its covariance matrix.
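Assuming the linear-Gaussian dynamic model, recursion (23) can be sketched as follows. The per-step measurement information matrices J_{k+1}^z are supplied externally, since for the adaptive likelihoods they have no closed form and must be obtained by Monte Carlo integration (see notes):

```python
import numpy as np

def fim_recursion(J0, F, Q, Jz_list):
    """Posterior CRLB recursion (23) for a linear-Gaussian dynamic model.

    J0      : initial FIM (inverse of the Gaussian prior covariance)
    F, Q    : state-transition matrix and process-noise covariance
    Jz_list : measurement information matrices J_{k+1}^z, one per step
    Returns the trace of the CRLB (sum of per-state bounds) at each step.
    """
    J = np.asarray(J0, dtype=float)
    bounds = []
    for Jz in Jz_list:
        # Eq. (23): prediction information combined with measurement information.
        J = Jz + np.linalg.inv(Q + F @ np.linalg.inv(J) @ F.T)
        bounds.append(float(np.trace(np.linalg.inv(J))))
    return bounds

# Toy scalar example: unit prior/measurement information, no process noise,
# so information accumulates as J_k = k + 1 and the bound decays as 1/(k+1).
bounds = fim_recursion(np.eye(1), np.eye(1), np.zeros((1, 1)),
                       [np.eye(1), np.eye(1)])
```

The square root of these bounds gives the RMSE floor plotted against the filters in Figure 10.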

Figure 10 compares the RMSE obtained in range estimation by the proposed ALPA-Fusion filter with that obtained by the EKF-Fusion method, and with the square root of the CRLB. To obtain these curves, we simulated a trajectory of 85 positions and carried out 1 000 Monte Carlo experiments. Figure 10 again corroborates the remarkable performance of the ALPA filter, since its curve is much closer to the CRLB than the curve corresponding to the EKF error.

Figure 10.

The near-optimal performance of the proposed ALPA filter in harsh environments is corroborated by comparison with the CRLB.

6. Complexity

The key issue in PFs is the exponential growth of computational complexity as a function of the dimension of the state vector, yk, whereas the complexity of the EKF grows as the cube of the dimension [56]. For low-dimensional problems, the cost of a PF remains similar to that of an EKF; however, for high-dimensional problems, PFs suffer from the curse of dimensionality [57]. Thus, PFs that track ranges instead of positions can be advantageous from a complexity point of view.

Moreover, from Proposition 1, the complexity of the likelihood grows exponentially with the number of samples. However, this complexity can be reduced by removing redundant components from the RSS and TOA pdfs or from the resulting fusion mixture. To this end, different criteria such as Williams’ criterion [58], the Kullback-Leibler distance [59] or clustering [60] can be used. Therefore, considering the improvement achieved in range and position estimation with only 5 and 10 measurements, the proposed ALPA filter could be a good choice for the design of VANETs that require low power consumption. In these cases, in order to save battery, the OBUs transmit only at discrete intervals; therefore, more time is available for processing a smaller number of samples.
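As a minimal illustration of such mixture reduction, the sketch below merges the two 1-D Gaussian components with the closest means while preserving the mixture's overall mean and variance (moment matching). Williams’ and Kullback-Leibler criteria replace this naive closest-mean rule with proper dissimilarity measures; the function here is only a toy version of the idea:

```python
import numpy as np

def merge_closest(w, mu, var):
    """Moment-preserving merge of the two 1-D Gaussian components
    whose means are closest. Returns reduced (weights, means, variances)."""
    w, mu, var = (np.asarray(a, dtype=float) for a in (w, mu, var))
    n = len(mu)
    # Naive pick: the pair with the smallest distance between means.
    i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda p: abs(mu[p[0]] - mu[p[1]]))
    wm = w[i] + w[j]
    mm = (w[i] * mu[i] + w[j] * mu[j]) / wm          # merged mean
    vm = (w[i] * (var[i] + (mu[i] - mm)**2)
          + w[j] * (var[j] + (mu[j] - mm)**2)) / wm  # merged variance
    keep = [k for k in range(n) if k not in (i, j)]
    return (np.append(w[keep], wm),
            np.append(mu[keep], mm),
            np.append(var[keep], vm))

# Merging N(0,1) and N(2,1) with equal weights gives N-moments (mean 1, var 2).
w2, mu2, var2 = merge_closest([0.5, 0.5], [0.0, 2.0], [1.0, 1.0])
```

Repeating such merges until a target component count is reached keeps the likelihood evaluation cost bounded between time steps.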

7. Conclusions

In this chapter we have presented an adaptive likelihood function for robust data fusion in localization systems. Based on this likelihood, we have developed the ALPA filter for range and position estimation. This ALPA filter presents several advantages over conventional techniques:

  1. it does not assume any parametric statistical model, utilizing the empirical distribution of the measurements at each time by means of Gaussian kernels;

  2. it adaptively fuses RSS and TOA data, and is extensible to any other type of measurement;

  3. it takes advantage of the relationship among positions in time by using Bayesian filtering;

  4. it addresses the non-linear and non-Gaussian behavior of the measurements by using particle filtering.

These advantages result in a noticeable improvement over conventional techniques, as corroborated by the experimental and simulation results. Under NLOS and multipath conditions, the ALPA filter obtains not only an RMSE in position estimation lower than 3 meters with only 10 RSS and 10 TOA measurements, but also an error remarkably close to the theoretical benchmark provided by the CRLB.

Therefore, the ALPA filter is a valuable choice to provide localization in V2I communication systems. Its extension to cooperative localization would also make localization possible in VANETs based on V2V communication.

Notes

  • In the TOA case, measuring the round-trip time avoids the technical difficulty of time synchronization among the nodes.
  • This probabilistic model is a generalization of the maximum likelihood approach, in which the estimation is accomplished for a given time instant, tk, considering neither previous nor future positional states and measurements. In this case, p(yk|zk) ∝ p(zk|yk).
  • The positional state yk can likewise be estimated by using the measurements until time tk+l, leading to a process called smoothing if l>0 or prediction if l<0.
  • A kernel function is a symmetric function (not necessarily positive) whose integral over the entire space is equal to one.
  • In Figures 3-4 and in the following, we use a fixed bandwidth of one-half of the resolution of the measuring system [10,12]. This choice avoids both undersmoothed curves with spurious artifacts and oversmoothed densities that obscure the underlying nature of the measurements [46].
  • In Figure 5 and in the following sections, we use coarse models for the measurement biases in accordance with previous studies [10,12]. Specifically, the RSS bias is modeled as a Gaussian N(0,σs) with σs=3 dBm, and the TOA bias as a uniform distribution U(0,γτ) with γτ=4 clock cycles.
  • In order to guarantee a fair comparison, in Table 2 and in the following experiments, we select the values for the biases in accordance with the ones selected in Section 4. In this way, the RSS bias is modeled as a Gaussian N(0,σs) with σs=3 dBm, and the TOA bias as a Gaussian N(γτ/2,γτ/4) with γτ=4 clock cycles.
  • For the results of Figures 8-9 and Table 3, the EKF and ALPA filters use a measurement model that directly relates measurements to positions, avoiding the intermediate step of estimating distances and, therefore, removing the trilateration stage.
  • We selected a truncated normal distribution as the random error to reflect the limited range of the measuring systems. For the proposed adaptive likelihoods, Jk+1z has no closed form, so it was evaluated by Monte Carlo integration.

© 2013 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite and reference


Javier Prieto, Santiago Mazuelas, Alfonso Bahillo, Patricia Fernández, Rubén M. Lorenzo and Evaristo J. Abril (February 13th 2013). Accurate and Robust Localization in Harsh Environments Based on V2I Communication, Vehicular Technologies - Deployment and Applications, Lorenzo Galati Giordano and Luca Reggiani, IntechOpen, DOI: 10.5772/55263.
