Accurate and Robust Localization in Harsh Environments Based on V2I Communication

With the arrival of global navigation satellite systems (GNSS), in-car navigation has increasingly become an essential tool for the automotive industry. However, the performance of GNSS is compromised in harsh environments where there is no line of sight (LOS) to satellites, e.g., tunnels, covered parking areas and dense urban canyons [1]. Hence, in-car navigation requires a localization technology that operates robustly in such circumstances. The development of vehicular ad-hoc networks (VANETs) provides a promising platform to fulfill this requirement [2].

Techniques based on received signal strength (RSS) [11][12] measurements obtain range-related information, whereas techniques based on angle of arrival (AOA) or time difference of arrival (TDOA) measurements extract information related to directions or differences of distances, respectively [13][14]. AOA and TDOA measurements entail significant costs for antenna-array integration or device synchronization. In this chapter, we focus on RSS and time of arrival (TOA) measurements, which can provide accurate localization with appropriate complexity. Range or position estimation is an inference problem in which the observations are the RSS and TOA measurements [15][16]. From a Bayesian perspective, determining the posterior distribution of ranges or positions given the observations is the optimal approach [17][18][19][20][21][22][23][24][25]. Ranges or positions can then be obtained by means of the maximum a posteriori (MAP) or the minimum mean square error (MMSE) estimators.
The optimality of the above-mentioned methods depends on the fit between the model assumed for the relationship between measurements and ranges or positions (i.e., the likelihood function) and the actual behavior of the measurements. Tractable, static models for the likelihoods based on Gaussian distributions accurately explain the behavior of measurements only in open areas [26][27][28]. For harsh environments, several techniques have been developed to address the complex behavior of wireless signal metrics. In the TOA case, the NLOS bias causes range overestimation; thus, a common procedure is to detect and remove NLOS measurements [29], while other techniques utilize prior knowledge about this NLOS error to subtract it and adjust the measurements to their LOS values [27,30]. In the RSS case, the performance depends on estimating, at each time, the parameters that characterize the propagation channel [12,26]. Certain approaches deal with the dynamic nature of the RSS metric through fingerprinting or machine learning [11,21,31]; however, their accuracy is sensitive to fast environmental changes, and they do not fuse different signal metrics.
Range and position estimation can be improved by exploiting the relationship among positions in time through Bayesian filtering. Kalman filtering techniques rely on Gaussian models that are not adequate for harsh environments. Different alternative methods based on variations of such filters, as well as on particle filters (PFs), have been proposed: low-complexity nonlinear/non-parametric adaptive modeling is used for filtering of RSS fingerprints in [11,21]; recursive Bayesian estimation together with multipath and NLOS propagation effects are considered in [22][23]; TOA and RSS data fusion is performed in [32][33][34]; hybrid information is exploited by particle filtering in [24]; and RSS/TOA Bayesian fusion for multipath and NLOS mitigation is performed in [25]. However, these methods require prior information obtained through arduous training phases, or they rely on assumptions that are unrealistic for harsh environments, such as Gaussian and static models.
This chapter presents a framework for adaptive data fusion that handles the difficulties described above, based on non-parametric dynamic modeling of the likelihood. The subsequent usage of a PF leads to the adaptive likelihood particle (ALPA) filter. As we show, the estimation can be carried out without requiring any calibration stage, thus enabling localization capabilities in harsh environments.

Hidden Markov model
In addition to the information conveyed by the measurements, the fact that the sequence {x_k, k ∈ N} is highly correlated in time can likewise be used as another source of information.
The position of the target cannot change abruptly in a small lapse of time; hence, we can model the evolution in time of positions or Euclidean distances to each anchor as an analytic function. Let x(t) be a component of the position or the distance to an anchor; we can approximate its value at t_{k+1} by the nth-order Taylor expansion at t_k,

x(t_{k+1}) = Σ_{m=0}^{n} (Δt^m / m!) x^{(m)}(t_k) + (Δt^{n+1} / (n+1)!) x^{(n+1)}(ξ_0),  (1)

where Δt = (t_{k+1} − t_k) ∈ R is the sampling interval and ξ_0 ∈ R is a point in the interval [t_k, t_{k+1}]. Therefore, the error in the approximation (1), given by the last term, depends directly on Δt, on the smoothness of x(t) (represented by the (n+1)th derivative), and on the order of the approximation.
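As a quick numerical check of this error behavior, the following Python sketch (the function x(t) = sin t, the evaluation point and the step size are assumptions chosen only for illustration) verifies that the second-order Taylor approximation error stays within the Lagrange remainder bound Δt³/3!:

```python
import math

def taylor_approx(derivs, dt, n):
    """nth-order Taylor approximation: sum_{m=0}^{n} (dt^m / m!) x^{(m)}(t_k)."""
    return sum(dt ** m / math.factorial(m) * derivs[m] for m in range(n + 1))

# x(t) = sin(t): value and derivatives at t_k = 0.3, step dt = 0.1, order n = 2
t, dt, n = 0.3, 0.1, 2
derivs = [math.sin(t), math.cos(t), -math.sin(t)]

approx = taylor_approx(derivs, dt, n)
error = abs(math.sin(t + dt) - approx)

# Lagrange remainder bound: (dt^(n+1) / (n+1)!) * max|x^(n+1)|, with |x'''| <= 1
bound = dt ** (n + 1) / math.factorial(n + 1)
```

As expected, the error shrinks with the step size Δt and with the order n of the expansion.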
The correlation in time expressed in (1) implies that {x_k, k ∈ N} is not a Markov chain, i.e., the current distance or position depends not only on the previous one. However, calling y_k the positional-state consisting of the distance or the position and its n derivatives, we can assume that {y_k, k ∈ N} is a Markov chain. Moreover, we can likewise assume that, conditioned on {y_k, k ∈ N}, {z_k, k ∈ N} is a sequence of independent random variables, i.e., given the current positional-state vector, the measurements z_k are independent of all previous and future positional-states and measurements [18]. These assumptions allow us to build an HMM in which the positional-state vectors {y_k, k ∈ N} form a non-observable Markov chain, and what is available is the other stochastic process {z_k, k ∈ N}, linked to the Markov chain in that y_k governs the distribution of z_k [35] (see Figure 1). The conditional independence assumptions reflected in Figure 1 lead to two kinds of dependence between the random variables [36]:
• Dynamic model: establishes the relationship between the state vector at time t_k and the state vector at time t_{k−1}, i.e., p(y_k | y_{k−1}).
• Measurements model: establishes the relationship between the measurements and the state vector at each time, i.e., p(z_k | y_k).
Then, the joint distribution of all the random variables involved in the process is given by

p(Y_k, Z_k) = p(y_1) p(z_1 | y_1) ∏_{j=2}^{k} p(y_j | y_{j−1}) p(z_j | y_j).  (2)

The modeling as an HMM shown in (2) makes it possible to infer the posterior distribution p(y_k | Z_k) through a recursive process. In the specific case where the dynamic and measurements models are linear-Gaussian, the posterior distribution is also Gaussian, and the Bayesian inference can be optimally performed by the celebrated Kalman filter (KF) [19]. In the following, we describe these models for harsh propagation environments, showing that the dynamic model can be assumed linear with wide generality, whereas this assumption yields inaccurate results for the measurements model.

Dynamic model
The dynamic model of the positional-state vector can be obtained from the evolution in time given by (1), and by approximating each mth derivative, for m = 1, …, n, by its (n−m)th-order Taylor expansion, as

y_k = A y_{k−1} + n_k^d,  (3)

where A is the transition matrix, with entries A_{m,j} = Δt^{j−m} / (j−m)! for j ≥ m, and n_k^d is the error in the approximations. For example, in the case of estimating a one-dimensional parameter, x_k, the error n_k^d is given by

n_k^d = ( (Δt^{n+1} / (n+1)!) x^{(n+1)}(ξ_0), (Δt^n / n!) x^{(n+1)}(ξ_1), …, Δt x^{(n+1)}(ξ_n) )^T,

where ξ_0, …, ξ_n are values in the interval [t_k, t_{k+1}]. The values taken by the (n+1)th derivative of x(t) at the unknown points ξ_0, …, ξ_n are modeled as realizations of a random variable that can be assumed to be zero-mean Gaussian with standard deviation σ_d^{(n+1)} [18][19].
Then, we can model the evolution in time of x_k as a random walk. The dynamic model is thus a discrete Wiener process velocity (DWPV) model or a discrete Wiener process acceleration (DWPA) model if we use the second- or third-order Taylor expansion, respectively [18]. Hence, the dynamic model, p(y_k | y_{k−1}), can be assumed linear-Gaussian.
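For illustration, the transition matrix implied by these Taylor expansions can be built directly from its definition, A_{m,j} = Δt^{j−m}/(j−m)! for j ≥ m. The following Python sketch (the numeric state values are arbitrary) constructs the DWPA transition matrix and propagates a positional-state one step ahead:

```python
import numpy as np
from math import factorial

def transition_matrix(n, dt):
    """Transition matrix of eq. (3): A[m, j] = dt^(j-m) / (j-m)! for j >= m."""
    A = np.zeros((n + 1, n + 1))
    for m in range(n + 1):
        for j in range(m, n + 1):
            A[m, j] = dt ** (j - m) / factorial(j - m)
    return A

# DWPA model (third-order expansion, n = 2): state = (distance, velocity, acceleration)
A = transition_matrix(2, dt=1.0)
y_prev = np.array([10.0, 0.5, 0.2])   # 10 m, 0.5 m/s, 0.2 m/s^2 (arbitrary values)
y_pred = A @ y_prev                   # predicted positional-state one step ahead
```

With Δt = 1 s, the matrix is upper triangular with ones on the diagonal, so each component is advanced by its own higher-order derivatives, exactly as in (1).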

Measurements model
The second ingredient to characterize the HMM is the measurements model or likelihood, p ( z k | y k ) . This probability distribution relates the measurements to the positional-state.
In the case of range-related measurements, we have that p(z_k | y_k) = p(z_k | d_k), irrespective of the positional-state used. In the following, we describe realistic models for the relationship between distances and RSS/TOA measurements, in agreement with previous experimental studies [10,12].

RSS measurements
At a given instant and place, the RSS values are affected by the distance between emitter and receiver. The attenuation caused by the distance between two nodes is known as path-loss and is proportional to this distance raised to a certain exponent, called the path-loss exponent [7,12,15,26]. However, the RSS values are likewise affected by a wide range of unpredictable factors, such as multipath propagation (fast fading) and shadowing (slow fading) [37]. By reflecting these factors in the Friis transmission equation for free space, the relationship between the received signal strength, P_r, and the distance, d_k, is given by [26]

P_r = P_t G_t G_r (λ / 4π)^2 g γ / d_k^{β_s},

where P_t is the transmitted power, G_t and G_r are the transmitter and receiver gains, respectively, λ is the wavelength, g and γ are the parameters of the Rayleigh/Rician and log-normal distributions that model the fast and slow fading, respectively, and β_s is the path-loss exponent corresponding to the specific propagation environment [37].
By following the procedure described in [26] and taking logarithmic units, we obtain the measurements model for RSS values,

z_k^s = α_s − 10 β_s log_10(d_k) + n_k^s,

where z_k^s ∈ R is the measured RSS value and α_s is a constant that depends on P_t, G_t, G_r and the fast and slow fading [12,15,26]. Finally, n_k^s is a noise term caused by shadowing that has zero mean when the parameters α_s and β_s fit the current propagation conditions perfectly [12,15,26]. In practice, the value of α_s can be known beforehand [26]. However, in realistic scenarios, the path-loss exponent, β_s, used to relate RSS values to distances does not exactly fit the actual propagation conditions [12]; hence, the noise term, n_k^s, will have a non-zero mean proportional to the logarithm of the distance.
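As a minimal sketch of this measurements model (the values of α_s and β_s below are illustrative, not calibrated), the log-distance model can be evaluated and inverted in a few lines of Python:

```python
import numpy as np

# Illustrative (uncalibrated) model parameters
alpha_s = -40.0          # constant absorbing Pt, Gt, Gr and fading (dBm)
beta_s = 3.0             # path-loss exponent

def rss_from_distance(d):
    """Noise-free RSS model: z = alpha_s - 10 * beta_s * log10(d)."""
    return alpha_s - 10.0 * beta_s * np.log10(d)

def distance_from_rss(z):
    """Invert the model to obtain a range estimate from an RSS value."""
    return 10.0 ** ((alpha_s - z) / (10.0 * beta_s))

z = rss_from_distance(25.0)          # RSS expected at 25 m
d_hat = distance_from_rss(z)         # recovers 25 m in the noise-free case
```

In the noisy case, a mismatch in β_s translates, through this inversion, into exactly the log-distance-dependent bias described above.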

TOA Measurements
The distance between emitter and receiver also affects the time taken by the signal to propagate from one node to the other. Assuming the signal speed is known, we can infer this distance by means of a linear transformation of the TOA values. Due to the technical difficulty of synchronizing devices in a wireless network, techniques that use round-trip time estimation are the most attractive for estimating delays [10,28]. In this case, the processing time at the device that has to transmit the echo causes the relationship between TOA and distance to be affine (it has an intercept term). Then, we can model the relationship between the delay, z_k^τ, measured at time t_k, and the distance at that time, d_k, as

z_k^τ = α_τ + β_τ d_k + n_k^τ,

where α_τ and β_τ are constants that can be estimated by a linear regression on previously obtained measurements [28,[38][39]. The term n_k^τ models the noise, which is usually assumed to be zero-mean Gaussian in the case of LOS propagation. However, in the case of NLOS propagation, it is not known how to model this error term accurately; several statistical distributions taking positive values, such as the Exponential, Rayleigh, Weibull or Gamma, have been used in the literature [26][27][28].
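The affine TOA model and its calibration by linear regression can be sketched as follows (all numeric values are illustrative; in practice α_τ and β_τ come from a regression on real round-trip measurements):

```python
import numpy as np

# Illustrative ground-truth model: z = alpha_tau + beta_tau * d
alpha_tau, beta_tau = 120.0, 6.67    # e.g., clock-tick offset and ticks per metre
d = np.array([5.0, 10.0, 20.0, 40.0])        # calibration distances (m)
z = alpha_tau + beta_tau * d                 # noise-free round-trip measurements

# Estimate the constants by linear regression, as described in the text
beta_hat, alpha_hat = np.polyfit(d, z, 1)

# Invert the affine model to map new delays back to distances
d_hat = (z - alpha_hat) / beta_hat
```

With noisy LOS measurements, the same regression yields unbiased estimates of α_τ and β_τ; under NLOS, the positive delay bias shifts the intercept, which is precisely the effect the adaptive likelihood is designed to absorb.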
From the above discussion, we notice that in all cases the expected value of the measurements can be written as E{z_k} = f(d_k) + b, where f is a linear or logarithmic function and b is a systematic error in the model. In addition, we can point out that in harsh environments:

1. the relationship between range-related measurements and distances is nonlinear and non-Gaussian;
2. such relationship highly depends on the propagation environment that can change rapidly.
These two factors render the linear-Gaussian assumption inadequate for the measurements model, p(z_k | y_k), in harsh environments. Therefore, common inference techniques that use naive, static models may obtain poor results in realistic, densely cluttered scenarios.

Bayesian adaptive RSS/TOA fusion
Conventional non-Bayesian approaches for parameter estimation are based on maximum-likelihood (ML) estimation (in our case, the maximization of p(z_k | y_k)). ML commonly assumes tractable models for the likelihood (e.g., Gaussian likelihoods yield a least-squares problem), while more intricate models are usually handled by means of the expectation-maximization (EM) algorithm [40][41]. When certain prior information about the parameter of interest is available, we can obtain a better estimator by incorporating it. If this prior information is the correlation in time of positional-states, it can be exploited through sequential Bayesian inference. In the following, we briefly describe this estimation process and present the adaptive likelihood particle (ALPA) filter for Bayesian inference based on RSS and TOA non-parametric adaptive likelihoods.

Bayesian inference
In the context described above, the task is to determine the posterior distribution of positional-states given the measurements, Z_k, from knowledge of the prior, p(y_k), and the likelihood, p(z_k | y_k), by using Bayes' rule [19,42]. Knowledge about the prior distribution, p(y_k), can come from several avenues, e.g., from environmental knowledge. In this chapter, we use as prior knowledge the positional-states inferred at previous instants within the framework offered by the HMM explained above (see Figure 1). However, any other kind of prior information can be incorporated analogously.
When the positional-state and measurements evolution is modeled as an HMM, the expression (2) provides a way to determine the posterior distribution iteratively; for k > 1,

p(Y_k | Z_k) ∝ p(z_k | y_k) p(y_k | y_{k−1}) p(Y_{k−1} | Z_{k−1}).  (9)

From the posterior distribution, p(Y_k | Z_k), we can estimate y_k through its marginal,

p(y_k | Z_k) = ∫ p(Y_k | Z_k) dY_{k−1},  (10)

leading to a process called filtering. By replacing (9) in (10) we obtain

p(y_k | Z_k) ∝ p(z_k | y_k) ∫ p(y_k | y_{k−1}) p(y_{k−1} | Z_{k−1}) dy_{k−1}.  (11)

By assuming the posterior distribution at t_{k−1}, p(Y_{k−1} | Z_{k−1}), is known, we can perform the filtering process in two steps [19]:

1. Prediction: from the dynamic model we obtain the prediction of the positional-state at time t_k, given the measurements until time t_{k−1},

p(y_k | Z_{k−1}) = ∫ p(y_k | y_{k−1}) p(y_{k−1} | Z_{k−1}) dy_{k−1}.  (12)

2. Update: from the measurements model we correct the prediction when a new set of measurements is available,

p(y_k | Z_k) = p(z_k | y_k) p(y_k | Z_{k−1}) / p(z_k | Z_{k−1}),  (13)

with the normalization constant p(z_k | Z_{k−1}) = ∫ p(z_k | y_k) p(y_k | Z_{k−1}) dy_k. Hence, the objective is to infer the hidden positional-state vector at each time, y_k, by using the information provided by the measurements and the relationship between the variables in time.
The Bayesian recursive process given by (12) and (13) avoids the need to reprocess all the stored data, since the posterior distributions are obtained iteratively. Figure 2 graphically explains the evolution of the distributions involved in the filtering process, for the problem of estimating the range between the OBU and an RSU, and for the problem of estimating the position of the OBU when it communicates with three RSUs.
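For a one-dimensional range, the prediction and update steps can be illustrated numerically with a simple grid-based Bayes filter; all densities and numeric values below are illustrative stand-ins for the models discussed in this chapter:

```python
import numpy as np

grid = np.linspace(0.0, 50.0, 501)      # candidate distances (m)
dx = grid[1] - grid[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Posterior at t_{k-1}: p(y_{k-1} | Z_{k-1})
post = gauss(grid, 20.0, 2.0)
post /= post.sum() * dx

# Prediction: integrate the dynamic model against the previous posterior
trans = gauss(grid[:, None], grid[None, :], 1.0)   # random-walk p(y_k | y_{k-1})
pred = trans @ post * dx

# Update: multiply by the likelihood of the new measurement and normalize
lik = gauss(grid, 23.0, 1.5)                       # p(z_k | y_k)
post_k = lik * pred
post_k /= post_k.sum() * dx

mean_k = (grid * post_k).sum() * dx                # filtered range estimate
```

The filtered mean lands between the predicted range (20 m) and the new measurement (23 m), weighted by their respective uncertainties, which is exactly the behavior the recursion formalizes.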
In order to perform the described filtering process, we need the likelihood function of the measurements, p(z_k | y_k). This function is a priori unknown in harsh environments, since the distribution of the error term in the measurements model is highly environment-dependent and varies rapidly with time. In the RSS case, although the error term is usually assumed to be zero-mean Gaussian distributed, this assumption is too naive in realistic scenarios where, for example, only one estimate of the path-loss exponent is available [12,26]. For TOA measurements, this error term has been modeled with several parametric distributions, such as the Gaussian, Exponential, Gamma or Rayleigh [26][27][43], or by means of specific distributions obtained in each particular propagation environment [28,44]. In the following sections, we propose an adaptive likelihood function for data fusion that dynamically adjusts to the changing propagation conditions based on the nature of the measurements collected in real time.

Adaptive likelihood for RSS/TOA fusion
The sets of RSS and TOA measurements obtained at each instant consist of samples from the random variables z_k^s and z_k^τ, respectively. As we show below, it is possible to represent the likelihood function at each instant and environment by using the set of samples through a non-parametric representation based on kernels [11,[45][46]. After the reception of M RSS or M TOA measurements {z_k^i, i = 1, …, M}, we can approximate the pdf of z_k^s or z_k^τ as

p̂(z) = (1 / Mh) Σ_{i=1}^{M} K((z − z_k^i) / h),  (15)

where K(·) is the kernel function and h is a positive number called the bandwidth [11,[45][46]. Several functions can be chosen for the kernel, the most common being the standard Gaussian kernel [47], i.e.,

K(u) = (1 / √(2π)) exp(−u² / 2).  (16)

By assuming that the distribution of the measurements z has the expression (15) at time t_k, we can obtain the likelihood relating distances to measurements at each instant k, as the following result shows.
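A minimal sketch of this kernel-based approximation, following (15) with the Gaussian kernel (16) (the RSS sample values and bandwidth below are illustrative):

```python
import numpy as np

def kde_pdf(samples, h):
    """Kernel density estimate of the measurement pdf, eq. (15),
    with the standard Gaussian kernel, eq. (16)."""
    samples = np.asarray(samples, dtype=float)
    M = samples.size
    def pdf(z):
        u = (np.atleast_1d(z)[:, None] - samples[None, :]) / h
        return (np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)).sum(axis=1) / (M * h)
    return pdf

rss_samples = [-62.1, -60.4, -65.3, -61.0, -63.8]  # toy RSS readings (dBm)
pdf = kde_pdf(rss_samples, h=0.5)                  # h: half the system resolution

z_grid = np.linspace(-75.0, -50.0, 2001)
area = pdf(z_grid).sum() * (z_grid[1] - z_grid[0])  # should integrate to ~1
```

The resulting density follows whatever shape the samples exhibit (multimodal, skewed), with no parametric assumption, which is what makes this representation suitable for harsh environments.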
Then, assuming that z follows the distribution given by (15), and calling ς_{i,j} = z_k^j − z_k^i + z_k, the likelihood function of the measurements is given by (17), where the expectation E_b{·} is taken with respect to the systematic errors, b, in the model.
Proposition 1 enables us to obtain individual likelihoods from a set of measurements. Data fusion from different signal metrics (i.e., RSS and TOA) is carried out by combining these likelihoods. Let z_k^s and z_k^τ be sets of RSS and TOA measurements, respectively, forming the set of measurements obtained at instant k. Then, assuming that, given the real distance, d_k, z_k^s and z_k^τ are independent, we have that

p(z_k | d_k) = p(z_k^s | d_k) p(z_k^τ | d_k),  (18)

where the likelihood of each kind of measurement can be dynamically obtained from (17).
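The fusion rule (18) amounts to a pointwise product of the individual likelihoods over candidate distances. The sketch below uses simple unnormalized Gaussian stand-ins for the individual likelihoods (the chapter's method would use the adaptive likelihoods of (17) instead), with illustrative means and spreads:

```python
import numpy as np

d_grid = np.linspace(1.0, 40.0, 4001)   # candidate distances (m)

def gauss(x, mu, sigma):
    """Unnormalized Gaussian, used here as a stand-in likelihood."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

lik_rss = gauss(d_grid, 14.0, 4.0)      # RSS likelihood: informative but broad
lik_toa = gauss(d_grid, 11.0, 2.0)      # TOA likelihood: sharper

lik_fused = lik_rss * lik_toa           # eq. (18): product under cond. independence
d_ml = d_grid[np.argmax(lik_fused)]     # fused ML range estimate
```

The fused maximum lies between the two individual maxima, pulled toward the sharper (more informative) TOA likelihood, mirroring the behavior shown in Figure 5.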
In order to describe how the presented adaptive data fusion operates, Figures 3-4 show the histogram of 100 RSS and 100 TOA measurements taken at a fixed distance with the measuring systems described in [12] and [10], respectively. These figures also show the corresponding Gaussian pdf and the adaptive pdf obtained by means of the kernel-based expressions given by (15) and (16). From those figures, we can point out that, despite the fact that the true density is unknown, the presented adaptive pdf can express the dynamic behavior of RSS/TOA measurements in harsh environments with better accuracy than histogram and Gaussian density estimates [49][50]. In Figures 3-4 and in the following, we use a fixed bandwidth of one-half the resolution of the measuring system [10,12]. This choice avoids both undersmoothed curves with too many spurious artifacts and oversmoothed densities that obscure the underlying nature of the measurements [46]. In Figure 5 we illustrate the RSS/TOA data fusion process by representing the adaptive likelihood function obtained by means of expressions (17) and (18); this results, in this case, in an improvement of 0.5 meters of the ML estimator compared with the Gaussian case, which is equivalent to an 18% reduction of the error.

From Figure 5, we can point out that the adaptive likelihood function, obtained by combining the individual adaptive likelihoods from the RSS and TOA measurements, provides more information about the distance than the Gaussian model. Moreover, the height of both functions reflects the more reliable information obtained by adaptive estimation. From that figure, we also observe the improvement achieved by means of data fusion with respect to the individual estimates. This likelihood function leads to the ALPA filter defined in the following section.

Adaptive likelihood particle filter
Within the framework provided by the HMM, if both the dynamic and measurements models are linear-Gaussian, all the posterior distributions are also Gaussian. In this case, all the involved density functions are completely described by their mean vectors and covariance matrices, obtained by a KF [19]. In the case of interest in this chapter, the models in the HMM are neither linear nor Gaussian, and thus the usage of KFs is suboptimal. The classical workaround consists of using extended KFs (EKFs) [23,25]. However, better performance can be obtained with PFs, which allow the use of more general and flexible models [17,19], such as the adaptive likelihood described in the previous section.
A PF represents the posterior distribution through a discrete distribution, whose support points and their probabilities are called particles and weights, respectively. To estimate the posterior distribution, we need to iteratively obtain a certain number of samples (particles) and probabilities (weights) capable of representing it. These particles and weights can be obtained by a method known as sequential importance sampling (SIS) [19,51], in which the weight of each particle is determined by evaluating the likelihood function pointwise. Therefore, more realistic models, such as the presented adaptive likelihood function for data fusion, can be used, leading to the ALPA filtering algorithm described in Table 1.
To implement the algorithm detailed in Table 1, we have to choose a proposal distribution; the most popular choice is the transition prior given by the dynamic model, i.e., p(y_k | y_{k−1}) [19]. This choice leads to a rather simple expression for the weights,

ω_k^i ∝ ω_{k−1}^i p(z_k | y_k^i).

Therefore, in order to use this algorithm, we have to draw samples from the transition prior and evaluate the adaptive likelihood function pointwise. Figure 6 summarizes how this filter works with the chosen proposal distribution. First, we generate particles from the proposal distribution, in this case the prior distribution, p(y_k | y_{k−1}), and then their weights are updated according to the likelihood function, p(z_k | y_k). If the support of the proposal distribution does not cover the support of the likelihood function, only a few particles will be in the region of importance; thus, the number of particles has to be increased in order to correctly approximate the posterior distribution.

In this SIS algorithm, as k increases, the variance of the weights ω_k^i also increases, and therefore, after a certain number of steps, all but one particle will have negligible normalized weights. This problem is known as degeneracy [19]. To overcome this drawback, it is necessary to perform a resampling step when severe degeneracy is detected. A measure of degeneracy is the effective sample size, N_eff, estimated as

N̂_eff = 1 / Σ_{i=1}^{N} (ω_k^i)²,

where a small N_eff indicates severe degeneracy. Therefore, when degeneracy is detected, N samples with uniform weights are drawn from the discrete representation of the posterior, given by the previous particles and weights, yielding a variant of the SIS algorithm called the sampling-importance-resampling (SIR) algorithm [19,52].
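A compact sketch of the SIS/SIR mechanics just described (with a placeholder Gaussian likelihood standing in for the adaptive likelihood of (17), and all numeric values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000                                   # number of particles
true_d = 15.0                              # simulated true range (m)

particles = rng.uniform(5.0, 25.0, N)      # initial particles
weights = np.full(N, 1.0 / N)

def likelihood(z, d):
    """Placeholder pointwise likelihood; ALPA evaluates eq. (17) here."""
    return np.exp(-0.5 * (z - d) ** 2)

for _ in range(20):
    particles += rng.normal(0.0, 0.3, N)   # sample from the transition prior
    z = true_d + rng.normal(0.0, 1.0)      # new measurement
    weights *= likelihood(z, particles)    # weight update
    weights /= weights.sum()               # normalization
    n_eff = 1.0 / np.sum(weights ** 2)     # effective sample size
    if n_eff < N / 2:                      # resample on severe degeneracy (SIR)
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)

d_hat = np.sum(weights * particles)        # MMSE range estimate
```

Because the weight update only requires evaluating the likelihood pointwise, swapping the Gaussian placeholder for the kernel-based adaptive likelihood changes nothing else in the loop.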

Results
The goal of this section is to quantify the performance of the methods presented in the above sections, leading to the ALPA filter. In order to do that, we obtained experimental data in a real indoor scenario by using the systems described in [10] and [12], and we ran numerous Monte Carlo simulations. In the following, we compare the performance of the introduced techniques with conventional approaches, as well as with the Cramér-Rao lower bound (CRLB).
We use the dynamic and measurements models described above, together with the following state vector and prior information, depending on whether we estimate ranges or positions:
• Range estimation: we use a state vector y_k = (d_k, d'_k, d''_k). The standard deviation σ_d^{(3)} is 1 m/s³, which is roughly 50% of the maximum [18]. Furthermore, we add prior information about the first and second derivatives of the distance by considering them distributed as Gaussians N(0, σ_{d'}) and N(0, σ_{d''}), respectively, where σ_{d'} = 0.5 m/s and σ_{d''} = 0.5 m/s².
• Position estimation: we use a state vector y k = ( x k , v k , a k ) , where x k consists of the two-dimensional coordinates of the mobile target's position, and v k and a k are the velocity and the acceleration vectors. The same previous values for the deviations of the derivatives of the coordinates are used for dynamic and prior information.
For the experimental data, the target carried a laptop equipped with an IEEE 802.11b/g adapter and the measuring systems described in [10] and [12]. The anchors consisted of IEEE 802.11b/g access points (APs). In the RSS case, the anchors periodically sent beacon frames (at a frequency of MHz) and the RSS values were obtained from the RSS indicator at the target's adapter [12]. In the TOA case, the mobile target periodically sent request-to-send frames to each anchor (at a frequency of MHz), and a counter connected to the WLAN adapter saved the clock cycles elapsed between the request and the reception of the corresponding clear-to-send frame [10]. In the results presented in this section, we refer to the combination of RSS and TOA data at every time step as fusion.

Experimental results
As mentioned above, in a realistic scenario, NLOS propagation together with multipath effects constitutes the major drawback for localization in harsh environments. This section illustrates the behavior of the proposed algorithm along a typical path followed by a mobile target in an indoor scenario. We carried out a measurement campaign inside an office building cluttered with clusters of objects, with people moving freely in the measurement area. The propagation conditions were even harsher than the ones commonly found by an OBU placed within a car. Figure 8 shows the trajectory of 65 meters as well as the positions of the 4 APs. It took 100 seconds to complete the whole trajectory, receiving a new set of measurements every second (Δt = 1 s) from all the APs. As reflected in Figure 8, NLOS was always present when measuring with respect to AP3 and AP4, and there was LOS between the target and anchors AP1 and AP2 only at a small percentage of positions.
In Table 2, we compare the error achieved with the proposed ALPA range estimation method in the presented scenario to the error obtained with conventional approaches [15,24]. We specify the results for the RSS-only and TOA-only cases, and for their fusion. Specifically, we call:
• ML-RSS, ML-TOA, ML-Fusion: the range estimates obtained by means of the ML estimator. We utilize as likelihood function the convolution of the likelihood reported by the measurements (log-normal in the RSS case and Gaussian in the TOA case) and a Gaussian distribution corresponding to the bias. The likelihood for the fusion is computed from (18).
• AML-RSS, AML-TOA, AML-Fusion: the ranges that correspond to the maximum of the adaptive likelihood computed by means of Proposition 1, and of (18) in the fusion case.
• EKF-RSS, KF-TOA, EKF-Fusion: the result of applying EKF and KF filters for RSS and TOA measurements, respectively, using the same bias distributions as in the ML case, and the dynamic model given by (3).
• ALPA-RSS, ALPA-TOA, ALPA-Fusion: the range estimates obtained by the ALPA filtering described in Table 1, where N = 10 000 is the number of particles used.
We summarize for all these methods the quartiles of the absolute error in range estimates as well as the root mean squared error (RMSE), which incorporates both systematic (bias) and random errors. In order to study the influence of the number of measurements, M, on the final performance, all these statistics are shown for four different values. Analogously, in Figures 8-9 and Table 3, we summarize the results in position estimation. In this case, we call:
• ML-RSS, ML-TOA, ML-Fusion: the positions obtained with the ML distances and a trilateration technique based on the radical axes of the circles drawn at each anchor's position [10,[12][13].
• EKF-RSS, EKF-TOA, EKF-Fusion: the positions obtained by means of an EKF whose measurements model relates the measurements to the target's position.
• PF-RSS, PF-TOA, PF-Fusion: the result of applying the ALPA filter described in Table 1 to the positional-states, with N = 10 000 particles.

The results in Table 3 show the better performance of the proposed ALPA filter in all the analyzed scenarios, resulting, for example, in an RMSE of 2.82 meters when only 10 RSS and 10 TOA measurements are used, while previous experimental studies obtained RMSEs of around 4 meters using hundreds of measurements [12,28].

Simulation results
The CRLB provides a lower bound on the mean squared estimation error achievable by any unbiased estimator. In what follows, we use this bound to assess the optimality of the presented ALPA filter.
The Bayesian version of the CRLB is known as the Van Trees CRLB [53], or posterior CRLB, since it is obtained from the posterior distributions of the random state vector [54]. In our case, for each time instant k, the CRLB is

E{ (g(Z_k) − y_k)(g(Z_k) − y_k)^T } ⪰ J_k^{−1},

where g(Z_k) is an unbiased estimator of y_k and J_k is the Fisher information matrix (FIM), obtained as

J_k = −E{ ∇_{y_k} ∇_{y_k}^T log p(Y_k, Z_k) }.

Tichavský et al. proposed a recursive formula to compute the FIM [55]. For the particular case of the linear-Gaussian dynamic model in (3), with Q_k the covariance matrix of this model, the FIM is given by the recursion [19]

J_{k+1} = (Q_k + A J_k^{−1} A^T)^{−1} + J_{k+1}^z,  where  J_{k+1}^z = −E{ ∇_{y_{k+1}} ∇_{y_{k+1}}^T log p(z_{k+1} | y_{k+1}) }.  (24)

To start this recursion, we assume the initial density is Gaussian; then, the initial FIM coincides with the inverse of its covariance matrix. Figure 10 compares the RMSE obtained in range estimation by the proposed ALPA-Fusion filter with the RMSE obtained by the EKF-Fusion method, and with the square root of the CRLB. To obtain these curves, we simulated a trajectory of 85 positions and carried out 1 000 Monte Carlo experiments. Figure 10 corroborates the remarkable performance of the ALPA filter, since the corresponding curve is much closer to the CRLB than the line corresponding to the EKF error.

Figure 10. The near-optimal performance of the proposed ALPA filter in harsh environments is corroborated by comparison with the CRLB.
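For the linear-Gaussian special case, this FIM recursion can be computed in a few lines. The sketch below assumes a Gaussian measurement likelihood observing only the distance, so that J_{k+1}^z = H^T R^{−1} H; the noise levels and initial covariance are illustrative, not the chapter's experimental values:

```python
import numpy as np

dt = 1.0
A = np.array([[1.0, dt, dt**2 / 2], [0.0, 1.0, dt], [0.0, 0.0, 1.0]])  # DWPA
Q = 0.01 * np.eye(3)                 # process-noise covariance (illustrative)
H = np.array([[1.0, 0.0, 0.0]])      # the measurement observes the distance
R = np.array([[1.0]])                # measurement-noise variance (illustrative)

# Initial FIM: inverse of the (Gaussian) initial covariance
J = np.linalg.inv(np.diag([4.0, 0.25, 0.25]))

bounds = []
for _ in range(50):
    # Tichavsky recursion for linear-Gaussian models:
    # J_{k+1} = (Q + A J_k^{-1} A^T)^{-1} + H^T R^{-1} H
    J = np.linalg.inv(Q + A @ np.linalg.inv(J) @ A.T) + H.T @ np.linalg.inv(R) @ H
    bounds.append(float(np.sqrt(np.linalg.inv(J)[0, 0])))  # root-CRLB on range
```

The sequence of root-CRLB values decreases as information accumulates and then settles at a steady-state level set by the balance between process and measurement noise, which is the curve an estimator's RMSE is compared against in Figure 10.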

Complexity
The key issue in PFs is the exponential growth of computational complexity as a function of the dimension of the state vector, y_k, whereas the complexity of the EKF grows as the cube of the dimension [56].
For low-dimensional problems, the complexity of a PF remains similar to that of an EKF; however, for high-dimensional problems, PFs suffer from the curse of dimensionality [57]. Hence, PFs that track ranges instead of positions can be advantageous from a complexity point of view.
Moreover, from Proposition 1, the complexity of the likelihood grows exponentially with the number of samples. However, this complexity can be reduced by removing redundant components from the RSS and TOA pdfs, or from the resulting fusion mixture. To this end, different criteria, such as William's criterion [58], the Kullback-Leibler distance [59] or clustering [60], can be utilized. Therefore, considering the improvement achieved in range and position estimation with 5 and 10 measurements, the proposed ALPA filter could be a good choice for the design of VANETs that require low power consumption. In such cases, in order to save battery, the OBUs transmit only at discrete intervals; therefore, there is more time available for processing a smaller number of samples.

Conclusions
In this chapter, we have presented an adaptive likelihood function for robust data fusion in localization systems. Based on this likelihood, we have developed the ALPA filter for range and position estimation. This filter presents several advantages over conventional techniques:
1. it does not assume any parametric statistical model, utilizing the empirical distribution of the measurements at each time by means of Gaussian kernels;
2. it adaptively fuses RSS and TOA data, and is extensible to any other type of measurement;
3. it takes advantage of the relationship among positions in time by using Bayesian filtering;
4. it addresses the non-linear and non-Gaussian behavior of the measurements by using particle filtering.
These advantages result in a noticeable improvement with respect to conventional techniques, as corroborated by the experimental and simulation results. Under NLOS and multipath conditions, the ALPA filter obtains not only an RMSE in position estimation lower than 3 meters with only 10 RSS and 10 TOA measurements, but also an error remarkably close to the theoretical benchmark provided by the CRLB.
Therefore, the ALPA filter is a valuable choice to provide localization in V2I communication systems. Its extension to cooperative localization would also make localization possible in VANETs based on V2V communication.