Table 1. Confusion matrix describing the four events that can happen in the binary test performed by the acquisition process.
Advances in electronics and telecommunications are leading to complex systems able to efficiently use the available resources. Fast electronics, complex modulation schemes and correction codes enable transmissions on channels with unfavorable characteristics, coexistence between different services in the same frequency bands and high transmission rates. However, the complexity of such communications systems often prevents analytical characterizations. For example, figures of merit such as the BER are difficult to determine analytically for transmission schemes involving correction codes and communication channels with ISI and fading. In such cases, the system is characterized through Monte Carlo simulations [1, 2]. The Monte Carlo framework involves simulations of the whole system under analysis. For example, when considering a communications system, the whole transmission-reception chain is simulated. A large number of sequences are sent through the simulated system and the message recovered by the simulated receiver is compared to the original transmitted sequence. This comparison allows one to determine the average number of transmission errors and compute the BER.
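The error-counting principle can be sketched in a few lines; the chapter's own examples are implemented in MT, but the following Python fragment illustrates the same idea for a plain uncoded BPSK link over an AWGN channel (the modulation, number of bits and decision rule are illustrative assumptions, not the coded/ISI systems discussed above):

```python
import numpy as np

def monte_carlo_ber(ebn0_db, n_bits=200_000, rng=None):
    """Estimate the BER of BPSK over AWGN by transmitting random bits
    and counting decision errors at the receiver."""
    rng = np.random.default_rng(rng)
    bits = rng.integers(0, 2, n_bits)        # random message
    symbols = 1.0 - 2.0 * bits               # BPSK mapping: 0 -> +1, 1 -> -1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * ebn0))      # noise std for unit-energy symbols
    received = symbols + sigma * rng.standard_normal(n_bits)
    decisions = (received < 0).astype(int)   # hard decision
    return float(np.mean(decisions != bits)) # error counting
```

The estimate can be checked against the analytical value $Q(\sqrt{2E_b/N_0})$; the agreement degrades as the target BER approaches $1/n_{\mathrm{bits}}$, which is precisely the precision problem of Monte Carlo methods at low error rates.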
Monte Carlo simulations can be applied to almost any system, although their implementation and computational requirements can be significant. In addition, precision problems can arise when the quantity to be estimated is very small: for example, very low BER values require a correspondingly large number of simulation runs. Conversely, analytical models have a limited applicability and usually adopt approximations (e.g., model linearization) that can yield a poor description of the system under analysis.
In order to overcome the limitations of Monte Carlo and analytical techniques, semi-analytic approaches have been previously implemented [1, 2]. In a semi-analytic framework, the knowledge of the system under analysis is exploited to reduce the computational load and complexity that full Monte Carlo simulations would require. In this way, the strengths of both analytical and Monte Carlo methods are effectively combined, making semi-analytic techniques a powerful tool for the analysis of complex systems.
The characteristics and relationships among the three aforementioned methods are shown in Figure 1: semi-analytic approaches represent a good compromise in terms of applicability and complexity, combining the strengths of Monte Carlo and analytical approaches.
The main goal of this chapter is to provide a general overview of semi-analytic techniques for the simulation of communications systems. Specific emphasis is given to their implementation in MT and two examples from the communications context are analyzed in detail.
Despite their potential, semi-analytic techniques have received limited attention from the communications community: reference books on the simulation of communications systems dedicate only a few pages to these techniques. The focus is usually on the computation of the BER, which represents one of the first applications of semi-analytic techniques in communication system analysis [4, 5]. In this case, the model depicted in Figure 2 is adopted: the communications channel is modeled as a non-linear device, which distorts the signal component alone, with the addition of a noise term that is assumed to be Gaussian. This model is quite general and can be used to represent several communications channels.
A classical example is the model of a transmission chain where a TWTA is used to amplify the useful signal before transmission. The TWTA is highly non-linear and can lead to signal distortions. Since the signal is injected into the TWTA before transmission, the noise component entering the amplifier is negligible. In this case, the model depicted in Figure 2 is appropriate for describing the transmission chain including a non-linear amplifier.
The TWTA is a memory-less device and can be characterized using AM-AM and AM-PM conversion curves (AM = Amplitude Modulation, PM = Phase Modulation). When a base-band signal model is used, the amplifier input and output are complex quantities; moreover, the response of the device usually depends only on the amplitude (instantaneous power) of the input signal. AM-AM and AM-PM conversion curves define the relationship between the input/output signal amplitudes and phases as a function of the input amplitude. Using these conversion curves, it is possible to simulate the behavior of the TWTA and other non-linear devices.
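AM-AM and AM-PM conversion curves are often parameterized with the classical Saleh model; the Python sketch below applies the two curves to a complex baseband signal (the coefficient values are the ones commonly quoted for a typical TWTA and are assumptions here, not parameters from this chapter):

```python
import numpy as np

def saleh_amam(r, alpha_a=2.1587, beta_a=1.1517):
    """AM-AM conversion: output amplitude as a function of input amplitude."""
    return alpha_a * r / (1.0 + beta_a * r**2)

def saleh_ampm(r, alpha_p=4.0033, beta_p=9.1040):
    """AM-PM conversion: output phase shift (radians) vs input amplitude."""
    return alpha_p * r**2 / (1.0 + beta_p * r**2)

def twta(x):
    """Apply the memory-less non-linearity to a complex baseband signal:
    the response depends only on the instantaneous input amplitude."""
    r = np.abs(x)
    theta = np.angle(x)
    return saleh_amam(r) * np.exp(1j * (theta + saleh_ampm(r)))
```

For small input amplitudes the device behaves almost linearly (gain close to alpha_a and negligible phase rotation), while near saturation the AM-AM curve compresses and the AM-PM term rotates the constellation.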
In the semi-analytic framework, the additivity of the noise component is exploited to compute the BER. More specifically, only the signal transmission chain is simulated and, for each possible signal symbol, the energy per bit, $E_b(i)$, is computed. Since the noise properties are known, the BER for the $i$th symbol, $P_b(i)$, is given by

$$ P_b(i) = Q\left(\sqrt{\frac{2 E_b(i)}{N_0}}\right) \qquad (1) $$

where $E_b(i)$ has been obtained by simulating the transmission chain (including the non-linear device) in the absence of noise and transmitting the $i$th symbol. The parameter $N_0$ is the noise power spectral density and is a known value of the system. Finally, the BER of the system is obtained from

$$ P_b = \sum_{i=1}^{M} p_i\, P_b(i) \qquad (2) $$

where $p_i$ is the probability that the $i$th symbol will be transmitted and $M$ is the number of symbols of the signal constellation.
This simple example clearly illustrates the principles of semi-analytic techniques: the analytical knowledge of the system is exploited to reduce the computation load and complexity that full Monte Carlo simulations would require. In this case, only the transmission of the signal component is simulated.
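In code, the semi-analytic BER computation reduces to a few lines. The Python sketch below assumes antipodal (BPSK-like) decisions, so that the per-symbol BER is $Q(\sqrt{2E_b(i)/N_0})$; the per-symbol energies are the quantities measured by a noiseless simulation of the chain:

```python
import numpy as np
from scipy.special import erfc

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def semi_analytic_ber(symbol_ebs, n0, probs=None):
    """Combine noiseless-simulation energies with the analytical Gaussian model.

    symbol_ebs : per-symbol energy per bit, obtained by simulating the
                 (possibly non-linear) chain WITHOUT noise
    n0         : known noise power spectral density
    probs      : symbol transmission probabilities (uniform if None)
    """
    symbol_ebs = np.asarray(symbol_ebs, dtype=float)
    if probs is None:
        probs = np.full(symbol_ebs.shape, 1.0 / symbol_ebs.size)
    per_symbol_ber = q_func(np.sqrt(2.0 * symbol_ebs / n0))  # Gaussian noise model
    return float(np.dot(probs, per_symbol_ber))
```

No error counting is needed: the noise never has to be generated, only its known statistics are used.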
In the literature, several generalizations of the aforementioned BER computation technique have been proposed: for example, the case where the noise term at the input of the non-linear amplifier is not negligible, addressed through an equivalent model in which the input noise is propagated after the non-linearity, as well as the presence of a bandpass channel. More recently, a methodology for estimating the BER in the presence of ISI has been proposed. All these examples show the potential and flexibility of the semi-analytic approach.
1.1. Building blocks
When considering the previous example, it is possible to identify three building blocks that play different roles in the semi-analytic framework:

- The simulation block corresponds to the part of the system that is actually simulated. In the previous example, this block corresponds to the signal generation, the non-linear amplifier and the correlation receiver simulation. These blocks were used to determine the decision variable employed for recovering the transmitted symbol.
- The analytical model exploits the properties of the system to determine the quantities of interest. In the previous example, the fact that the noise introduced by the communication channel is Gaussian was exploited to determine the BER as a function of the energy per bit, $E_b$, of each transmitted symbol.
- The estimation block is used as the interface between the simulated and analytical parts of the system. In the BER computation case, the simulation part allows one to generate the different decision variables, whereas the analytical model is expressed as a function of the energy per bit. The estimation block is used to determine $E_b$ from the simulated decision variables.
The three functional blocks can be connected according to different configurations leading to different types of semi-analytic approaches. In the next section, two of these configurations are briefly discussed. Examples for each type of semi-analytic system are given in Section 2 and Section 3.
1.2. Main configurations
When considering the BER example, it is possible to note that the simulation, estimation and analytical blocks are connected in series. The simulation block is used first to compute the different decision variables. The estimation block determines the $E_b$ associated with each variable and, finally, the analytical model is used to compute the BER from the $E_b/N_0$.
This type of configuration is defined here as sequential since there is no feedback between the different blocks and each element of the chain is run sequentially. The principle of this configuration is shown in the upper part of Figure 3.
A second type of configuration has been recently considered for the analysis of tracking loops in DSSS and GNSS receivers. The most computationally demanding operation in a DSSS/GNSS receiver is despreading, i.e., the correlation of the incoming samples with local replicas of the code and carrier. This operation is performed by the ID (Integrate and Dump) blocks, which rely on simple operations that can be analytically modeled. For this reason, semi-analytic models exploiting the knowledge of the ID blocks and simulating only the non-linear parts of the system have been developed [7, 8, 9, 10]. This has resulted in efficient analysis tools, which require reduced processing time while preserving the applicability of Monte Carlo simulations.
Different techniques for modeling the outputs of the ID blocks have been suggested. One approach modeled the correlator outputs evaluated by the ID blocks as linear combinations of independent Gaussian random variables; correlation among the different correlators was obtained by using, for the generation of the different ID outputs, a subset of the same random variables. This technique becomes complex as the number of correlators increases. Another attempt assumed that the correlator outputs were independent which, in general, is not a realistic condition. Finally, [9, 10] suggested the use of a technique based on the Cholesky decomposition. This approach allows one to easily generate an arbitrary number of correlated Gaussian random variables. In this way, [9, 10] were able to simulate advanced tracking loops for new GNSS signals.
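The Cholesky-based generation of correlated Gaussian variables can be sketched as follows (Python for illustration; the covariance matrix in the test below is a made-up example, not one taken from a real tracking loop):

```python
import numpy as np

def correlated_gaussians(cov, n_samples, rng=None):
    """Generate zero-mean Gaussian vectors with covariance `cov`
    using the Cholesky factorization cov = L @ L.T."""
    rng = np.random.default_rng(rng)
    L = np.linalg.cholesky(np.asarray(cov, dtype=float))
    white = rng.standard_normal((L.shape[0], n_samples))  # iid N(0, 1)
    return L @ white                                      # colored samples
```

Because `L @ white` is a linear transformation of white Gaussian noise, the output covariance is exactly `L @ L.T = cov`, for any number of correlators.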
Regardless of the type of approach used for modeling the correlator outputs, the aforementioned semi-analytic configuration can be represented as in the bottom part of Figure 3. In this case, an analytical model is used to generate quantities that will be propagated by simulation. In the tracking loop case, an analytical model is used to generate the correlator outputs that are then processed through simulations. The non-linear parts of the system are fully simulated and quantities such as the loop discriminator and filter outputs are computed. Finally, an estimation block is used to interface the simulation and analytical components of the semi-analytic scheme. A new estimate of the signal parameters is obtained and used to generate new correlator outputs. The estimation block is also used to compute performance metrics such as tracking jitter or MTLL .
The three functional blocks described in Section 1.1 are connected in a loop and thus this type of configuration is named closed-loop approach. The technique developed by [9, 10] and the closed-loop approach will be detailed in Section 3.
2. Sequential approach: The inter-system interference case
As shown in Figure 3, the sequential approach first requires a simulation block to generate the random processes and system functions that cannot be analytically described. Subsequently, the simulated processes are employed by the estimation unit to obtain the key parameters required by the analytical model to compute the metrics of interest. In the analytical model, the estimated parameters are plugged into mathematical expressions to obtain the desired final metrics. In the sequential architecture, the gain in computational load mainly depends on the simplifications allowed by the analytical model. The use of such a model allows one to simulate only a part of the system and possibly avoid computationally demanding error counting processes.
The computational complexity of Monte Carlo simulations increases significantly when two or more communication systems coexist within the same environment. In this case, it is necessary to account for the interaction of the different systems and determine potential inter-system interference. In addition, computational requirements of Monte Carlo simulations increase dramatically with the number of random elements included in each block of the communication chain. These requirements can be considerably reduced by adopting semi-analytic techniques in which the evaluated metrics are analytically expressed as a function of parameters estimated through simulations. This principle is the core idea behind the sequential semi-analytic approach.
In order to better illustrate the principles of the semi-analytic sequential approach, an inter-system interference scenario is considered in this section: a satellite navigation receiver affected by interference generated by a communications system. The primary system (i.e., the victim system) is a GPS L1 receiver affected by an interference signal caused by third order harmonics of a DVB-T signal. A comprehensive description of the sequential approach applied to this inter-system interference case is provided in the following.
The reception of GNSS signals is challenging due to low signal power, possible severe channel conditions and the presence of RF interference. The presence of RF interference can be particularly troublesome and the performance of a GNSS receiver can vary significantly depending on the type of interference. For this reason, significant research efforts have been devoted to the characterization of the receiver performance in the presence of different types of interference [13, 14]. Furthermore, the impact of interference originated by specific communication technologies, such as UWB transmissions , DME signals  and DVB-T harmonics [17, 18], has been thoroughly investigated. It is noted that the interference impact strongly depends on the strategy adopted by a GNSS receiver for processing the useful signals. Moreover, different impacts are expected depending on the receiver operating mode. The first task of a GNSS receiver is to determine the presence of a specific GNSS signal. This task is accomplished by the acquisition block that implements a statistical test for the detection of useful signals. After acquisition, the useful GNSS signals are passed to the tracking stage that refines the estimates of different signal parameters. Since acquisition and tracking implement different processing strategies, RF interference will affect these two receiver blocks differently. In the following, the acquisition stage is considered. A semi-analytic approach for the analysis of GNSS tracking loops is discussed in Section 3.
The acquisition of a GNSS signal can be formulated as a classical detection problem , where the signal of interest is buried in noise. The outcome of the acquisition process is twofold. First, a decision relative to the signal presence is provided. If the signal is present, a rough estimate of signal parameters (defined in the following) is also obtained. The received useful GNSS signal, which is impaired by AWGN and interference, is processed by the acquisition block yielding a decision variable. If the decision variable is greater than a decision threshold the signal presence is declared. This decision variable is calculated by using the digital samples provided by the receiver front-end. The signal model and the acquisition process are briefly summarized in the following sections.
2.1. The GNSS signal
The signal at the input of a GNSS receiver, in a one-path AWGN channel and in the presence of RF interference, can be modeled as

$$ r(t) = \sum_{l=0}^{L-1} s_l(t) + i(t) + \eta(t) \qquad (3) $$

where $s_l(t)$ is the signal transmitted by the $l$th GNSS L1 satellite, $L$ is the total number of satellites in view, $i(t)$ is the received interference signal and $\eta(t)$ is the noise term.
Each useful signal, $s_l(t)$, can be expressed as

$$ s_l(t) = \sqrt{2 C_l}\, d_l(t - \tau_l)\, c_l(t - \tau_l) \cos\left(2\pi (f_{RF} + f_{d,l}) t + \varphi_l\right) \qquad (4) $$

where $C_l$ is the received signal power, $d_l(\cdot)$ is the navigation message, $c_l(\cdot)$ is the pseudo-random spreading sequence and $\tau_l$, $f_{d,l}$ and $\varphi_l$ are the code delay, Doppler frequency and carrier phase introduced by the communication channel; $f_{RF}$ is the carrier frequency.
It is noted that GNSS signals adopt a DSSS modulation. The pseudo-random sequences, $c_l(\cdot)$, allow the simultaneous transmission of several signals in the same band. Moreover, these sequences are characterized by sharp correlation functions that allow the precise measurement of the signal travel time. The travel time is then converted into distances, which allow a GNSS receiver to determine its position.
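The sharpness of the correlation function is easy to verify numerically; the sketch below uses a random ±1 sequence of length 1023 as a stand-in for an actual GPS C/A (Gold) code:

```python
import numpy as np

def circular_autocorrelation(code):
    """Normalized circular autocorrelation of a +/-1 spreading sequence,
    computed efficiently via the FFT."""
    code = np.asarray(code, dtype=float)
    spectrum = np.fft.fft(code)
    return np.fft.ifft(spectrum * np.conj(spectrum)).real / code.size

rng = np.random.default_rng(0)
prn = 1.0 - 2.0 * rng.integers(0, 2, 1023)   # random +/-1 chips (C/A-code stand-in)
acf = circular_autocorrelation(prn)
```

The autocorrelation is 1 at zero lag and small at every other lag, which is what allows a receiver to localize the code delay, and hence the signal travel time, precisely.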
The pseudo-random sequence, $c_l(t)$, is composed of several terms including a primary spreading sequence and a subcarrier:

$$ c_l(t) = \sum_{i=-\infty}^{+\infty} c_l[\,|i|_{N_c}\,]\, s_b(t - i T_c) \qquad (5) $$

The signal $s_b(t)$ in (5) represents the subcarrier of duration $T_c$, which determines the spectral characteristics of the transmitted GNSS signal. The GPS L1 C/A component is BPSK modulated, whereas the Galileo E1 signal adopts a CBOC scheme. The sequence $c_l[\cdot]$, of length $N_c$, defines the primary spreading code of the $l$th GNSS signal. In the following, only the BPSK case is considered; the results can be easily extended to different subcarriers.
A GNSS receiver is able to process the useful signals independently since the spreading codes are quasi-orthogonal. Therefore, expression (3) can be simplified to

$$ r(t) = s(t) + i(t) + \eta(t) \qquad (6) $$

where the index $l$ has been dropped for ease of notation.
The received signal in (6) is filtered and down-converted by the receiver front-end. Filtering is of particular importance since it determines which portion of the interfering signal, $i(t)$, will effectively enter the receiver. After down-conversion and filtering, the input signal is sampled and quantized. In this analysis, the impact of quantization and sampling is neglected. After these operations, (6) becomes:

$$ r_{BB}[n] = y_{BB}[n] + i_{BB}[n] + \eta_{BB}[n] \qquad (7) $$

where the notation $[n]$ is used to denote discrete time sequences sampled at the frequency $f_s$. In addition, the index "BB" is used to denote a filtered signal down-converted to baseband. Furthermore, the useful signal in (7) can be written as

$$ y_{BB}[n] = \sqrt{C}\, d[n - \tau/T_s]\, c[n - \tau/T_s]\, e^{\,j 2\pi f_d n T_s + j\varphi} \qquad (8) $$

where $T_s = 1/f_s$ is the sampling interval.
The noise term, $\eta_{BB}[n]$, is AWGN with variance $\sigma_{BB}^2$. This variance depends on the filtering, down-conversion and sampling strategy applied by the receiver front-end and can be expressed as $\sigma_{BB}^2 = N_0 B_{fe}$, where $B_{fe}$ is the front-end bandwidth and $N_0$ is the PSD of the input noise. The ratio between the carrier power, $C$, and the noise PSD, $N_0$, defines the $C/N_0$, one of the main signal quality indicators used in GNSS.
2.2. The DVB-T interfering signal
The interference term in (6), $i(t)$, originates from DVB-T emissions. The DVB-T system is the European standard for the broadcasting of digital terrestrial television signals and has been adopted in many countries, mainly in Europe, Asia and Australia. The standard employs an OFDM-based modulation scheme operating in the VHF III, UHF IV and UHF V bands. It is worth noting that none of these bands falls within the bands allocated to GNSS signals. However, the second order harmonics of UHF IV and the third order harmonics of UHF V signals could coincide with the GPS L1 band and thus cause harmful interference. The case of the third order harmonics of a DVB-T signal is considered here. The DVB-T transmitted signal can be represented as
where $M$ is the modulation order, $k$ is the subcarrier index, $m$ is the symbol index, $N_s$ represents the total number of transmitted symbols and $c_{m,k}$ models the $k$th constellation point of the $m$th symbol. Here, the term "subcarrier" should not be confused with the subcarrier used to modulate the GNSS signals in (5). In the OFDM context, several components are transmitted in parallel on different overlapping frequency bands, and the term subcarrier denotes each individual transmitted component. In GNSS, the subcarrier is an additional component that modulates the transmitted signal and plays a role analogous to the carrier used for the signal up-conversion.
Third order harmonics are the consequence of the malfunctioning of the transmitter electronics. In particular, the presence of these harmonics is due to the non-linearities of an amplifier. The output of an amplifier can be modeled using a polynomial expansion of the amplifier input/output function:

$$ y(t) = \sum_{k=1}^{K} a_k\, x^k(t) \qquad (10) $$

where $x(t)$ is the amplifier input, $y(t)$ is its output and the $a_k$ are the polynomial coefficients of the Taylor series expansion of the amplifier input/output function. This type of model is an alternative to the AM-AM and AM-PM conversion functions discussed in Section 1 for the TWTA case.
The terms of order $k > 1$ in (10) model the amplifier non-linearities and the ratios $a_k / a_1$ are expected to be small for $k > 1$. Since only the third order harmonics fall into the GPS L1 band, the interference signal at the antenna of a GNSS receiver is given by the third order term of (10), $a_3\, x^3(t)$.
This signal is filtered and down-converted by the receiver front-end, and its filtered version will affect the receiver operations. Finally, the interference term $i_{BB}[n]$ in (7) can be modeled as the sampled, filtered and down-converted version of this third order harmonic.
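The harmonic generation mechanism of the polynomial model in (10) is easy to visualize numerically: feeding a pure tone through a cubic non-linearity produces spectral lines at twice and three times the input frequency. In the Python sketch below, the coefficients, tone frequency and sampling rate are arbitrary illustrative values:

```python
import numpy as np

# Polynomial amplifier model y = a1*x + a2*x^2 + a3*x^3 (illustrative coefficients)
a1, a2, a3 = 1.0, 0.05, 0.02

fs = 10_000.0                       # sampling frequency [Hz] (arbitrary)
f0 = 1_000.0                        # input tone frequency [Hz] (arbitrary)
t = np.arange(4000) / fs            # 4000 samples -> f0 and 3*f0 fall on FFT bins
x = np.cos(2 * np.pi * f0 * t)      # pure-tone stand-in for the transmitted signal
y = a1 * x + a2 * x**2 + a3 * x**3  # non-linear amplifier output

# One-sided amplitude spectrum: the cubic term puts a line at 3*f0
# (cos^3 = (3*cos + cos(3.))/4) and the quadratic term at DC and 2*f0.
spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
```

Even with a small cubic coefficient, a clear spectral line appears at three times the input frequency, which is exactly how UHF V transmissions can generate in-band interference at GPS L1.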
2.3. The acquisition process
After signal conditioning, the sequence $r_{BB}[n]$ is correlated with local replicas of the useful signal code and carrier as shown in Figure 4. Since the code delay, $\tau$, and the Doppler frequency, $f_d$, of the useful signal in (8) are unknown to the receiver, several delays and frequencies are tested by the acquisition block. In addition to this, several correlators, computed using subsequent portions of the input signal, can be used in order to produce a decision variable less affected by noise and interference. In this way, the output of the $k$th complex correlator can be expressed as

$$ y_k = \frac{1}{N} \sum_{n = kN}^{(k+1)N - 1} r_{BB}[n]\, c[n - \hat\tau / T_s]\, e^{\,-j 2\pi \hat f_d n T_s - j\hat\varphi} \qquad (12) $$

where $\hat\tau$, $\hat f_d$ and $\hat\varphi$ are the code delay, Doppler frequency and carrier phase tested by the receiver. The parameter $N$ is the number of samples used for computing a single correlation output and defines the coherent integration time, $T_{coh} = N T_s$. It is noted that the computation of correlation outputs is essential for the proper functioning of a GNSS receiver: they are used in both acquisition and tracking modes. To further improve the acquisition performance, non-coherent integration can be implemented as illustrated in Figure 4. More specifically, the impact of the navigation message, $d[\cdot]$, is removed through squaring, $|y_k|^2$, and the final decision variable is computed as

$$ D = \sum_{k=0}^{K-1} |y_k|^2 \qquad (13) $$

where $K$ is the total number of correlation samples that are non-coherently integrated. It should be noted that for $K = 1$ only coherent integration is used. In order to determine the signal presence, the receiver compares $D$ with a decision threshold, $\beta$. If $D$ is greater than $\beta$, then the useful signal is declared present.
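The non-coherent combination and threshold test can be sketched directly from their definitions. In the Python fragment below, the correlator outputs are drawn from the Gaussian model discussed later in this section; the SNR, noise variance and threshold values are illustrative assumptions:

```python
import numpy as np

def simulate_correlators(k, snr_lin, sigma2=1.0, signal_present=True, rng=None):
    """Draw K complex correlator outputs: noise only (H0) or signal plus noise (H1).

    Gaussian model with per-component variance sigma2; under H1 the real part
    carries a constant mean set by the correlator output SNR (assumption)."""
    rng = np.random.default_rng(rng)
    mean = np.sqrt(snr_lin * sigma2) if signal_present else 0.0
    re = mean + np.sqrt(sigma2) * rng.standard_normal(k)
    im = np.sqrt(sigma2) * rng.standard_normal(k)
    return re + 1j * im

def decision_variable(correlators):
    """Non-coherent combination: D = sum over k of |y_k|^2."""
    return float(np.sum(np.abs(np.asarray(correlators)) ** 2))

def acquire(correlators, threshold):
    """Declare the signal present when D exceeds the decision threshold."""
    return decision_variable(correlators) > threshold
```

Squaring removes the unknown carrier phase and data sign at the cost of the well-known squaring loss, which is why the choice of K trades sensitivity against acquisition time.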
It is noted that, as in any binary test, two hypotheses are possible:
The null hypothesis, $H_0$, assumes that the correlator outputs, $y_k$, are made of noise alone. Since the pseudo-random sequences, $c_l[\cdot]$, are selected to have good autocorrelation properties, if the code delay and Doppler frequency tested by the receiver do not match the parameters of the input signal, then the useful signal component is almost completely filtered out at the correlator output. Thus, also in this case, the $H_0$ hypothesis is verified.
Furthermore, $H_1$ is the alternative hypothesis and assumes that the signal is present and the local code and carrier replicas are perfectly aligned. If $H_1$ is declared, then rough estimates of $\tau$ and $f_d$ are also obtained.
Depending on the result of the test, two decisions can be taken by the receiver
and the four events described in Table 1 can occur.
| | $H_0$ decided | $H_1$ decided |
| --- | --- | --- |
| Signal absent | Signal absence correctly declared | False alarm |
| Signal present | Missed detection | Signal detection |
The probabilities of missed detection and of correctly declaring the signal absent are obtained as $1 - P_d$ and $1 - P_{fa}$, respectively. The performance of the acquisition process is characterized in terms of the ROC, which plots the detection probability, $P_d$, as a function of the false alarm probability, $P_{fa}$. ROC curves capture the behavior of a detector as a function of the different decision thresholds.
2.4. The semi-analytic approach
The goal of the sequential semi-analytic approach considered in this section is the evaluation of the ROC in the presence of DVB-T interference. The full Monte Carlo approach would consist of simulating the full transmission/reception scheme shown in Figure 5 and generating several realizations of $D$ under both $H_0$ and $H_1$. Probabilities of detection and false alarm would then be determined through error counting techniques. This approach is computationally demanding and does not exploit the analytical knowledge of the system. More specifically, under the hypothesis that the correlator outputs are iid complex Gaussian random variables with independent real and imaginary parts, it is possible to show that

$$ P_{fa}(\beta) = \exp\left\{-\frac{\beta}{2\sigma^2}\right\} \sum_{k=0}^{K-1} \frac{1}{k!}\left(\frac{\beta}{2\sigma^2}\right)^k \qquad (16) $$

$$ P_d(\beta) = Q_K\left(\sqrt{K\lambda},\, \sqrt{\frac{\beta}{\sigma^2}}\right) \qquad (17) $$

where $Q_K(\cdot,\cdot)$ is the generalized Marcum Q-function of order $K$, defined in terms of $I_{K-1}(\cdot)$, the modified Bessel function of first kind and order $K-1$. In (16) and (17), $\sigma^2$ is the variance of the real and imaginary parts of $y_k$. The parameter $\lambda$ is given by

$$ \lambda = \frac{C}{\sigma^2} \qquad (18) $$

and defines the SNR at the correlator outputs.
The correlator outputs can be considered iid complex Gaussian random variables even in the presence of DVB-T interference. More specifically, the large number of terms in the sum performed in (12) allows one to invoke the central limit theorem and assume that $y_k$ is Gaussian. Independence derives from the down-sampling performed by the correlators: since only one correlator output is produced every $N$ samples, the statistical correlation between subsequent correlator outputs is significantly reduced. The lack of correlation translates into independence for Gaussian random variables. Thus, models (16) and (17) can be used and the only parameters that need to be estimated are $\sigma^2$ and $\lambda$.
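Models (16) and (17) can be evaluated without any Monte Carlo run. A convenient way to do this in Python (scipy) exploits the equivalence between the generalized Marcum Q-function of order K and the survival function of a non-central chi-square distribution with 2K degrees of freedom; the parameter values in the test are arbitrary:

```python
import numpy as np
from scipy.stats import chi2, ncx2

def roc_semi_analytic(k, snr_lin, sigma2, thresholds):
    """Semi-analytic ROC for the non-coherent integration of K correlators.

    Under H0, D/sigma2 is central chi-square with 2K degrees of freedom;
    under H1 it is non-central chi-square with non-centrality K*snr_lin,
    which is equivalent to a generalized Marcum Q-function of order K.
    """
    x = np.asarray(thresholds, dtype=float) / sigma2
    pfa = chi2.sf(x, df=2 * k)                 # false alarm probability, (16)
    pd = ncx2.sf(x, df=2 * k, nc=k * snr_lin)  # detection probability, (17)
    return pfa, pd
```

Sweeping the threshold values, e.g. `roc_semi_analytic(2, 10.0, 1.0, np.linspace(0.0, 60.0, 100))`, traces the whole ROC curve at negligible computational cost.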
The analytical knowledge of the system can be further exploited to simplify the evaluation of $\sigma^2$ and $\lambda$. In particular, since $i_{BB}[n]$ and $\eta_{BB}[n]$ are modeled as zero mean random processes, they only contribute to the variance of the correlator outputs. Thus, neglecting residual errors due to delay and frequency partial misalignments, the signal power at the correlator output is equal to $C$, the useful signal power, which is one of the known parameters of the system. Thus, $\lambda$ can be derived from $C$ and $\sigma^2$.
Finally, exploiting the linearity of the correlation process, it is possible to express $y_k$ as a linear combination of a useful signal term, derived from $y_{BB}[n]$, a noise term, derived from $\eta_{BB}[n]$, and an interference term, derived from $i_{BB}[n]$. The variance $\sigma^2$ can be obtained as

$$ \sigma^2 = \frac{1}{2 T_{coh}}\left( N_0 + C_i \int G_i(f)\, G_s(f)\, df \right) \qquad (24) $$

where $C_i$ is the received interference power. The function $G_i(f)$ in (24) is the normalized PSD of the DVB-T interference signal after front-end filtering. In addition, $G_i(f)$ is normalized such that it has a unit integral. The function $G_s(f)$ models the effect of the correlation on the interfering signal. Correlation can be modeled as an additional filtering stage and $G_s(f)$ can be shown to be well approximated by the PSD of the subcarrier used in the despreading process. Also, $G_s(f)$ is normalized to have a unit integral. It is noted that different subcarriers lead to different $G_s(f)$ and thus, the interference will have different effects depending on the type of modulation considered.
The only unknown parameter in the previous equation is the SSC, $\int G_i(f)\, G_s(f)\, df$, which needs to be estimated using Monte Carlo simulations. For this purpose, the interfering DVB-T signal is fully simulated. The resulting signal is filtered by the receiver front-end and the sequence $i_{BB}[n]$ is obtained. The samples of $i_{BB}[n]$ are used to estimate the normalized PSD, $G_i(f)$. This can be easily obtained using the MT functions developed for spectral analysis; in this case, the pwelch function is used. The estimated PSD is then used to compute the SSC, which in turn determines the system ROC.
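The chapter's implementation uses MT's pwelch; the same PSD-based SSC estimation can be sketched in Python with scipy.signal.welch (the function name and the white-noise sanity check are illustrative assumptions):

```python
import numpy as np
from scipy.signal import welch

def spectral_separation_coefficient(interference, fs, gs_psd, nperseg=4096):
    """Estimate the SSC, the integral of G_i(f) * G_s(f) over frequency.

    interference : simulated, front-end filtered interference samples
    fs           : sampling frequency
    gs_psd       : callable returning the normalized useful-signal PSD G_s(f)
    """
    freqs, gi = welch(interference, fs=fs, nperseg=nperseg)
    df = freqs[1] - freqs[0]
    gi /= np.sum(gi) * df                       # unit-area normalization of G_i(f)
    return float(np.sum(gi * gs_psd(freqs)) * df)
```

As a sanity check, integrating any unit-area interference PSD against a flat $G_s(f)$ equal to one returns an SSC of exactly one, since the normalization forces the remaining integral to unity.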
The developed semi-analytic approach is shown in Figure 6 where the simulation, estimation and analytic components are clearly highlighted.
2.5. Performance comparison
A comparison between a full Monte Carlo simulation and the semi-analytic technique, implemented for the evaluation of the acquisition performance of a GPS L1 receiver impaired by third order harmonics of a DVB-T signal, is presented in this section. Initially, the DVB-T interfering signal is generated in the time domain in MT by following the DVB-T standard and the non-linear amplifier model, as illustrated in Figure 5. Note that the simulation of the interfering signal is required for both the Monte Carlo and semi-analytic techniques. Subsequently, the estimated PSD of the interfering signal, needed for the estimation of the SSC in the semi-analytic method, is obtained by applying the pwelch function of MT. A realization of the normalized PSD of the interfering signal is depicted in Figure 7. The centre frequency of the interfering signal is set to $f_{L1} + \Delta f$, where $\Delta f$ is the frequency shift of the interference signal with respect to the centre frequency of the GPS L1 signal. The impact of selecting different values of $\Delta f$ on the acquisition performance of a GPS L1 receiver has been analyzed in the literature. Furthermore, the frequency response of the GPS L1 front-end filter is also plotted in Figure 7. In this case, the selected filter bandwidth is MHz.
Sample results comparing ROC curves obtained using semi-analytic and Monte Carlo simulations are shown in Figure 8. The parameters used for the analysis are reported in Table 2. From Figure 8, it can be observed that the Monte Carlo and semi-analytic approaches provide similar results: the curves obtained using the two methods overlap. However, the semi-analytic approach provides increased precision, particularly when small probabilities need to be estimated, and a significant reduction in terms of computational complexity. Full Monte Carlo simulations require the implementation of the full transmission/reception chain and the evaluation of the ROC with a computational complexity significantly higher than that of the semi-analytic approach described above.
| Parameter | Value |
| --- | --- |
| Coherent integration time | ms |
| Interference to signal power ratio | dB |
| Centre frequency difference, $\Delta f$ | Hz |
| Number of Monte Carlo simulation runs | |
Additional results relative to the impact of DVB-T interference on GNSS can be found in the literature.
3. Closed-loop approach: Digital tracking loops
As anticipated in Section 1, a second configuration, called closed-loop approach, has been recently proposed for the simulation of digital tracking loops in DSSS/GNSS receivers. The SATLSim toolbox is a set of MT functions implementing the semi-analytic closed-loop approach for the analysis of digital tracking loops. The SATLSim toolbox has been developed by [9, 10] and can be downloaded from the following websites:
In the following, the closed-loop approach for the simulation of digital tracking loops is considered and the MT code developed in the SATLSim toolbox is briefly analyzed.
A description of the correlator model used for reducing the computational complexity of the system is provided first. The samples given by (7) at the input of a GNSS receiver are processed by the different functional blocks with different objectives. The acquisition process described in Section 2 is the first stage of a GNSS receiver and has the goal of determining the signal presence and providing a rough estimate of its parameters, which include the code delay and Doppler frequency.
If the signal is successfully acquired then different tracking loops are used to refine the estimate of the signal parameters. A DLL is usually used to provide accurate estimates of the code delay, , and track delay variations due to the relative motion between receiver and satellite. The Doppler frequency, , is recovered using either a FLL or a PLL. If a PLL is used then the carrier phase, , is also estimated. The code delay and carrier phase allow the receiver to determine its position whereas the Doppler frequency can be used for computing the user velocity.
As indicated in Section 2, a subcarrier can be used for shaping the spectrum of the transmitted GNSS signal and improving its robustness against multipath. The presence of a subcarrier makes code tracking more complex since the correlation function of the transmitted signal may have multiple peaks. More specifically, fine delay estimation is obtained by maximizing the correlation between the input signal and the local code: the correlation function is maximized only when the delay of the locally generated code matches the delay of the input signal. The presence of several peaks in the correlation function may cause the DLL to converge to a local maximum, introducing biases in the delay estimate. For this reason, several solutions have been proposed to avoid lock on secondary correlation peaks [26, 27]. An effective solution is represented by the SLL. In this case, the subcarrier is seen as a periodic waveform that further modulates the transmitted signal. The delays of code and subcarrier are decoupled and estimated separately. In this way, the ambiguous one-dimensional signal correlation is projected onto an unambiguous bi-dimensional function. In the following, the joint simulation of DLL and SLL is considered.
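The multi-peak problem can be reproduced with a few lines of Python: a BOC(1,1)-style square-wave subcarrier applied to random ±1 chips (a stand-in for a real ranging code) turns the single triangular correlation peak of a BPSK signal into a main peak flanked by significant negative side peaks at half-chip offsets:

```python
import numpy as np

samples_per_chip = 20
n_chips = 2046
rng = np.random.default_rng(1)
chips = 1.0 - 2.0 * rng.integers(0, 2, n_chips)    # random +/-1 ranging code

# BPSK: rectangular chips. BOC(1,1)-style: each chip is further modulated
# by one period of a square-wave subcarrier.
rect = np.repeat(chips, samples_per_chip)
subcarrier = np.tile(np.r_[np.ones(samples_per_chip // 2),
                           -np.ones(samples_per_chip // 2)], n_chips)
boc = rect * subcarrier

def circ_acf(x, max_lag):
    """Normalized circular autocorrelation for lags 0..max_lag."""
    spectrum = np.fft.fft(x)
    r = np.fft.ifft(spectrum * np.conj(spectrum)).real / len(x)
    return r[: max_lag + 1]

r_rect = circ_acf(rect, samples_per_chip)   # single triangular peak
r_boc = circ_acf(boc, samples_per_chip)     # main peak plus negative side peaks
```

The negative side peak around half a chip of offset is exactly the ambiguity that may capture a conventional DLL, and that the SLL removes by estimating code and subcarrier delays separately.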
In a GNSS tracking loop, the incoming signal is correlated with several locally generated code and carrier replicas and different correlator outputs are produced. This process is analogous to the correlation operations described in Section 2 and is performed by the ID blocks.
Each correlator output is a function of the input signal and of the parameters previously estimated by the tracking loop. The correlator outputs are passed to the non-linear discriminator, which produces a first estimate of the tracking error that the loop is trying to minimize. The tracking error is filtered and passed to the NCO, which is used to generate new local signal replicas.
Efficient tracking loop simulations can be obtained by substituting the ID blocks with their analytical model. More specifically, a correlator output can be modeled (in standard notation) as

$$q_k = A\,R(\Delta\tau_k)\,\mathrm{sinc}(\Delta f_k T_I)\,e^{j\Delta\varphi_k} + \eta_k \qquad (26)$$

where $A$ is the signal amplitude, $R(\cdot)$ is the code correlation function, $T_I$ is the coherent integration time, $\Delta\tau_k$, $\Delta f_k$ and $\Delta\varphi_k$ are the residual code delay, Doppler frequency and carrier phase errors, and $\eta_k$ is a noise term.
From (26), it is possible to reconstruct the correlator outputs given the estimation errors generated by the tracking loops. Thus, the correlation process does not need to be simulated and only the estimation errors are determined using a Monte Carlo approach. Based on this principle, the simulation scheme shown in Figure 9 can be adopted for the fast simulation of digital tracking loops.
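As an illustration of this principle, the signal component of a correlator output can be reconstructed directly from the residual estimation errors, without simulating the correlation itself. The following Python sketch assumes a unit-amplitude BPSK-like signal with an ideal triangular code correlation; the function name and normalization are illustrative, not part of the toolbox:

```python
import numpy as np

def correlator_signal(amp, d_tau, d_freq, d_phase, t_int):
    """Signal component of a correlator output, reconstructed from the
    residual code delay error d_tau (chips), Doppler error d_freq (Hz)
    and carrier phase error d_phase (rad) over the coherent
    integration time t_int (s)."""
    corr = max(0.0, 1.0 - abs(d_tau))   # ideal triangular code correlation
    att = np.sinc(d_freq * t_int)       # Doppler attenuation sin(pi x)/(pi x)
    return amp * corr * att * np.exp(1j * d_phase)
```

With zero residual errors the full amplitude is recovered, while a delay error of one chip or more drives the signal component to zero.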
The functional elements in Figure 9 have been grouped into the simulation block, the analytical model and the estimation part. The analytical model is used to convert the signal parameter errors (code delay, Doppler frequency and carrier phase errors) into the signal components of the correlator outputs. At the same time, the analytical model is used to determine the variance and correlation of the different noise terms to be simulated. Since the noise components are simulated using parameters determined by the analytical model, the “noise generation” block is shared between the analytical and simulation parts. The remaining parts of the loop, including the non-linear discriminator, loop filter and NCO, are fully simulated. Finally, the estimation block determines the residual signal parameter errors by comparing the true values (determined by the simulation scenario) with the estimates produced by the NCO.
By modifying these functional blocks, it is possible to simulate different tracking loops. In the simulation scheme implemented in the SATLSim toolbox, a new estimate of the tracking parameters (Doppler frequency, carrier phase, and code and subcarrier delays) is generated by an NCO model. This model accounts for the integration process performed by a real NCO, and different update equations can be used. A commonly used model is the rate-only feedback NCO, characterized by the following update equation:

$$\hat{\theta}_{k+1} = \hat{\theta}_k + T_u\,\hat{\dot{\theta}}_k \qquad (27)$$

with $T_u$ the NCO update interval,
where $\hat{\theta}_k$ denotes the $k$-th estimate of the tracking parameter under consideration and $\hat{\dot{\theta}}_k$ is its estimated rate of change. The rate is generally provided by the loop filter. It is noted that, when several parameters are considered, equation (27) is used to update each term independently. The new parameter estimate is compared to the true value and a new estimation error is computed. This error is then used for the generation of the signal component at the output of the ID block using equation (26). The noise term, generated separately, is then added to the signal component. When several correlators are required, the correlation among the different noise components has to be accounted for when generating the noise.
The operations required to convert the correlator outputs into a new estimate of the parameter rate are fully simulated and correspond to the functional blocks found in a real tracking loop. For instance, the correlator outputs are used to update the non-linear discriminator and the loop filter. It is noted that a similar simulation scheme can be used for analyzing Kalman filter-based tracking: in this case, the correlator outputs are fed to a Kalman filter that produces new estimates of the tracking parameters.
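Putting the pieces together, a first-order DLL can be simulated semi-analytically in a few lines: the Early and Late correlators are reconstructed from the current delay error through the analytical model (here an ideal triangular correlation), while the discriminator, loop filter and NCO are simulated explicitly. All parameter values below are illustrative, not the toolbox defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

t_int = 1e-3                            # coherent integration time [s]
cn0 = 10 ** (45 / 10)                   # C/N0 of 45 dB-Hz in linear units
sigma = np.sqrt(1 / (2 * cn0 * t_int))  # correlator noise std (unit amplitude)
d_el = 0.5                              # Early-minus-Late spacing [chips]
gain = 0.05                             # first-order loop gain

true_tau = 0.2                          # true code delay [chips]
tau_hat = 0.0                           # initial NCO delay estimate
errs = []
for _ in range(2000):
    err = true_tau - tau_hat
    # Early/Late correlators reconstructed from the analytical model + noise
    early = max(0.0, 1.0 - abs(err + d_el / 2)) + sigma * rng.standard_normal()
    late = max(0.0, 1.0 - abs(err - d_el / 2)) + sigma * rng.standard_normal()
    disc = (late - early) / 2           # E-L discriminator, ~ err when small
    rate = gain * disc / t_int          # loop filter: simple proportional gain
    tau_hat += rate * t_int             # rate-only NCO update
    errs.append(err)
```

After the initial transient, the residual error fluctuates around zero, and its standard deviation is the quantity of interest (the tracking jitter).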
3.1. Code structure
The structure of the code developed in the SATLSim toolbox is provided in Figure 10. In this case, the code is used to estimate the tracking jitter of the loop as a function of different parameters, such as the Early-minus-Late spacing and the input C/N0. In particular, the non-linear discriminator may use several correlators to compute the cost function that the loop is trying to minimize. A DLL usually requires at least two correlators, named Early and Late correlators, computed for the delays

$$\tau_E = \hat{\tau} - \frac{d_{EL}}{2}, \qquad \tau_L = \hat{\tau} + \frac{d_{EL}}{2},$$

where $\hat{\tau}$ is the best code delay estimate and $d_{EL}$ is the Early-minus-Late spacing. Early and Late correlators are computed symmetrically with respect to the best delay estimate, and the non-linear discriminator computes a cost function proportional to the misalignment between these two correlators. Since the code correlation function is symmetric, the output of the discriminator is minimized when $\hat{\tau}$ corresponds to the delay of the input signal. The SLL works using similar principles. The performance of DLL and SLL depends on the Early-minus-Late spacing, which is a simulation parameter. The C/N0 is used to determine the correlator amplitude and the variance of the noise components.
The parameters required for initializing the simulation procedure are accessible through the function InitSettings. These parameters include the sampling frequency, the loop bandwidth and the coherent integration time, which are used to design the loop filters through the function FilterDesign. In the code provided, standard formulae from the literature are used. However, FilterDesign can be modified in order to adopt a different approach, such as the controlled-root formulation. During the initialization phase, the true input parameters are also generated. The simulation core consists of three nested loops: over the Early-minus-Late spacing, over the different C/N0 values and over the number of simulation runs. The loop on the Early-minus-Late spacing can be absent if, for example, only a PLL is considered. For each Early-minus-Late spacing and for a fixed C/N0, a noise vector containing the noise components of the correlator outputs is generated. The vector length is equal to the number of simulation runs, and all the noise components are generated at once for efficiency reasons.
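The one-shot noise generation can be sketched as follows; the function below is a hypothetical stand-in for GenerateNoiseVector, drawing all complex noise samples for a given configuration in a single vectorized call:

```python
import numpy as np

def generate_noise_vector(n_runs, sigma, rng=None):
    """Complex white Gaussian noise for all simulation runs at once:
    i.i.d. real and imaginary parts, each with standard deviation sigma."""
    if rng is None:
        rng = np.random.default_rng()
    return sigma * (rng.standard_normal(n_runs) + 1j * rng.standard_normal(n_runs))

noise = generate_noise_vector(100_000, 0.5, np.random.default_rng(1))
```

Generating the whole vector up front replaces per-iteration random draws inside the simulation loop, which is typically much faster in array-oriented languages.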
All intermediate results, such as the discriminator and loop filter outputs, are stored in auxiliary vectors and are used at the end of the loop on the simulation runs to evaluate quantities of interest such as the tracking jitter.
In the code provided, theoretical formulae for the computation of the tracking jitter are also implemented and used as a comparison term for the simulation results.
3.2. Standard PLL (PLL.m)
The simulation of a standard PLL requires the generation of the Prompt correlator alone (GenerateSignalCorrelation). The Prompt correlator is the output of the ID block computed with respect to the best delay estimate provided by the loop. For this reason, the noise generation (GenerateNoiseVector) simply consists of simulating a one-dimensional complex Gaussian white sequence with independent and identically distributed real and imaginary parts, with variance determined by the input C/N0 and the coherent integration time.
When simulating a standard PLL alone, perfect code synchronization is assumed and (26) simplifies to

$$P_k = A\,\mathrm{sinc}(\Delta f_k T_I)\,e^{j\Delta\varphi_k} + \eta_k,$$

where the residual Doppler error $\Delta f_k$ is obtained by comparing the true Doppler frequency against the loop filter output, and $\Delta\varphi_k$ is the phase error obtained as the difference between the true phase and the phase estimate produced by the NCO.
In SATLSim, the function UpdateDiscriminator implements a standard Costas discriminator. Different phase discriminators can be easily implemented by changing this function.
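A minimal version of such a discriminator (illustrative, mirroring what UpdateDiscriminator could compute) is the arctangent Costas discriminator, whose output is insensitive to 180-degree phase flips caused by data bits:

```python
import numpy as np

def costas_discriminator(prompt):
    """Arctangent Costas discriminator: estimates the residual carrier
    phase error from the Prompt correlator as atan(Q/I), which is
    unchanged when a data bit flips the sign of both I and Q."""
    return np.arctan(prompt.imag / prompt.real)
```

For example, `costas_discriminator(np.exp(1j * 0.1))` and `costas_discriminator(-np.exp(1j * 0.1))` both return the phase error 0.1, illustrating the insensitivity to data-bit sign.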
3.3. Double estimator (DoubleEstimator.m)
In the DE case, i.e., when DLL and SLL are jointly used, the function GenerateNoiseVector, responsible for the generation of the correlator noise, produces a $5 \times N$ matrix, where $N$ is the number of simulation runs. The five rows of this matrix correspond to the five correlators required by the DE, which are characterized by a correlation matrix that is a function of the code and subcarrier Early-minus-Late spacings.
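Whatever the exact correlation matrix, correlated Gaussian correlator noise can be obtained by coloring white samples with a Cholesky factor. The sketch below uses an illustrative 2 x 2 covariance, not the actual 5 x 5 DE matrix:

```python
import numpy as np

def correlated_noise(cov, n_runs, rng=None):
    """Zero-mean Gaussian noise with covariance cov: white samples are
    colored by the lower-triangular Cholesky factor of cov."""
    if rng is None:
        rng = np.random.default_rng()
    chol = np.linalg.cholesky(cov)
    return chol @ rng.standard_normal((cov.shape[0], n_runs))

cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])  # illustrative correlator covariance
samples = correlated_noise(cov, 200_000, np.random.default_rng(2))
```

The empirical covariance `np.cov(samples)` approaches `cov` as the number of runs grows, which provides a quick sanity check on the noise generation.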
The NCO update (UpdateNCO) is performed on both code and subcarrier loops, and the estimated errors are used to compute new correlator signal components (GenerateSignalCorrelation). Two non-linear discriminators (UpdateDiscriminator) and two loop filters (UpdateFilter) are run in parallel to determine the rates of change of the code and subcarrier delays.
The DE provides an example of how several tracking loops, operating in parallel, can be easily coupled in order to provide more realistic simulations accounting for the interaction of different tracking algorithms.
3.4. Sample results
In this section, sample results obtained using the SATLSim toolbox are shown for the DE case; results for the analysis of the PLL can be found in the literature. Specific focus is devoted to the analysis of the tracking jitter, which is one of the main metrics used for the analysis of digital tracking loops. The tracking jitter quantifies the amount of noise transferred by the tracking loop to the final parameter estimate: it is the standard deviation of the final parameter estimate normalized by the discriminator gain. The non-linear discriminator is usually a memoryless device characterized by an input/output function relating the parameter estimation error to the discriminator output; the discriminator gain is the slope of this function in the neighborhood of zero (hypothesis of small estimation errors).
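In code, the jitter computation reduces to a normalized standard deviation. The helper below is illustrative; the discriminator gain would come from the linearized discriminator model, and the estimates here are synthetic:

```python
import numpy as np

def tracking_jitter(estimates, disc_gain):
    """Tracking jitter: standard deviation of the parameter estimates
    normalized by the discriminator gain (slope of the discriminator
    input/output function at zero error)."""
    return np.std(estimates) / disc_gain

rng = np.random.default_rng(3)
est = 0.3 + 0.01 * rng.standard_normal(50_000)  # synthetic estimates
jitter = tracking_jitter(est, 2.0)
```

Note that the bias of the estimates (0.3 here) does not affect the jitter, which only measures the spread around the mean.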
Figure 11 reports three jitter curves: a) tracking jitter of the DLL alone; b) tracking jitter of the SLL alone; c) jitter of the combined delay estimate.
This is due to the fact that the DE jointly uses a DLL, for estimating the code delay, and an SLL, for determining the subcarrier delay. Subcarrier and code delays are then combined to obtain the final estimate of the travel time of the transmitted signal. Thus, three different jitters are evaluated for the different estimates produced by the system. The tracking jitter has been expressed in meters by multiplying the standard deviation of the delay estimates by the speed of light.
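A common way to form the combined estimate (used, with variations, in double-estimator implementations; the symbols and rounding rule below are illustrative) is to let the coarse but unambiguous DLL delay select the integer number of subcarrier periods, while the precise but periodic SLL delay provides the fine part:

```python
def combine_delays(tau_code, tau_sub, t_sub):
    """Combine the DLL code delay (unambiguous, noisy) with the SLL
    subcarrier delay (precise, ambiguous modulo t_sub): the code delay
    resolves the integer ambiguity, the subcarrier delay refines it."""
    n_cycles = round((tau_code - tau_sub) / t_sub)
    return tau_sub + n_cycles * t_sub
```

For instance, with tau_code = 10.3, tau_sub = 0.12 and t_sub = 1.0, the combined delay is 10.12: the integer part comes from the code estimate and the fractional precision from the subcarrier estimate, so the combined jitter is driven by the SLL as long as the code error stays within half a subcarrier period.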
Three different methodologies have been employed for determining the tracking jitter curves shown in Figure 11 a) and Figure 11 b). The theoretical curve corresponds to approximate formulae obtained by linearizing the input/output function of the non-linear discriminator; these formulae are valid only for small tracking errors or, equivalently, for high C/N0. The jitter obtained from the actual error has been computed by evaluating the variance of the code phase error. It is noted that in a real tracking loop the code phase error is not directly accessible, since the true code phase is unknown. Thus, the tracking error can be evaluated by measuring the error at the loop filter output, which is an observable point, and propagating its variance through the loop. The tracking jitter obtained by propagating the variance at this measurable point corresponds to the curve denoted by “Estimated from the loop filter output”. The relationship between the variances of the discriminator output and the true tracking error can easily be evaluated when the loop is working in its linear region. This measured curve was introduced to further validate the theoretical model and test the correctness of the simulation methodology; it is not available for the combined delay estimate.
|DLL Early-minus-Late spacing|0.2 chips|
|SLL Early-minus-Late spacing|0.2 chips|
|DLL loop order| |
|SLL loop order| |
|DLL loop bandwidth| Hz|
|SLL loop bandwidth| Hz|
|Modulation type|BOC(1, 1)|
From the results shown in Figure 11, it is observed that the developed semi-analytic technique effectively captures the behavior of the system. For high C/N0 values, a good agreement between theoretical and simulation results is found. However, below a certain C/N0 (in dB-Hz), theoretical and simulation results start diverging; this is clearer in parts a) and c) of the figure. For such low C/N0 values, the loop is no longer working in the linear region of the discriminator input/output function, and the theoretical model is thus unable to capture the behavior of a loop that is losing lock. The semi-analytic technique implemented in the SATLSim MT toolbox is able to effectively describe this non-linear behavior while requiring only limited computational resources.
In this chapter, the development of fast semi-analytic techniques using MT has been analyzed. In the semi-analytic framework, the knowledge of the system under analysis is exploited to reduce the computational load and complexity that full Monte Carlo simulations would require. In this way, the strengths of both analytical and Monte Carlo methods are effectively combined.
Two examples of semi-analytic techniques have been thoroughly analyzed and used to illustrate the two main configurations developed within the semi-analytic framework. The first example illustrates the sequential configuration where simulations and the analytical model are used sequentially. This type of configuration provides increased precision with respect to full Monte Carlo simulations, particularly when the quantities to be estimated assume small values. In addition to this, a significant reduction in terms of computational complexity is achieved. In the example considered, full Monte Carlo simulations require the implementation of a full transmission/reception chain including the interaction between two different systems, DVB-T and GNSS. This requirement led to a significant computational and development complexity. The considered semi-analytic approach is an effective solution for alleviating those requirements.
The second example considered the closed-loop approach and specific focus was devoted to the SATLSim MT toolbox. This toolbox has been developed for the analysis of digital tracking loops and fully exploits the flexibility of the MT programming language. The code has been organized in functions that can be easily replaced by different MT modules. In this way, different loop components such as discriminators, loop filters and NCO models can be integrated in the SATLSim toolbox. The efficiency of semi-analytic techniques and the reduced development time enabled by the MT language are an effective tool for the analysis of complex communications systems.