Genetic Algorithms in Direction Finding

Introduction
Passive receiving systems are used to intercept emissions of interest, both communication and Radar, and to measure their characteristic parameters in order to classify and possibly identify them. Direction of Arrival (DOA) is one of the most important parameters to be measured, as it can yield a localization fix by means of triangulation (if more receivers are dislocated on the area), or however it can help designate the target for further operations (Neri, 2006).
There are several ways to estimate the DOA: by measuring the signal amplitude received by a rotating directional antenna, or the amplitude difference, phase difference or time difference of arrival between two or more antennas (Wiley, 1985). A more general approach is based on Array Processing techniques, as described in (Friedlander, 2009): the complex signals received by the elements of an array are considered, thus taking into account both amplitude and phase (or time), and an estimation process is performed.
Rotating antenna DOA can give good accuracy, on the order of a fraction of the antenna beamwidth, but it only works for a continuous emitter or a high-rate pulse emitter, both to estimate the DOA through the analysis of the amplitude shape modulated by the beam pattern on a pulse train, and to have a reasonable probability of intercept.
Amplitude monopulse DOA is usually simple, though not very well performing, due to amplitude measurement errors (e.g. antenna ripple, multipath, unbalances).
Time difference of arrival DOA can be quite simple and accurate, but it needs a large baseline between the two antennas to achieve good performance.

Phase goniometry
Here we focus on phase goniometry, which is often used in Communication-band intercept receivers because of the difficulty of building directional antennas at these frequencies; in any case, generalization to other Array Processing techniques is straightforward.
The basic principle of phase goniometry is the simple interferometer depicted in Fig. 1; the phase difference between the two antennas is related to the angle φ. Here φ is measured counter-clockwise from the x-axis as in trigonometry, while DOA is defined, as usual in operative systems, as the clockwise angle from a given reference, e.g. North or Platform Heading, giving absolute and relative DOA respectively. The two antennas are separated by the baseline L, so the path difference between a distant emitter and the two antennas is given by

ΔL = L cos φ (1)

and the phase difference Δψ is obtained by multiplying the path difference by the propagation constant k = 2π/λ, where λ is the signal wavelength:

Δψ = kL cos φ (2)

If L < λ/2 the phase difference is never ambiguous for any incident angle; otherwise more baselines are needed to resolve the ambiguity. A short baseline provides an unambiguous angular estimate, and a long baseline gives a more accurate measurement around the former; the ratio between the baselines is limited by the phase measurement error.
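As a numerical illustration of this relation (the geometry and names below are my assumptions, following the conventions above: baseline L along the x-axis, φ measured from that axis):

```python
import numpy as np

# Sketch of the two-element interferometer relation (assumed geometry:
# baseline L along the x-axis, angle phi measured from the x-axis).
def phase_difference(L, wavelength, phi_rad):
    """Ideal phase difference between the two antennas, in radians."""
    k = 2 * np.pi / wavelength          # propagation constant k = 2*pi/lambda
    return k * L * np.cos(phi_rad)      # delta_psi = k * L * cos(phi)

# With L < lambda/2 the phase stays within (-pi, pi): unambiguous.
wavelength = 1.0
short = phase_difference(0.4 * wavelength, wavelength, 0.0)   # |short| < pi
long_ = phase_difference(2.0 * wavelength, wavelength, 0.0)   # wraps: ambiguous
print(abs(short) < np.pi, abs(long_) > np.pi)
```

The long baseline exceeds ±π and therefore wraps, which is exactly the ambiguity a second, shorter baseline is used to resolve.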
A general solution is represented by the phased array, as described in Figure 2.
If a regular disposition is used, i.e. all element distances are equal to L, the phase of the n-th array element becomes

ψ_n = (n − 1) kL cos φ (3)

In this case the ambiguity is related to the element spacing L, while accuracy is related to the total array length (i.e. the number of elements). Linear arrays give their best performance at the broadside direction, while at endfire the beam is wider and the DOA accuracy is lower.
To have good coverage of the whole azimuth, a circular array is usually used; such arrays are described below, along with the principles of several DOA estimation algorithms.

Uniform circular arrays
A uniform circular array is a smart solution to obtain good direction finding performance at every angle, whereas linear arrays suffer from beam broadening when scanning; moreover, less coupling between the elements is expected with this kind of array (Tan et al., 2002). It is composed of several omnidirectional elements (e.g. dipoles) equally spaced on a circle (cf. figure 3). The phase of the n-th element for a plane wave arriving from azimuth φ and elevation θ is

ψ_n = kr cos(φ − α_n) cos θ (4)

where r is the array radius and α_n is the n-th element azimuth, α_n = 2π(n−1)/N.
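The ideal phase model above can be sketched numerically; the function name and parameter values below are illustrative, not from the text:

```python
import numpy as np

# Hedged sketch of the ideal uniform-circular-array phase model: element n
# sits at azimuth alpha_n = 2*pi*(n-1)/N on a circle of radius r, and its
# phase for a plane wave from azimuth phi, elevation theta is
#   psi_n = k * r * cos(phi - alpha_n) * cos(theta)
def uca_phases(N, r, wavelength, phi, theta=0.0):
    k = 2 * np.pi / wavelength
    alpha = 2 * np.pi * np.arange(N) / N        # element azimuths
    return k * r * np.cos(phi - alpha) * np.cos(theta)

# Five elements, 1.35 m radius as for the VHF array described later.
psi = uca_phases(N=5, r=1.35, wavelength=3.0, phi=np.radians(30.0))
print(psi.shape)  # one phase per element
```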
When the number of elements is odd, and at least five, ideally no ambiguity arises even when the wavelength is smaller than the circle radius. In practice, noise and non-idealities limit the usable bandwidth, but these kinds of antennas usually have good performance (Lim et al., 2004; Tan et al., 2002; Miller et al., 1985).

DOA estimation algorithms
The phase differences between the array elements are related to the azimuth and elevation. The estimation of these angles can be done in several ways, which can be grouped into three conceptual classes:
• algorithms that minimize a cost function, like the Beamforming method (Van Veen & Buckley, 1988), the Maximum Likelihood method (Satish & Kashyap, 1996), and many others, like Minimum Variance and the Capon variation;
• algorithms based on multiple signal separation, like MUSIC (Schmidt and Franks, 1986), ESPRIT (Roy and Kailath, 1989) and others;
• algorithms exploiting calibration information, like the correlative method and some variations of MUSIC.
A complete review of DOA estimation methods can be found in (Godara, 1997) and in its extensive reference list.
The Beamforming method takes its name from the ability to steer the main lobe of an array by feeding its antenna elements with a phase pattern such that their contributions line up in phase in the wanted direction. Conversely, as antennas are reciprocal, if the measured array vector is combined in phase with the theoretical array factor (5), a maximum will appear in correspondence with the true values of θ and φ. The way to combine the measured and theoretical array factors in phase is the product with the Hermitian conjugate, so the angular estimate may be found by maximizing the function

P(θ, φ) = |a^H(θ, φ) x|² (6)

where a(θ, φ) is the theoretical array factor and x the measured array vector.

The Maximum Likelihood approach considers the probability density function of the observation vector given the unknown parameters; its peak gives their best estimate:

(θ̂, φ̂) = arg max p(x | θ, φ) (7)

If the measurement joint PDF is the multivariate Gaussian

p(x | θ, φ) ∝ exp(−(x − m(θ, φ))^H R⁻¹ (x − m(θ, φ))) (8)

the Maximum Likelihood estimate can be obtained by minimizing the exponent

(x − m(θ, φ))^H R⁻¹ (x − m(θ, φ)) (9)

where m(θ, φ) is the model vector and the measurement covariance matrix has been defined as R = E[(x − m)(x − m)^H].

The MUSIC method and its variations first estimate the noise subspace through an eigenvalue analysis of the measured array correlation matrix; then, in the orthogonal subspace, M peaks can be searched of the function

P(θ, φ) = 1 / (a^H(θ, φ) E_N E_N^H a(θ, φ)) (10)

where E_N is composed of the noise column eigenvectors (Schmidt, 1986).
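As an illustration of the beamforming estimator, the sketch below scans candidate azimuths of an ideal, noiseless five-element circular array at zero elevation and picks the peak of the Hermitian product; all names and values are assumptions for the example:

```python
import numpy as np

# Illustrative beamforming-style azimuth scan on an ideal 5-element UCA.
N, r, wavelength = 5, 1.35, 3.0
k = 2 * np.pi / wavelength
alpha = 2 * np.pi * np.arange(N) / N

def steering(phi):
    """Theoretical array factor a(phi) at zero elevation."""
    return np.exp(1j * k * r * np.cos(phi - alpha))

true_phi = np.radians(125.0)
x = steering(true_phi)                       # "measured" array snapshot

# Scan a 1-degree azimuth grid and maximize |a^H(phi) x|^2.
grid = np.radians(np.arange(0.0, 360.0, 1.0))
power = np.array([np.abs(np.vdot(steering(p), x)) ** 2 for p in grid])
est = np.degrees(grid[np.argmax(power)])
print(est)   # peak at (or near) the true 125 degrees
```

With noise and a finer grid the same scan becomes the cost function that the optimization methods discussed later have to search efficiently.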
The MUSIC method can also be used in conjunction with Mutual Coupling Coefficient estimation. Mutual coupling affects the phase patterns of the array elements, causing DOA errors; the intrinsic symmetry of a uniform circular array makes it easy to set up a model of the non-ideal phase pattern due to mutual coupling, which acts as a circularly symmetric Toeplitz matrix whose coefficients can be estimated together with the DOA (Qi et al., 2005; Weiss & Friedlander, 1992).
The most straightforward way to deal with antenna non-idealities is to set up a calibration and to compare measurements with the calibrated data to estimate an accurate DOA (Smith et al., 2005). Of course this method has the drawbacks of the expensive calibration phase, which has to be performed in a proper test range, and of the memory required to store the calibrated data. The peak of the correlation function gives the estimated φ and θ:

C(θ, φ) = Σ_n cos(Δψ_n^meas − Δψ_n^cal(θ, φ)) (11)

where Δψ are the phase differences and the superscripts indicate the measured and the calibrated data.
The described methods for DOA estimation can be all considered as optimization problems, as there is always a function to be minimized or maximized; Genetic Algorithms can be applied easily to them.

Genetic Algorithms
The great adaptability of living beings gave the first hints that this characteristic could be exploited by computing machines. The pioneer of this approach was John Holland, around the 1970s: though previous works had tried to simulate evolution, he was the first to use evolution as an optimization tool, and he coined the term Genetic Algorithm.
Living beings evolve through Natural Selection: only those who are strong enough to survive till the reproductive age and that win the struggle to mate can propagate their genetic heritage. In other words those who have a high Fitness can proliferate and their offspring have a high probability of inheriting good characters after the partial mixing (Crossover) of the sexed reproduction.
A random Mutation can occur, which causes harmful effects in our species but has the important task of avoiding character stagnation in the population, that is, the complete equality of one or more genes over the whole population: in that case the Crossover cannot change that gene, and the only chance to recover variability is a random mutation.
These features have been implemented in the so-called Genetic Algorithm. The genes represent the points of the search space, that is, the domain of the Fitness, the function to be maximized. The gene length is related to the resolution needed for the solution; however, it is easy to deal with standard-sized words, like bytes or 16- or 32-bit words.
A starting population is built with random gene values and evolves through several generations in which Selection, Crossover and Mutation are repeated until a satisfactory solution has been found or a maximum number of iterations has been reached. This is the recipe of a classic GA, described in figure 4; in the following section some variations are described, which in some cases can improve the speed of convergence to a good solution.
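The classic recipe can be sketched as follows; the toy fitness (counting 1-bits) and the parameter values are illustrative choices, not from the text:

```python
import random

# Minimal classic-GA sketch: roulette-wheel selection, one-point crossover,
# bitwise mutation, repeated for a fixed number of generations.
BITS, POP, GENS, PMUT = 16, 40, 50, 0.1

def fitness(g):                      # toy fitness: maximize number of 1-bits
    return bin(g).count("1") + 1e-9  # small offset keeps all weights positive

def select(pop):                     # roulette wheel: probability ~ fitness
    return random.choices(pop, weights=[fitness(g) for g in pop], k=2)

def crossover(a, b):                 # one-point crossover on the bit strings
    cp = random.randint(1, BITS - 1)
    mask = (1 << cp) - 1
    return (a & ~mask) | (b & mask)

def mutate(g):                       # flip one random bit with prob PMUT
    if random.random() < PMUT:
        g ^= 1 << random.randrange(BITS)
    return g

random.seed(0)                       # fixed seed for reproducibility
pop = [random.getrandbits(BITS) for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(*select(pop))) for _ in range(POP)]
best = max(pop, key=fitness)
print(bin(best).count("1"))          # close to BITS after evolution
```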
Some effort has been devoted to providing a satisfactory theoretical explanation of Genetic Algorithms, the Schemata Theorem (Holland, 1975) being one of the most celebrated, though it has not earned complete acceptance; Genetic Algorithms maintain the status of a mainly empirical optimization technique for a large variety of applications (Davis, 1991).
They are surely useful when the problem under study is not easily treatable with classical techniques: e.g. an analytical model may not exist or may be too complex, or the parameters may be so many that a mathematical approach would be too time consuming, while a handful of genes can evolve for a few tens of generations and give a satisfactory result (Whitley, 1994).

Modified algorithms
In more than thirty years GAs have been used in every field of science and technology, and each researcher, trying to get the most out of his algorithm, has contributed to enriching Nature's own recipe with various modifications: there are thus many variants of Selection, Crossover, Mutation and even Gene Representation. A clear view of these sophistications can be found in (Haupt, 2004).
The Gene Representation, for example, is often carried out in the Real Numbers domain instead of the classical string of bits, giving the so-called continuous GA. Gray encoding has been proposed for integer genes in order to have smooth offspring variations where the classic encoding is unstable: e.g. when parents are around the value 2^(N−1), N being the length in bits, a small change in the gene value completely shuffles its binary representation, while with Gray coding a unit change in the value is always represented by a variation of one bit only. However, this problem can also be circumvented by the uniform crossover (see below).
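A minimal sketch of the binary-reflected Gray code and its one-bit-change property around 2^(N−1):

```python
# Gray-code round trip (standard binary-reflected Gray code): adjacent
# integers always differ in exactly one Gray bit, which is the smoothness
# property mentioned above.
def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:          # fold the shifted Gray bits back into the binary value
        n ^= g
        g >>= 1
    return n

N = 8
mid = 2 ** (N - 1)                      # 128: binary 01111111 -> 10000000
flip = bin(to_gray(mid - 1) ^ to_gray(mid)).count("1")
print(flip)                             # 1: only one Gray bit changes
assert all(from_gray(to_gray(i)) == i for i in range(2 ** N))
```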
Classical selection is random with probability proportional to fitness (Roulette Wheel Selection), while sometimes the best fitting individuals are selected first: this is Elitist Selection.
Many variants also exist for the Crossover, like one cut point, two points and uniform; they are depicted in figure 5. The uniform crossover has the advantage of a large exploring power, i.e. the number of different children that are possible from a given couple of parents: one-point crossover can generate 2(N−1) different children, where N is the gene length in bits, while the uniform crossover can generate 2^N different children, increasing the exploring power dramatically. A crossover with even greater exploration capability has been investigated in (Coli et al., 1996), where the concepts of real-valued GA are used for integer genes. It is based on the interpretation of the classic single-point crossover as an arithmetical operation between integer numbers: the cut point cp divides a gene x₁ into two substrings that are the quotient and the remainder of the division of x₁ by 2^(N−cp), and classic crossover is performed by choosing a random index cp between 1 and N. The generalized crossover is obtained by allowing the divisor to span a larger set of values: choose b ∈ (1, 2] and M = ⌈N / log₂ b⌉, pick a random index k between 1 and M, and let c = b^k; the crossover is operated by swapping the remainders of the divisions by c and then returning to integer numbers by rounding. When b is less than 2 and approaches 1, M becomes greater and greater, i.e. the search space of the crossover is enlarged. Of course there is a limit given by the rounding effect, for which the optimum b seems to be around 1.05 (Coli et al., 1996).
The result of this generalized crossover is a non-random mixing of the two parents that is no longer correlated with the bit representation of the genes; the parents' legacy is smeared over the whole offspring length. Other topics in tuning a GA include population size, growth and control: a variable population size implies a more difficult memory management, and control of twin genes can be useful in some cases but it overloads the algorithm with another function to be performed.
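One possible reading of the generalized crossover in code; the formula for M and the handling of rounding are my interpretation of the passage above, so treat this as a sketch rather than the authors' exact operator:

```python
import math
import random

# Hedged sketch of the generalized crossover of Coli et al. (1996) as read
# from the text: divide each parent by c = b**k for a random k, swap the
# remainders, and round back to integers. b = 1.05 is the value the text
# reports as near-optimal.
def generalized_crossover(x1, x2, nbits=16, b=1.05, rng=random):
    M = math.ceil(nbits / math.log2(b))       # c = b**k spans up to ~2**nbits
    k = rng.randint(1, M)
    c = b ** k
    q1, r1 = divmod(x1, c)                    # real-valued quotient/remainder
    q2, r2 = divmod(x2, c)
    child1 = round(q1 * c + r2)               # swap remainders, then round
    child2 = round(q2 * c + r1)
    return child1, child2

random.seed(1)
c1, c2 = generalized_crossover(40000, 123)
print(c1 >= 0 and c2 >= 0)
```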
Contaminations with other types of algorithms can also be foreseen, like hill climbing or random search, making a hybrid GA (Haupt, 2004). Hill climbing can speed up the final optimization, while random search prevents trapping in local minima.
There is no complete agreement on the utility of these variations, as different applications sometimes require different setups, but this can be seen as another interesting feature of the GA. In the following, the application to DOA estimation is described, along with the optimization parameters that have been explored.

GA application to goniometry
The Genetic Algorithm approach has been applied to the problem of Direction of Arrival estimation through phase interferometry with a Uniform Circular Array. GAs have been used to minimize the Mean Square Error, and their performance has been compared to a standard minimization algorithm, both for DOA accuracy and for computational load. A benchmark is set by the Correlative method, which should guarantee the best performance, being supported by the calibration data.
The interferometer is a five-element array operating in the VHF band, from 30 to 300 MHz (see figure 6, where the additional higher-band arrays are also shown). The VHF array is the largest, with a radius of 1.35 m.

Fig. 6. Five element Uniform Circular Array interferometer
The measured phase patterns, at intervals of 4 degrees in azimuth, 10 degrees in elevation and 5 MHz in frequency, are stored for the correlation algorithm and are used, with additive noise, to generate the phase measurements. The theoretical and measured phase differences between adjacent array elements are reported in figure 7, along with the estimated angle, at frequency f = 200 MHz.

Genetic Algorithm setup
A Genetic Algorithm has been implemented and optimized over several parameters using a simple one-dimensional DOA estimation, with the elevation fixed at zero degrees.

Several runs have been executed with different Population Sizes and Maximum numbers of Generations in order to find a reasonable setup of the GA; the results in terms of DOA accuracy and ambiguity fraction are plotted in figure 8. The Mutation probability was set to 0.1 and the classic 2-point Crossover was used.
DOA accuracy is the standard deviation of the DOA error over the whole azimuth and frequency band, while the ambiguity fraction is the number of points with an error greater than 90° divided by the total number of points. Figure 9 shows the behaviour of the algorithm over the generations. On the left the best fitness (in terms of MSE) is shown; it can be seen that convergence is very fast, as already mentioned. In the middle graph the fraction of clones of the first gene is reported for each generation: after a few iterations the population becomes quite biased, with about 50% of the population being a mere copy of the best gene. On the right the normalized standard deviation of the genes is plotted, which has a complementary trend: after a few iterations it becomes very low, meaning that the majority of the genes are very near to the best individual.

Fig. 8. DOA accuracy vs Population Size
A set of trials has been executed varying the mutation probability from 0 to 0.9. The results, shown in figure 10, are quite impressive: a great deal of randomness is necessary for the GA to work properly.
To improve convergence, a hybrid random search has been implemented in the GA by introducing a renewal of the population: the worst individuals are overwritten with new random genes, and the mutation probability has been set to 0.1. The results are very encouraging; with the number of generations limited to 20, figure 11 reports the performance for a 20% population renewal at each generation and population sizes ranging from 20 to 60, showing better results than before with less computing power.
A similar improvement has been achieved by changing the Crossover operator to obtain a more efficient search space exploration. In table 1 the comparison between the 2-point, uniform and generalized crossovers is reported, with and without the population renewal; it seems that the random renewal prevails over the crossover type.
With these hints on the GA parameters, an operative simulation has been performed over a full azimuth and elevation estimation in the presence of noise, in comparison with a standard minimization algorithm.

Results of GA in mean square error minimization
The measured array phase pattern has been used to generate the phase differences, to which Gaussian noise has been added. Given the phase difference measurement vector, the Square Error Function (13) can be evaluated for every azimuth and elevation; its minimum indicates the best estimate of the direction of arrival.
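A toy version of this minimization, assuming the ideal phase model in place of the measured patterns (names, noise level and grid step are illustrative):

```python
import numpy as np

# Sketch of a square-error DOA search over azimuth at zero elevation,
# using the ideal 5-element UCA phase model as a stand-in for the
# measured patterns.
N, r, wavelength = 5, 1.35, 3.0
k = 2 * np.pi / wavelength
alpha = 2 * np.pi * np.arange(N) / N

def model(phi):
    """Adjacent-element phase differences for azimuth phi."""
    psi = k * r * np.cos(phi - alpha)
    return np.diff(np.append(psi, psi[0]))

def square_error(measured, phi):
    d = model(phi) - measured
    d = (d + np.pi) % (2 * np.pi) - np.pi      # wrap to (-pi, pi]
    return np.sum(d ** 2)

rng = np.random.default_rng(0)
measured = model(np.radians(125.0)) + rng.normal(0, 0.05, N)  # noisy data

grid = np.radians(np.arange(0.0, 360.0, 0.5))
errs = [square_error(measured, p) for p in grid]
est = np.degrees(grid[np.argmin(errs)])
print(est)  # near 125 degrees
```

The exhaustive grid scan above is exactly the cost-function evaluation that both the GA and the standard minimizer try to avoid repeating for every candidate point.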
The standard minimization method is the Nelder-Mead algorithm implemented in Matlab. To avoid ambiguity and trapping in local minima, several starting points have to be selected; the step of this sampling must be large to limit computational resources, but it must be sufficiently small to sample the true minimum, because the ripple of the error function increases with frequency. Genetic Algorithms overcome this problem through their global search.
Comparison of performance versus computational complexity is reported in figure 12. The computational load has been evaluated in terms of the number of error function evaluations (fitness evaluations in the case of the GA); for the Nelder-Mead algorithm every starting point gives rise to a process in which several points are evaluated until convergence to a local minimum is reached, and the number of evaluations has been recorded for every tentative starting point. For the GA this is simply the product of the population size by the number of generations. Some accessory functions are present in the GA, like the Crossover, but these have an almost negligible computational complexity with respect to the error function calculation.
The superiority of the Genetic Algorithm approach with respect to the Nelder-Mead minimization is evident: the GA converges with far fewer operations to about the same performance. A Signal to Noise Ratio of 20 dB was selected; then other simulations were performed at different SNRs at the same computational load, and the results are plotted in figure 13. With the computational complexity fixed at a middle value, the GA has better performance, especially as concerns elevation accuracy and ambiguity. However, both algorithms are quite good, considering that the estimation does not take into account the pattern non-idealities; this could mean that the antenna has a good pattern which resembles an ideal one. To confirm this, the correlative algorithm has been used as a benchmark.

Correlative algorithm
As mentioned before, calibration is a straightforward method to account for phase pattern distortions due to mutual coupling between the elements and to the effect of the mast and the installation. The correlative algorithm uses the stored calibrated patterns, building up a correlation with the measured phase vector; the peak of the correlation function gives the DOA estimate. An example is reported in figure 14, from (Dinoi et al., 2008).
Here the phase vector is measured from a direction of 125° azimuth and about 45° elevation. The correlation spans −5 to +5 because the sum of the 5 channels has not been normalized.
From the figure it is clear that the elevation accuracy is much worse than the azimuth accuracy, and this phenomenon is amplified around the horizontal plane, where most of the measurements are taken: the calibrated patterns have been measured at elevation steps of 10° from −30° to 30° and then extrapolated for higher elevations.
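A minimal sketch of the correlative search, using the ideal model as a stand-in for the calibration table (all values illustrative); note the unnormalized correlation spans −5 to +5 for the five-channel sum:

```python
import numpy as np

# Toy sketch of the correlative method: correlate the measured phase-
# difference vector against a table of "calibrated" vectors and take the
# peak. Here the calibration table is generated from the ideal model,
# since the real measured patterns are not available.
N, r, wavelength = 5, 1.35, 3.0
k = 2 * np.pi / wavelength
alpha = 2 * np.pi * np.arange(N) / N

def phase_diffs(phi):
    psi = k * r * np.cos(phi - alpha)
    return np.diff(np.append(psi, psi[0]))     # adjacent-element differences

grid = np.radians(np.arange(0.0, 360.0, 1.0))
table = np.array([phase_diffs(p) for p in grid])   # stands in for calibration

measured = phase_diffs(np.radians(125.0))
corr = np.sum(np.cos(table - measured), axis=1)    # in [-5, +5], unnormalized
est = np.degrees(grid[np.argmax(corr)])
print(est)
```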

Fig. 14. Example of Correlation function
The correlative method is not very robust at low SNR: it is more subject to gross errors, or ambiguities, than the MSE-based algorithms; that is, noise can raise a secondary maximum of the correlation function to a higher value than the true maximum.
In figure 15 the minimum SNR required to avoid ambiguities is plotted (red line), together with the minimum SNR for 1° accuracy (green line) and 2° accuracy (blue line). This plot has been obtained by simulation with ideal patterns.

Fig. 15. Minimum SNR to avoid ambiguity versus L/λ ratio (i.e. ∼ frequency)

With the real patterns this is even more evident: in figure 16 the performance of correlative goniometry with the measured patterns is reported versus SNR. At high SNR this method yields excellent results, but it fails at low SNR, where the MSE-based methods still work.

Conclusion
Genetic Algorithms have been applied to Direction of Arrival estimation through a Uniform Circular Array interferometer. After a brief description of the DOA estimation techniques and an overview of Genetic Algorithms, a parameter tuning for optimization has been performed on a GA; some algorithm variations have been introduced and described. The Genetic Algorithms have been compared to a standard minimization tool, the Nelder-Mead method. The Correlative method, which makes use of the calibrated phase patterns and thus guarantees the best achievable performance at high SNR, has been used as a benchmark.
Genetic Algorithms reach the same performance as the Nelder-Mead optimization technique, but with less computational effort. Both techniques reach good performance compared to the correlative method.
The Genetic Algorithms showed a more robust behaviour when only low computing power is available, confirming their ability as general-purpose optimization tools.

Acknowledgment
I would like to thank my supervisors at Elettronica SpA, Daniela Pistoia, head of Research and Advanced System Design, and Graziano Lubello, head of Communication Electronic Warfare Advanced Systems, who allowed me to investigate this interesting field of research. Many thanks also to my colleague Libero Dinoi, who supported me on the subject of Correlative Goniometry. This work has surely grown out of the prolific discussions I had with Michele Russo.