Table 1. Effects of correlation suppression on datasets acquired from eight subjects, showing the percentage increase in amplitude and the change in localization for the left and right auditory (M100) sources when correlations are corrected for. Also shown is the relative strength of mid-sagittal spurious sources when no correlation correction is used, calculated as a percentage of the strength of the weaker of the two auditory sources. The left auditory source was not observed in one subject (row 7).
The objective of biomagnetic neuroimaging is to characterize the spatiotemporal distribution of electrical activity within the human brain by measuring the magnetic field outside the head. Magnetoencephalography (MEG) provides such a tool, where measurements are performed non-invasively at numerous locations surrounding the head. The high sensitivity of the Superconducting Quantum Interference Devices (SQUIDs) utilized by current MEG systems, combined with advanced hardware and software noise cancellation techniques, allows the detection of the minute magnetic fields generated by electrical neural activity with good accuracy. The ability to directly observe the neuronal fields gives MEG millisecond time resolution, orders of magnitude higher than the time resolution of fMRI and PET. While the electrical signals measured by electroencephalography (EEG) are strongly influenced by inhomogeneities in the head, the magnetic fields measured by MEG are produced mainly by current flow in the relatively homogeneous intracranial space, permitting more accurate spatial localization (Hamalainen et al., 1993).
Although electromagnetic fields are fully described by Maxwell’s equations, the problem of obtaining the spatiotemporal distributions of the neural current generators from MEG data (referred to as the inverse problem) poses a challenge since it is inherently ill-defined (Hamalainen et al., 1993). Since the neural current generators lie within the human head, a conducting medium, it is not possible to develop a mathematical model that provides a unique solution. Instead, several methods with different assumptions have been developed, each of which has its own advantages and limitations (Pascarella et al., 2010, Sekihara and Nagarajan, 2008, Wipf and Nagarajan, 2009).
Generally, inverse models in MEG can be separated into two classes: parametric models and imaging models. Parametric models are global solutions that attempt to account for the measured fields in their entirety in terms of a small number of sources. The most popular model in this class is the equivalent current dipole model (ECD), where non-linear least-squares fits are used to estimate the parameters of the current dipoles, requiring the number of sources to be known a priori.
The most popular imaging models are local linear estimators (Greenblatt et al., 2005) that estimate the source activity at points of interest, treating each point as being independent of any other location, as opposed to finding a global solution. This class of solutions attempts to minimize contributions from all other source locations to estimates of source activity at the point of interest, and hence is often referred to as a spatial filter. Spatial filtering approaches can also be used to produce volumetric images of brain activity by choosing a region of interest divided uniformly into voxels often spanning the entire brain. A source is assumed at each voxel and a spatial filter is used to estimate its strength. Two classes of spatial filters are typically used: adaptive and non-adaptive. Non-adaptive spatial filters rely on the forward solution in their formulation of the inverse problem in a manner completely independent of the measurement, while adaptive spatial filters rely on both the forward solution and the field measurements. The popular minimum-norm solution (Hamalainen et al., 1993) as well as the more recent sLORETA (Pascual-Marqui, 2002) can be formulated as a non-adaptive spatial filter, and are therefore examples of this class (Sekihara, 2008).
In this chapter we will focus on one localization modality, the adaptive spatial filter or beamformer. Beamformers come in various forms, each with its own set of advantages and disadvantages in terms of location bias and resolution. The most widely used, the minimum-variance (MV) beamformer, was originally developed in the field of sonar and radar signal processing (Borgiotti and Kaplan, 1979) and later adapted to the bioelectromagnetic inverse problem (Robinson and Rose, 1992, Van Veen et al., 1997). The unit-gain constrained MV beamformer has been shown to be biased with artefacts near the center, while the lead-field constrained and the unit-noise-gain constrained MV beamformer have been shown to be unbiased even in the presence of random noise (Greenblatt et al., 2005, Sekihara et al., 2005). In addition, MV adaptive spatial filters generally attain much higher resolution than non-adaptive spatial filters, with the unit-noise-gain MV beamformer having the highest resolution. In this chapter we will present the formalism for these different kinds of beamformers in their scalar form, then develop the formalism for a vector beamformer (Sekihara et al., 2001, Quraan and Cheyne 2010) and an eigenstate-projected vector beamformer (Sekihara et al., 2001). We will discuss the advantage of the vector beamformer and present a comparison of the scalar and vector beamformers.
Despite their advantages in terms of spatial resolution and localization accuracy, beamformers are seriously degraded in the presence of sources with high temporal correlations. This has been demonstrated for neural sources that are known to be highly correlated, such as bilateral activation of the temporal lobes during steady-state auditory stimulation (Brookes et al., 2007, Dalal et al., 2006, Popescu et al., 2008, Quraan and Cheyne, 2008, Quraan and Cheyne, 2010) and primary visual sources (Quraan and Cheyne, 2008, Quraan and Cheyne, 2010), and is likely the case for many other sources, as most brain modules are organized with bilateral symmetry. This susceptibility limits the beamformer's ability to accurately localize such sources, and results in significant attenuation of the source magnitude and, hence, a poor reconstruction of the time courses. In addition, interactions between correlated sources can result in spurious activation patterns. For example, it has been shown that correlated auditory sources produce broad activation patterns in the mid-sagittal plane (Van Veen et al., 1997, Quraan and Cheyne, 2008, Quraan and Cheyne, 2010).
Another major limitation of beamforming algorithms is their inability to spatially filter out strong sources from outside the region of interest, a phenomenon often referred to as leakage.
2.1. The scalar beamformer
Beamformers are based on the concept of spatial filtering, where the aim is to pass the signal from the location of interest while blocking signals from all other locations. To achieve this, a weight vector, w(r), is applied to the vector of measured fields, b(t), to estimate the source activity at location r and time t:

ŝ(r, t) = wᵀ(r) b(t)    (1)

for an MEG system with M sensors, where b(t) is an M × 1 vector of measurements. Defining a lead field vector, l(r), whose elements are the fields measured at each of the M sensors due to a unit-strength source at r, an ideal spatial filter would satisfy

wᵀ(r) l(r′) = 1 for r′ = r, and 0 otherwise    (3)
In reality it is impossible to satisfy (3) and fully attenuate all sources outside the spatial location of interest. The exercise of designing an optimum spatial filter, therefore, amounts to minimizing the contribution from sources outside the location of interest. This is achieved either through a MV or a least-mean-squares technique, and in all cases the solution to this problem takes the form (Reddy et al., 1987)

w(r) = γ R⁻¹ l(r)    (4)

where γ is a scalar whose value depends on the details of the method used, and R is the covariance matrix of the measurements.
2.1.1. Beamformer constraints
The MV-based beamformers find a set of weights, w(r), that minimize the output power

min over w of wᵀ(r) R w(r)    (6)

subject to a constraint; the choice of this constraint determines the beamformer properties in terms of location bias, resolution, and the presence of artefacts. One popular choice is the unit-gain (UG) constraint, where the signal from the location of interest, r, is passed with unit gain,

wᵀ(r) l(r) = 1    (7)

in order to preserve the source magnitude. While this choice yields a good image resolution, it can be shown to be biased, and can result in strong artefacts near the centre. The constraint

wᵀ(r) l(r) = ‖l(r)‖    (8)

on the other hand, results in the same high resolution and can be shown to be unbiased with no artefacts near the centre (Sekihara et al., 2005), although it applies a variable gain to the source amplitude depending on its location. This type of beamformer is often referred to as lead-field (LF) constrained or array-gain constrained. A third type of beamformer, the unit-noise-gain (UNG) beamformer, imposes a constraint on the noise gain such that the weights satisfy

wᵀ(r) w(r) = 1    (9)

and results in no localization bias, and significantly higher resolution than the other two constraints above.
The bias of the unit-gain constrained beamformer has been corrected in the literature by dividing the source power by the noise power (Van Veen et al., 1997), resulting in a quantity described in units of pseudo-z scores (Robinson and Vrba, 1999). In the presence of uncorrelated random noise that is identical across all channels, this modification is equivalent to normalizing the weights so that

w̃(r) = w(r) / (σN √(wᵀ(r) w(r)))    (11)

where σN² is the noise variance, and w̃(r) is the normalized weight vector. In terms of constraints, this means that the normalization in (11) effectively removes the constraint in (7) (or (8)) and replaces it with the condition in (9). The performance of any adaptive spatial filter also depends on the parameters affecting the accuracy of the covariance matrix, such as the frequency bandwidth and the integration time. For a detailed assessment of the effects of these parameters on beamformer performance we refer the interested reader to the recent publication by Brookes et al. (2008).
The solution to the optimization problem in (6) can be obtained using the method of Lagrange multipliers, which determines γ in (4) to be

γ = [lᵀ(r) R⁻¹ l(r)]⁻¹

for the unit-gain constrained beamformer. The solution for the lead-field constrained beamformer has the same form with

γ = ‖l(r)‖ [lᵀ(r) R⁻¹ l(r)]⁻¹

The unit-noise-gain constrained beamformer gives

γ = [lᵀ(r) R⁻² l(r)]^(−1/2)
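As a concrete illustration, the weights for the three constraints can be computed numerically from a covariance matrix R and a lead field l. The sketch below uses random stand-in values (the lead fields, source strengths, and noise level are illustrative assumptions, not the chapter's simulations), and also verifies that normalizing the unit-gain weights by their norm reproduces the unit-noise-gain weights, as noted in the discussion of (11):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: M sensors, a data covariance R built from two
# uncorrelated sources plus sensor noise (illustrative values only).
M = 64
l = rng.standard_normal(M)      # lead field of the location of interest
l2 = rng.standard_normal(M)     # lead field of a second, interfering source
R = 4.0 * np.outer(l, l) + 2.0 * np.outer(l2, l2) + 0.1 * np.eye(M)

Rinv = np.linalg.inv(R)

# Unit-gain (UG):        w^T l = 1      -> gamma = 1 / (l^T R^-1 l)
w_ug = Rinv @ l / (l @ Rinv @ l)

# Lead-field/array-gain: w^T l = ||l||  -> gamma = ||l|| / (l^T R^-1 l)
w_lf = np.linalg.norm(l) * Rinv @ l / (l @ Rinv @ l)

# Unit-noise-gain (UNG): w^T w = 1      -> gamma = 1 / sqrt(l^T R^-2 l)
w_ung = Rinv @ l / np.sqrt(l @ Rinv @ Rinv @ l)

# Each constraint is satisfied by the corresponding weights.
assert np.isclose(w_ug @ l, 1.0)
assert np.isclose(w_lf @ l, np.linalg.norm(l))
assert np.isclose(w_ung @ w_ung, 1.0)

# Normalizing the UG weights by their norm reproduces the UNG weights.
assert np.allclose(w_ug / np.linalg.norm(w_ug), w_ung)
```

Since all three weight vectors are proportional to R⁻¹l(r), they differ only by the scalar γ; the functional images they produce therefore differ in scale and bias but not in the direction of the weight vector.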
2.2. The vector beamformer
The formalism presented so far assumes the source direction to be known, and hence applies to a scalar beamformer for which the source direction is determined by a separate procedure. The techniques usually employed to calculate the source direction, however, integrate over a wide time range, often involving hundreds of milliseconds. Since sources are typically active over a short range, this results in averaging over significant amounts of noise (instrumental noise, brain noise, leakage) which may or may not be isotropic. This issue becomes particularly significant for small-SNR sources. In addition, by integrating over a wide time range one assumes that only one source with a constant orientation is active at a given voxel over the entire time range. Using a vector beamformer one can avoid such assumptions and biases by calculating the three components of the weight vector in order to track the three components of the source activity vector. However, since radial components of source activity inside a conducting sphere are not detectable outside the sphere, in cases where the head model is assumed to be spherical, one can switch to spherical coordinates and track the tangential (θ and φ) components of the source activity with weight vectors wθ(r) and wφ(r), respectively. Writing W(r) = [wθ(r), wφ(r)] and L(r) = [lθ(r), lφ(r)] for the composite weight and lead field matrices, the minimization in (6) and its constraint (7) now become

min over W of tr{Wᵀ(r) R W(r)}    (18)

subject to

Wᵀ(r) L(r) = I    (19)

yielding the solution

W(r) = R⁻¹ L(r) [Lᵀ(r) R⁻¹ L(r)]⁻¹

Following the approach of Sekihara et al. (2001), the metric

P(r) = tr{Wᵀ(r) R W(r)} = tr{[Lᵀ(r) R⁻¹ L(r)]⁻¹}

can be used to construct the functional images.
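A minimal numerical sketch of the vector beamformer solution and the trace metric, using a random stand-in covariance and a hypothetical two-column tangential lead field (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 64

# Hypothetical 2-column lead field for the two tangential orientations
L = rng.standard_normal((M, 2))
# Stand-in covariance: sample noise covariance plus two tangential sources
R = np.cov(rng.standard_normal((M, 500))) + L @ np.diag([3.0, 1.0]) @ L.T

Rinv = np.linalg.inv(R)

# Vector unit-gain solution: W = R^-1 L (L^T R^-1 L)^-1, so that W^T L = I
W = Rinv @ L @ np.linalg.inv(L.T @ Rinv @ L)
assert np.allclose(W.T @ L, np.eye(2))

# Output power metric for the functional image: the trace of the 2x2
# reconstructed power matrix, tr{W^T R W} = tr{(L^T R^-1 L)^-1}
power = np.trace(W.T @ R @ W)
assert np.isclose(power, np.trace(np.linalg.inv(L.T @ Rinv @ L)))
```

Note that the 2×2 matrix (LᵀR⁻¹L)⁻¹ carries the reconstructed power of both tangential components, so the trace metric does not require a prior orientation estimate.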
2.3. The spatiotemporal (event-related) beamformer
For a spatiotemporal or "event-related" beamformer (Sekihara 2001, Robinson 2004) the source activity at a given instant in time can be calculated using (1) for each of the two orthogonal directions, where the weights are applied to the trial-averaged (event-related) measurements.
2.4. Computation of source orientation
Scalar beamformers rely on a determination of the source orientation to construct the component of source activity in what is deemed to be the optimal direction. The source orientation is typically determined either through a grid search aimed at maximizing source power in each voxel (Robinson and Vrba 1999), or through an eigendecomposition of the source power, where the largest eigenvalue is identified as the one corresponding to the source originating from that voxel (Sekihara et al., 2001). The eigendecomposition method is essentially equivalent to the grid-search method, but achieves the purpose in a more computationally efficient fashion. In both methods, however, the power is calculated over a wide time range, typically spanning hundreds of milliseconds. Furthermore, the assumption is made that all activity determined by the beamformer to originate from a given voxel is indeed generated by brain sources at that location. For cases where the source of interest is weak and other strong sources produce strong leakage into the region of interest, this assumption fails: most of the activity in the region of interest over a long time range results from leakage from strong sources outside it, and hence the determination of the source orientation using either of these methods is subject to significant errors. The scalar beamformer, therefore, ends up constructing the component of the source activity in a direction different from that of the source of interest and dismissing what might be a significant component of it. The vector beamformer, on the other hand, constructs the source activity in the two tangential directions, dismissing only the radial component, which is typically small.
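The equivalence of the grid-search and eigendecomposition methods can be illustrated with a small sketch in the two tangential dimensions. Here the unit-gain output power 1/(uᵀ(LᵀR⁻¹L)u) is maximized over unit orientation vectors u; the lead fields and covariance are random stand-ins, not taken from the chapter's simulations:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 64
L = rng.standard_normal((M, 2))                 # tangential lead fields
R = L @ np.diag([5.0, 0.5]) @ L.T + 0.2 * np.eye(M)
Rinv = np.linalg.inv(R)

A = L.T @ Rinv @ L                               # 2x2 matrix of the inverse problem

# Grid search: scan orientations u(theta) and maximize the unit-gain
# output power P(u) = 1 / (u^T A u), i.e. minimize u^T A u.
thetas = np.linspace(0.0, np.pi, 1801)
powers = [1.0 / (np.array([np.cos(t), np.sin(t)]) @ A
                 @ np.array([np.cos(t), np.sin(t)])) for t in thetas]
t_best = thetas[int(np.argmax(powers))]
u_grid = np.array([np.cos(t_best), np.sin(t_best)])

# Eigendecomposition: the optimal orientation is the eigenvector of A
# with the smallest eigenvalue (equivalently, the largest eigenvalue of
# the reconstructed power matrix A^-1).
evals, evecs = np.linalg.eigh(A)                 # eigenvalues in ascending order
u_eig = evecs[:, 0]

# The two methods agree up to sign and grid resolution.
assert min(np.linalg.norm(u_grid - u_eig),
           np.linalg.norm(u_grid + u_eig)) < 5e-3
```

The eigendecomposition replaces the scan over orientations with a single 2×2 (or 3×3) eigenproblem per voxel, which is why it is the computationally preferred route.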
In many applications of the scalar beamformer, however, the source of interest is the dominant one, and the effect of leakage is small (in the absence of strong external noise artefacts). In this case an accurate determination of the source angle can be achieved, and the construction of the source activity component in only this direction is justified. However, constructing source activity of both tangential components even in such cases guards against dismissing unexpected components, and provides verification that the source orientation was properly computed.
Figure 1 shows a performance comparison of the scalar and vector beamformers in localizing simulated hippocampal sources folded in with real visual data (Quraan et al. 2010c, Quraan et al. 2011). For the vector beamformer, the averaged localization error increases from ~15 mm at 40 nAm to ~18 mm at 20 nAm, while the scalar beamformer results in an averaged localization error of ~18 mm at 40 nAm, increasing to ~27 mm at 20 nAm. Figure 1 (right) shows the number of failures for both the vector and scalar beamformers as a function of simulated source strength, where a failure is defined to be a case where the localization error exceeds 20 mm. At 40 nAm the vector beamformer fails 20% of the time, while the scalar fails 40% of the time. While the failure rate of the vector beamformer remains relatively unchanged as the simulated source strength is reduced to 20 nAm (~20%), the scalar beamformer fails 47% of the time at 30 nAm and 60% of the time at 20 nAm. As the source strength is further reduced, the signal strength drops well below the background noise level, and both beamformers fail equally at identifying and properly localizing the sources.
2.5. Computation of head model
The magnetic fields measured by MEG are produced mainly by current flow in the relatively homogenous intracranial space, and are not significantly affected by the poor conductivity of the scalp. The performance of the inverse solution, however, depends on assumptions regarding the head shape. For example, when a spherical approximation of the head shape is used, a radial source component relative to this sphere will yield a null value outside the head (Sarvas 1988). Since the human head deviates significantly from a sphere, in reality, spherical approximations present a source of error, the magnitude of which is highly dependent on the source location and orientation. The use of boundary element models (BEM) and finite element models (FEM) provides a significant improvement. In such cases a 3D vector beamformer can be used instead of the 2D beamformer discussed above.
3. Temporal correlations
3.1. Effects of temporal correlations on the performance of the beamformer
To demonstrate the effect of temporal correlations on the performance of the beamformer, we simulate two sources in the auditory cortex, one in the right and one in the left temporal lobe. Figure 2a shows the images resulting from this analysis when the sources are uncorrelated, superimposed on a subject’s MRI. The simulated positions of both sources were recovered to within the scanning resolution (0.25 cm) of the beamformer. Also shown in Figure 2a are slices in the xy-plane at the position of the sources (z=4.5). The sources were then made fully correlated and the simulation was repeated with the same parameters. In this case the beamformer finds no peaks, and a z-slice at the position of the two sources looks completely flat with pseudo-z values corresponding to the noise level as shown in Figure 2b. In this case the two correlated sources fully attenuated each other.
The simulation was then repeated with the same parameters as above but this time random Gaussian noise was added at values corresponding roughly to those of a typical MEG system. For the case when the two sources are uncorrelated, they were localized to their simulated position to within the scanning resolution of the beamformer, as shown in Figure 3a. The lower resolution is a result of using much higher noise values compared to the previous simulation. When the sources were made fully correlated, localization revealed three sources instead of two, the source with the highest strength being a broad source in the mid-sagittal plane, as shown in Figure 3b. The next two sources were in the right and left temporal lobes but were poorly localized, appearing ~2 cm away from the simulated position, closer to the skull.
Figure 4 shows another simulation where the two sources were brought closer together so that they are separated by only 3 cm. When the two sources were uncorrelated, they were localized to their simulated positions to within the beamformer scanning resolution, as shown in Figure 4a. When the two sources were made correlated, instead of two distinct sources the beamformer shows only one broad source halfway between the two simulated positions, as shown in Figure 4b.
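The mutual cancellation of temporally correlated sources by the minimum-variance beamformer can be reproduced in a toy numerical experiment. The lead fields are random stand-ins and the 10 Hz time course is arbitrary; this is not the simulation used for the figures, only a minimal demonstration of the effect:

```python
import numpy as np

rng = np.random.default_rng(3)
M, T = 64, 2000
l1, l2 = rng.standard_normal(M), rng.standard_normal(M)  # stand-in lead fields

def reconstructed_power(correlated):
    t = np.arange(T) / 1000.0                  # 1 kHz sampling, 2 s of data
    s1 = np.sin(2 * np.pi * 10 * t)
    # Fully correlated: identical time courses; uncorrelated: 90-degree shift
    s2 = s1 if correlated else np.cos(2 * np.pi * 10 * t)
    b = np.outer(l1, s1) + np.outer(l2, s2) + 0.05 * rng.standard_normal((M, T))
    R = np.cov(b)
    Rinv = np.linalg.inv(R)
    w = Rinv @ l1 / (l1 @ Rinv @ l1)           # unit-gain weights at source 1
    return np.var(w @ b)                        # reconstructed source power

p_uncorr = reconstructed_power(False)
p_corr = reconstructed_power(True)

# The fully correlated pair mutually cancels: the reconstructed power at
# source 1 collapses toward the noise floor.
assert p_corr < 0.1 * p_uncorr
```

Intuitively, with identical time courses the minimum-variance criterion can drive wᵀ(l1 + l2) toward zero while still honouring wᵀl1 = 1, so the two sources attenuate each other at the filter output.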
3.2. Correcting for temporal correlations
A few techniques have recently been developed to correct for the adverse effects of temporal correlations on the performance of the beamformer. In this section we discuss and compare four of these techniques.
3.2.1. Higher order covariance matrix
Huang et al. (2007) suggested that the use of a higher-order covariance matrix improves the performance of the beamformer in the presence of temporally correlated sources. In particular, a third-order covariance matrix was suggested for optimal performance. This method was applied to the simulations described in the previous section, using a third-order covariance matrix to calculate the beamformer weights. For uncorrelated sources the result (Figure 5a) was similar to Figure 3a, but the pseudo-z scores were a few percent higher, while the resolution was noticeably lower. The result for the case of correlated sources is shown in Figure 5b. We observe that the highest peak belongs to a spurious broad source in the mid-sagittal plane analogous to that of Figure 3b, except that the source here is stronger and broader. While the other two sources are found in the left and right temporal lobes, the localization error is greater than 2 cm, similar to that obtained with a first-order covariance matrix. We therefore conclude that this model fails to correct for correlation effects in beamformer localization.
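For reference, one reading of the higher-order covariance approach is sketched below: the data covariance R is simply replaced by its matrix power R³ in the weight computation, leaving the unit-gain constraint intact. This is a hedged illustration with random stand-in lead fields, not the implementation of Huang et al. (2007):

```python
import numpy as np

rng = np.random.default_rng(4)
M = 64
l = rng.standard_normal(M)       # stand-in lead field of the target location
l2 = rng.standard_normal(M)      # stand-in lead field of a second source
R = np.outer(l, l) + np.outer(l2, l2) + 0.1 * np.eye(M)

# Third-order covariance: substitute R^3 for R in the weight formula (4)
R3 = np.linalg.matrix_power(R, 3)
R3inv = np.linalg.inv(R3)
w3 = R3inv @ l / (l @ R3inv @ l)

# The unit-gain constraint at the location of interest is still satisfied
assert np.isclose(w3 @ l, 1.0)
```

Raising R to a higher power sharpens the contrast between the dominant signal subspace and the noise floor, but, as the simulations above show, it does not remove the interference between correlated sources.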
3.2.2. The dual lead-field approach
Another method proposed by Brookes et al. (2007) relies on calculating a dual lead field based on the two interfering sources, so that the pair is reconstructed as a single composite source, adding yet another parameter, the relative weighting of the two lead fields, to the fitting procedure.
3.2.3. Selecting a subset of sensors
Another approach that has been used as an alternative to implementing explicit corrections in the algorithms for reducing correlation effects in beamformer localization is to use only the sensors from the left side of the MEG array to analyze left-hemisphere sources and, conversely, right-hemisphere sensors to analyze right-hemisphere sources (Herdman et al., 2003). This approach is mainly applicable to auditory cortex sources with widely separated field patterns. However, a recent publication (Popescu et al., 2008) has shown that such a technique can result in significant localization errors due to incomplete removal of the signal from the contralateral source, in agreement with our own results (Quraan, unpublished).
3.2.4. The coherent source suppression model (CSSM)
A more general model has been proposed by Dalal et al. (2006) which relies on specifying additional constraints in deriving the beamformer weights. As shown in eq. (3), the condition to block signals from outside the location of interest is not explicitly specified; rather, one relies on the minimization in (18) and (19) to achieve it. Since this is not achieved in the presence of correlated sources, one can explicitly add constraints to demand it. In particular, null constraints of the form

Wᵀ(r) L(rₛ) = 0

are added to the optimization problem defined by (18) and (19), where rₛ is the location of the correlated source whose contribution is to be suppressed. Implementations of this model can therefore allow the user to specify the location of the suspected correlated source to be suppressed, or suppression point, rₛ.
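A minimal sketch of such a constrained solution in the scalar case: the unit-gain constraint at the location of interest and a null constraint at the suppression point are stacked into one constraint matrix, and the weights follow the standard linearly-constrained minimum-variance form. The lead fields and covariance are random stand-ins, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
M = 64
l = rng.standard_normal(M)       # lead field at the location of interest
l_s = rng.standard_normal(M)     # lead field at the suppression point
R = np.outer(l, l) + np.outer(l_s, l_s) + 0.1 * np.eye(M)
Rinv = np.linalg.inv(R)

# Stack both constraints: w^T l = 1 (pass) and w^T l_s = 0 (block)
C = np.column_stack([l, l_s])
f = np.array([1.0, 0.0])

# Constrained minimum-variance solution: w = R^-1 C (C^T R^-1 C)^-1 f
w = Rinv @ C @ np.linalg.solve(C.T @ Rinv @ C, f)

assert np.isclose(w @ l, 1.0)                  # unit gain at the source
assert np.isclose(w @ l_s, 0.0, atol=1e-8)     # exact null at the suppression point
```

Because the null is enforced exactly in the weights, the correlated interferer can no longer cancel the source of interest, at the cost of the long-range suppression effects discussed below.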
Figure 6 shows the beamformer localization of the two simulated correlated auditory sources discussed above when applying CSSM to correct for temporal correlations. First, we note that the mid-sagittal source has now disappeared, verifying that it is an artefact of the interference between the two sources. Second, the strengths of the reconstructed sources in the simulated dataset have increased by ~90% and ~130%, since the attenuation effects due to correlations are reduced. Third, both sources were localized to their simulated positions to within the beamformer scanning resolution.
3.2.4.1. Systematic effects of CSSM
To consider the effect of the distance between a source and a suppression point on beamformer source reconstruction, a single source was simulated in the right temporal lobe and random Gaussian noise was added at levels typical of an MEG system. The beamformer was first used to localize the source with no suppression points; a suppression point was then added at a position 9 cm away from the source and the localization procedure was repeated. This was repeated with the suppression point brought progressively closer to the source, as illustrated in Figure 7b. Figure 7a shows the percentage reconstructed source strength (i.e. reconstructed source strength with suppression relative to no suppression) with the suppression point at various distances along this line. The figure shows that even when the suppression point was more than a few centimetres away from the source, a large portion of the source was still suppressed. While the exact amount of suppression is specific to the simulation at hand (it depends on the number of sources, their locations and orientations, the amount of noise, the parameters used in the suppression model, etc.), the general effect holds: the suppression point has a long-range effect, and the amount of suppression increases with decreasing distance. This indicates that the method is effective even if we do not have precise information regarding the location of the correlated (interfering) source. However, this has a negative consequence as well: in this example, even when the suppression point was 7 cm away from a source of interest, that source was suppressed by ~20%. The issue becomes more problematic when the interferer and the source of interest are close together. In this example, when the correlated bilateral sources were 3 cm apart and the suppression point was placed at the location of one of the two sources, the source of interest was suppressed by ~50%.
In such cases one can achieve a better outcome by choosing a suppression point not to coincide with the interferer, but rather some distance away in a direction opposite to that of the source of interest. In this example, if one chooses a suppression point 1 cm away from the interferer (4 cm away from the source of interest), the interferer will be suppressed by ~80% while the source of interest will only be suppressed by ~30%. This long range effect of suppression points poses a serious limitation to this model with respect to general applications (such as grid search algorithms), but allows its application to specific cases where the suppression point(s) are carefully chosen for the case at hand. It should be noted however, that spatial localization of the source, regardless of distance from the suppression point, remained within the scanning resolution (0.25 cm) of the beamformer in these simulations in all cases except when the attenuation exceeded 80% (i.e. when the suppression point was within 1 cm from the source to be localized).
Other systematic effects of the CSSM model were also considered and evaluated in Quraan and Cheyne (2010), including the effect of distance between suppression point and interferer, the effect of the size of the suppression region, the effect of reducing the suppression region using singular value decomposition and the effect of resolution of the suppression region.
3.2.4.2. Example from auditory data
Table 1 shows the results from datasets acquired from eight subjects listening to a tone, analyzed with and without correlation suppression. Significant mid-sagittal artefacts are seen in seven out of the eight datasets, with the mid-sagittal spurious sources peaking higher than one of the two auditory sources in three of the eight datasets. Significant increases in source amplitude are observed when correlation suppression is used in all cases, ranging from 17% to 189%. We also note the uneven suppression of the left and right sources, which is highly significant since laterality indices are often used to assess cognitive function and sometimes to aid presurgical mapping, as well as the absence of one of the bilateral sources in one subject as a result of correlations. Changes in source localization of up to 1.1 cm are also observed.
Figure 8 shows the group-averaged functional images from all 13 subjects who participated in the same tone perception task study as well as in a vowel-speak condition (Beal et al. 2010). We note that the reconstructed sources were localized to the auditory cortex with a high degree of symmetry, consistent with other neuroimaging studies of the M100. We also note that the time courses of the left and right auditory sources are highly correlated and peak at ~100 ms, as expected.
Table 1 columns: Amplitude change (%) | Location change (cm) | Spurious source amplitude (%)
4.1. Effects of leakage on the performance of the beamformer
The purpose of a beamformer is to block signals from outside the voxel for which the source magnitude is being determined; in the presence of noise or other strong signals, however, beamformers can only partially attenuate such signals. Strong sources at other locations may make significant contributions to the estimated source activity, thereby limiting the localization accuracy or even obscuring a weak source. Reddy et al. (1987) derived an expression for the power output of two fully uncorrelated point sources and showed that in the presence of noise, the complete blocking of signals from outside the location of interest may not be achieved. Leakage presents a serious challenge for the beamformer when the source of interest is weak and other strong sources are present, a situation typical of cognitive paradigms that rely on sensory inputs to evoke much weaker activations in structures such as the hippocampus and the amygdala. Figure 9 shows beamformer localization of two uncorrelated simulated point sources (indicated by the red dots) where random Gaussian noise is added at typical MEG noise levels. While the peak activity is detected within the beamformer scanning resolution (5 mm), broad bands of activity are observed surrounding the actual point sources, confirming the beamformer's inability to prevent the strong auditory sources from leaking into the surrounding volume. In cases where the source of interest is a weak source lying in the vicinity of these leakage patterns, the detection and accurate localization of such a source will be compromised.
Figure 10a shows a group-averaged glass-brain plot of source activity in the Talairach coordinate system for sources simulated at 40, 30, 20, and 10 nAm, where the simulated signals were folded in with visual data. The group averages include datasets from 15 subjects with 150 trials/dataset. As can be seen from this figure, the 40 nAm hippocampal activation was detected as the strongest activation, but at 30 nAm the primary visual activation was the strongest. Although hippocampal activation is still visible, a leakage pattern resulting from the activation of the visual cortices and extending all the way to the hippocampus is also visible. At 20 and 10 nAm the hippocampal signal is no longer visible due to leakage from the primary visual sources. Since most hippocampal signals are below the 30 nAm range, the ability to properly detect and localize small signals is crucial (Quraan et al. 2011).
4.2. Reducing the impact of leakage on the performance of the beamformer
An approach that is highly effective in reducing the effect of leakage from the dominant sensory sources when higher cognitive processes are of interest is to design experiments that include appropriate control conditions. We have used this approach (Quraan et al. 2010b, Quraan et al. 2010c, Quraan et al. 2011, Mills et al., submitted) to investigate hippocampal activations. In such experiments the condition of interest (which we will call the experimental condition) is contrasted with a control condition designed to evoke the same dominant sensory activations but not the weak activation of interest.
Figure 10b shows the corresponding group-averaged images obtained when the control condition is subtracted.
4.2.1. Data subtraction techniques
In the discussion above we demonstrated that subtracting source images reconstructed from the experimental and control conditions can significantly reduce the impact of leakage from the dominant sources. Subtraction can also be performed on the data themselves, either trial by trial or after averaging, before source reconstruction.
We compared the performance of these three subtraction techniques by simulating hippocampal and medial prefrontal sources of various strengths (Mills et al., submitted). The simulated sources were embedded in real visual data obtained from 22 subjects at a latency of 300 ms, where the brain noise background from the visual activity has largely subsided, and at 100 ms, where the visual activity is at its peak. Three assessment measures were used in this comparison: the strength of the weakest source detected by each technique, the relative strength of signal and noise and the localization accuracy. Figure 12 shows the results for the low (top) and high (bottom) brain noise background conditions, respectively.
For the low brain noise condition we were able to detect both the medial prefrontal and the hippocampal sources down to below ~5 nAm with all techniques. However, IS yielded the least accurate localization of the three techniques, particularly for the hippocampal source: for a source strength of 10 nAm the error exceeded 20 mm for IS, while SUD gave a localization error of ~13 mm and SAD ~8 mm. The SAD and SUD techniques were thus more effective than IS at localizing the deep hippocampal source in the low brain noise condition.
For the more challenging high brain noise condition, the post-localization subtraction techniques outperformed IS for detection of the medial prefrontal source. This simulated source was only detected down to ~12 nAm with the IS technique, while both the SUD and SAD techniques were able to detect the source with a strength below 5 nAm. Similarly, the intersection of the signal and noise curves and localization error showed IS to be the least accurate. In the case of the hippocampal signal under the high brain noise condition, the difference between the techniques was substantial when considering the intersection of the signal and noise curves. For IS the intersection occurred at ~18 nAm, compared to ~7 nAm for SAD and ~2 nAm for SUD. For the localization error, IS showed the least accuracy, while SUD and SAD were comparable.
4.2.2. Empirical data example
The SUD and SAD techniques can be combined to capture the advantages of both: the control block is averaged first, and this average is then subtracted from every trial in the experimental condition. We applied this analysis to real data obtained with a transverse patterning paradigm, which requires memory for relations among stimuli and has been shown to depend on the integrity of the hippocampal system (Driscoll et al., 2005, Rickard et al., 2006). The corresponding control task, known as an elemental task, can be solved by learning about the individual elements rather than the relations among them, and may not rely on the hippocampal system to the same extent (Hanlon et al., 2003, 2005, Moses et al., 2009). For this task, group averages across 16 adult datasets were constructed. Images from the individual subjects were averaged over 40 ms time ranges with a 20 ms sliding window covering the range of 80 to 720 ms, resulting in 32 group averages (Figure 12).
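The averaged-control subtraction described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the function name, array shapes, and channel/sample counts are assumptions, with trials stored as NumPy arrays of shape (trials, sensors, samples).

```python
import numpy as np

def subtract_control_average(experimental, control):
    """Combine the SAD and SUD approaches: average the control trials
    first, then subtract that average from every experimental trial.

    experimental : array, shape (n_trials_exp, n_sensors, n_samples)
    control      : array, shape (n_trials_ctl, n_sensors, n_samples)

    Returns the control-corrected experimental trials, ready for
    averaging and source localization.
    """
    control_avg = control.mean(axis=0)      # (n_sensors, n_samples)
    return experimental - control_avg       # broadcast over trials

# Illustration with synthetic numbers (shapes only; not real MEG data)
rng = np.random.default_rng(0)
exp = rng.standard_normal((100, 275, 600))  # 100 trials, 275 sensors
ctl = rng.standard_normal((100, 275, 600))
corrected = subtract_control_average(exp, ctl)
print(corrected.shape)  # (100, 275, 600)
```

Because a single averaged control waveform is subtracted from every experimental trial, the noise added to each trial by the subtraction is small (it is the control average, whose variance is reduced by the number of control trials), while the dominant evoked sensory response common to both conditions is removed.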
4.3. Caveats and systematic effects
4.3.1. Experimental design
It is important to note that these analysis techniques depend on compatible experimental and control paradigms: the two should be as equivalent as possible except in the experimental paradigm’s ability to evoke the weak activation of interest. This includes the background activations that both paradigms evoke, as well as the timing of these activations. This does not, however, require that the brain activity resulting from the two conditions differ by only one feature. The main purpose is to subtract the dominant sensory sources that account for most of the leakage. If several weak sources remain, the beamformer will localize them more accurately than it would in the presence of the dominant sources. Even if the dominant source is not fully subtracted out, a significant reduction of its magnitude is advantageous, as the leakage from a source increases with source strength. More generally, differences between experimental and control conditions may arise from changes in the basic sensory processes rather than in the higher cognitive process of interest. This emphasizes the need for careful paradigm design and multi-modality validation, since different imaging methods are subject to different systematic effects. When differences between the experimental and control paradigms are observed at the sensor level, it is not possible to distinguish between these two cases, because the sources responsible for the differences are unknown. When the differences are localized (i.e., in source space), however, we can tell whether the intended outcome of the paradigm (e.g., increased hippocampal involvement) has been achieved. In the case above, for example, we were able to localize hippocampal and frontal activations in both the n-back and the transverse patterning paradigms.
4.3.2. Head motion
Differences in head position relative to the sensors between the experimental and control conditions are a source of error for the pre-localization subtraction techniques, as data subtraction occurs prior to MRI-MEG co-registration. However, head motion correction techniques, which have been demonstrated to achieve an accuracy below 2 mm (Taulu et al., 2005, Wilson et al., 2007), alleviate this shortcoming. The same algorithms can also reference the head position from multiple datasets to a common frame, allowing multiple datasets, including the experimental and control blocks, to be added or subtracted with high precision and thereby improving the accuracy of pre-localization data subtraction. While specialized head-tracking hardware and software for motion correction are available, most MEG systems currently in use lack this capability. In cases where the experimental and control conditions can be interleaved, however, this issue is no longer a concern. Furthermore, in protocols where multiple trials are averaged, the smearing in source position caused by head position differences among trials not only increases the localization error but also reduces the ability to detect weak sources.
4.3.3. Leakage from other sources of noise
Strong sources of noise other than brain noise can also produce significant leakage and hinder the detection and localization of weaker sources. Eye movements and eye blinks typically produce signals much larger than brain signals, as do artefacts from dental braces and fillings. Independent component analysis (ICA) can help remove such noise components, and open-source packages such as EEGlab can be used to remove these artefacts prior to applying a beamformer solution. Note, however, that regularization will then be needed, as the removal of components reduces the rank of the covariance matrix.
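The need for regularization after component removal can be illustrated with a minimal sketch. The sensor covariance becomes rank-deficient once ICA components are projected out, so a small diagonal loading term is added before inverting it to form the beamformer weights. All numbers (sensor count, retained components, loading fraction) are illustrative, and the unit-leadfield LCMV weight formula shown is a standard textbook form rather than the specific implementation used in this chapter.

```python
import numpy as np

def lcmv_weights(cov, leadfield, reg_frac=0.05):
    """Regularized LCMV beamformer weights for one voxel.

    After ICA components are removed, the sensor covariance `cov`
    loses rank and cannot be inverted directly; diagonal loading
    (adding a small multiple of the identity) restores invertibility.

    cov       : (n_sensors, n_sensors) sensor covariance matrix
    leadfield : (n_sensors,) forward field of a unit source at the voxel
    reg_frac  : loading as a fraction of the mean sensor power
    """
    n = cov.shape[0]
    loaded = cov + reg_frac * np.trace(cov) / n * np.eye(n)
    cinv = np.linalg.inv(loaded)
    # w = C^-1 l / (l^T C^-1 l): unit gain at the voxel, minimum variance
    return cinv @ leadfield / (leadfield @ cinv @ leadfield)

# Rank-deficient covariance: data confined to 10 retained components
rng = np.random.default_rng(1)
mixing = rng.standard_normal((50, 10))           # 50 sensors, 10 components
data = mixing @ rng.standard_normal((10, 1000))  # rank-10 sensor data
cov = np.cov(data)                               # singular: rank 10 < 50
lf = rng.standard_normal(50)                     # hypothetical leadfield
w = lcmv_weights(cov, lf)
print(w.shape)  # (50,)
```

Without the loading term, `np.linalg.inv(cov)` would fail or return a numerically meaningless result for the singular matrix; with it, the weights still satisfy the unit-gain constraint `w @ lf == 1` at the voxel of interest.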
We conclude that, in the absence of inverse models that can localize sources with good precision and high resolution while remaining robust to interference from other sources, whether correlated or uncorrelated, the analysis of MEG data requires careful validation. In this respect, many of the inverse models typically employed are complementary, and arriving at sound conclusions often requires the use of multiple techniques. Whenever ambiguities are present, such as in the case of weak or deep sources, when using imaging techniques known to suffer from spatial biases and low resolution, or in the presence of correlations when beamformers are used, it is prudent to carry out the necessary corroboration procedures. Realistic simulations are a necessary tool for quantifying the performance of the various inverse models and analysis techniques, including what brain activity can and cannot be detected. Such simulations can also shed light on the experimental parameters needed to optimize the experimental design.
While beamformer techniques have been gaining popularity in the MEG field due to their ease of use, their limitations and drawbacks have often been largely ignored, leading to inappropriate use of these methods. Beamformer susceptibility to correlation effects has serious consequences for the ability to properly reconstruct brain activity, as it results in significant attenuation of temporally correlated signals, false positives, and localization inaccuracies. The often uneven attenuation of correlated bilateral sources leads to unreliable laterality information (e.g. the laterality index). In addition, this attenuation, which varies with signal-to-noise ratio as we have shown here, distorts the source time course and any measure derived from it, such as frequency spectra and functional connectivity measures.
Similarly, leakage effects, which produce broad reconstructed activation patterns because the beamformer cannot fully block signals originating outside the voxel for which the source strength is estimated, often lead to misinterpreted results. Indeed, one of the most commonly cited advantages of spatial filtering techniques is their ability to reconstruct spatially distributed sources, as opposed to dipole fits, which assume a point source. In reality, in the presence of typical MEG instrumental noise (as well as background brain noise), resolution limitations (e.g. as a result of leakage) produce a wide source spread that obscures the spatial spread of actual brain sources. In addition, leakage from the strong sensory activations, which as we have seen in this chapter in the context of auditory and visual activations spans a large volume surrounding the sources of interest, hinders the detection of weaker sources such as memory activations originating in the hippocampus.
Spatial filters of both the adaptive and non-adaptive type are local linear estimators that estimate source activity at a given location independently of every other location. In this respect, no attempt is made to account for the measured fields in their entirety. If the magnetic fields at the sensors were calculated from the beamformer solution at every voxel, the calculated field would exceed the measured field by a large amount due to resolution/leakage effects. Consequently, no acceptable χ2 (or goodness-of-fit) measure can be achieved, making it difficult to differentiate between acceptable and unacceptable solutions based on their ability to reproduce the measured fields. Acknowledging this fact, in this chapter (and in other publications) we relied on extensive realistic simulations to verify beamformer solutions. Such simulations are a necessity for verifying the outcome of any mathematical model, not least a model for which no quality assurance metric is available.
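This lack of a global goodness-of-fit measure can be demonstrated with a small synthetic experiment. In the sketch below, which is purely illustrative (sensor count, leadfield model, noise level, and voxel grid are all arbitrary assumptions, not the simulations used in this chapter), a single source is reconstructed with an LCMV beamformer over a grid of voxels whose leadfields overlap heavily with the true one, mimicking neighbouring grid points; summing the fields implied by all voxel estimates then greatly over-accounts for the measured field.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_vox, T = 50, 50, 1000  # sensors, voxels, time samples

# One simulated source with a unit-norm leadfield and random time course
l0 = rng.standard_normal(m)
l0 /= np.linalg.norm(l0)
s = rng.standard_normal(T)
b = np.outer(l0, s) + 0.1 * rng.standard_normal((m, T))  # measured field

# Sample covariance with light diagonal loading for stability
cov = b @ b.T / T
cinv = np.linalg.inv(cov + 0.05 * np.trace(cov) / m * np.eye(m))

# Voxel grid: leadfields strongly overlapping the true source
L = l0[:, None] + 0.03 * rng.standard_normal((m, n_vox))
L /= np.linalg.norm(L, axis=0)

# Sum the sensor fields implied by every voxel's beamformer estimate
recon = np.zeros_like(b)
for i in range(n_vox):
    li = L[:, i]
    w = cinv @ li / (li @ cinv @ li)   # LCMV weights for voxel i
    recon += np.outer(li, w @ b)       # field "explained" by voxel i

ratio = np.linalg.norm(recon) / np.linalg.norm(b)
print(f"reconstructed/measured field norm ratio: {ratio:.1f}")
```

Because each voxel's estimate contains leakage from the same true source (plus amplified noise), the summed reconstruction counts that source many times over, so the ratio comes out well above 1; a residual-based χ2 computed against such a reconstruction would be meaningless.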
While we have shown in this chapter that models and analysis techniques are available to work around these difficulties, in closing we emphasize that none of these techniques provides a fully robust solution. Appropriate validation, based on the use of multiple inverse models and on simulations, remains necessary to confirm experimental outcomes.
The author would like to thank Dr. Margot Taylor, Dr. Sandra Moses and Dr. Deryk Beal for providing the data and for many interesting discussions, and Mr. Travis Mills and Mr. Marc Lalancette for many interesting discussions and for help with the data analysis.