Electrophysiology of Autism



Introduction
The purpose of this chapter is to give an overview and summary of findings to date for event-related potentials, event-related spectral perturbations, and spontaneous electrical activity in ASD. The topic of epilepsy, although relevant to the electrophysiology of autism, is beyond the scope of this chapter. Interested readers are directed to other, recent reviews on this topic [1][2][3][4].

EEG and MEG in Autism Spectrum Disorders (ASD)
Electroencephalography (EEG) and magnetoencephalography (MEG) are noninvasive methods of measuring activity in the brain. Both methods have been used extensively in autism spectrum disorders (ASD), revealing differences in cognitive processing in individuals with ASD compared to typically developing controls. Both EEG and MEG measure synchronized neural activity in the brain with excellent temporal resolution, with EEG measuring electrical potentials on the scalp and MEG measuring the magnetic field outside the head produced by current flow in the brain [5]. Processes in ASD that are commonly investigated with EEG and MEG include auditory processing, face processing, visual processing, speech and language, and social cognition. EEG and MEG can be used to assess simple sensory processing as well as higher-order cognitive processing and can often reveal information that is unavailable from behavioral tasks. Many studies have found differences in brain activity measured by EEG and/or MEG in spite of equivalent performance between ASD and control groups. As such, while behavioral performance may appear normal, measures of brain activity may reveal compensatory processes masking abnormal brain activity [6].
A commonly used measure is the event-related potential (called the event-related field in MEG), which is a waveform created by averaging brain activity recorded with EEG or MEG across multiple trials of a particular task. As such, ERPs are time-locked to the presented stimuli. ERPs have made significant contributions to the current understanding of ASD and have been used extensively as measures of brain function in ASD and in understanding behavioral components of the disorder. In addition to ERPs, the frequency content of brain activity measured by EEG and MEG can be investigated by focusing on event-related spectral perturbations (ERSPs). These measures have become increasingly common as more sophisticated analytical methods are developed and allow for identification of spectral changes that are both phase-locked and non-phase-locked to stimuli.

Event-Related Potentials (ERP) and Event-Related Spectral Perturbations (ERSP)
Event-related potentials (ERPs), called the event-related field (ERF) in MEG, are created by averaging the response to stimuli across multiple trials. The type of stimuli and experimental paradigm used determine the nature of the observed ERP components and the use of common ERP naming conventions allows for results to be compared across studies. ERPs serve as physiological markers thought to reflect processing of both sensory and higher-level cognitive information and are defined based on polarity (positive or negative) and latency (timing post-stimulus). For example, the N100 is a negative potential (in EEG; polarity is not reflected in MEG) that is seen around 100 ms post-stimulus (see Figure 1). Components are named based on the time at which they occur (e.g., 100 ms) or the order in which they are seen. As such, the N100 may also be referred to as the N1, as it is the first negative component seen in the waveform. To indicate that it is a magnetic event-related field (ERF), the N100 is called the M100 when measured with MEG. Latency of an ERP/ERF indicates the time course of the neural activity, while amplitude is thought to reflect the allocation of neural resources necessary for a particular task [7,8].
ERP waveforms from a commonly used paradigm, the auditory oddball task, are shown in Figure 1. In this task, study participants listen to a series of sounds in which there are both infrequent ("rare") and frequent ("standard") stimuli. In the example in Figure 1, rare stimuli were 1000 Hz tones (25% of stimuli) presented among the standard stimuli consisting of 500 Hz tones (75% of stimuli). This task elicits a clear N100 response to both types of stimuli, but the P300 component is much larger to rare stimuli compared to standard stimuli. This type of paradigm also allows for the detection of the mismatch negativity (MMN) component, which is a response elicited by changes in stimuli. This component is derived by subtracting the standard waveform from the rare waveform, as can be seen in Figure 1.
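The averaging and difference-wave computation just described can be sketched in a few lines of Python. This is a minimal illustration using simulated data; the sampling rate, epoch length, trial counts, and MMN search window are assumptions for the example, not parameters of the study shown in Figure 1:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                   # sampling rate in Hz (assumed)
n_samples = 350                            # epoch from -100 to +600 ms
times_ms = (np.arange(n_samples) / fs - 0.1) * 1000.0

# Simulated single-trial epochs (trials x samples); in a real study these
# would be baseline-corrected EEG epochs for each stimulus type.
standard_trials = rng.normal(0.0, 1.0, (300, n_samples))
rare_trials = rng.normal(0.0, 1.0, (100, n_samples))

# Time-domain averaging across trials yields the ERP for each condition.
erp_standard = standard_trials.mean(axis=0)
erp_rare = rare_trials.mean(axis=0)

# The mismatch negativity is the difference wave: rare minus standard.
mmn = erp_rare - erp_standard

# Peak MMN latency within an assumed 100-250 ms search window.
window = (times_ms >= 100) & (times_ms <= 250)
peak_latency_ms = times_ms[window][np.argmin(mmn[window])]
```

Because the rare condition contributes fewer trials, its average is noisier; in practice, trial counts and artifact rejection criteria are chosen to keep the residual noise in both averages acceptably low.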
Over the past 10 years, while signal-averaged ERP analyses have continued to develop with increased sophistication, there has also been increased awareness of the loss of information in MEG and EEG due to time-domain averaging. Motivated in part by time-frequency transformations and by the development of metrics to assess the underlying assumption of time/phase-locking of the response to a stimulus, researchers have focused more recently on stimulus-related changes in the frequency content of EEG/MEG data. With time-domain averaging, any jitter in the phase of the EEG response with respect to the onset of the stimulus or response causes a reduction or spreading of the ERP amplitude, and with enough jitter will reduce the ERP to the noise floor. ERP phenomena are therefore reliant on a relatively high degree of phase-locking (often called time-locking in the ERP literature) and high inter-trial phase consistency. Responses that are highly phase-locked are considered evoked responses. There are, however, legitimate stimulus-related EEG signals that appear to be inherently non-phase-locked (i.e., the signal change is present, but varies from trial to trial). Such non-phase-locked events are known as induced responses. While averaging such responses in the time domain tends to eliminate them, averaging of induced responses can be achieved by first transforming the single-trial data into the frequency domain, then averaging. Figure 2 illustrates two types of ERSP, one highly phase-locked and one not, from MEG experiments.

Figure 1. Event-related potential (ERP) waveforms. Responses to an auditory oddball task, in which infrequent (rare) stimuli consisting of 1000 Hz tones (70 ms duration; 25% of stimuli) were presented binaurally among frequent (freq) 500 Hz tones (70 ms duration; 75% of stimuli). Participants responded to rare tones with a button press. The ERP response to the rare tones is shown in red, with the response to the frequent tones in blue. Common ERP components (P1, N1, P2, P3) are labeled. The mismatch negativity (MMN) is shown in green and is derived by subtracting the response to the standard tones from the response to the rare tones. Data shown are adapted from McFadden et al. [177].
Various analyses of spectral changes evoked or induced by stimuli or associated with responses have been collectively termed event-related spectral perturbations (ERSP) by Makeig and colleagues [9]. The term ERSP encompasses the earlier and still often used terms event-related desynchronization (ERD) and event-related synchronization (ERS), which describe stimulus-related decreases and increases in spectral power, respectively. ERSP, ERD and ERS phenomena can be evoked (phase-locked), induced (non-phase-locked), or both.

Figure 2. Event-related spectral perturbations (ERSP). Auditory response to a 500 ms, 40 Hz modulated stimulus (Auditory, left column) and sensorimotor response to unilateral index finger movement (Motor, right column) are shown. Top row = evoked power, relative to prestimulus baseline. Second row = total, or induced, power relative to baseline. Third row = phase-locking factor (PLF). Bottom row = time-domain averaged evoked field (ERF). For rows 2 and 3, ERS is shown in red colors and ERD in blue colors. For the auditory stimulus, the baseline period for relative power and DC offset is the -200 to 0 ms period preceding the stimulus. For the motor ERSP, the baseline period is -3000 to -2000 ms. For the auditory steady-state response, which is highly phase-locked (see third row), the gamma response at 40 Hz is clearly evident in the evoked power, but can also be seen in the time-domain average. The motor beta ERD and ERS responses, which are not phase-locked, are best visualized in the induced, or total, power. Note that the low-frequency motor evoked field, clearly observed at time 0 in the motor response, is phase-locked and seen in the PLF, evoked, and total power.
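The distinction between evoked power, induced (total) power, and the phase-locking factor can be illustrated numerically. The sketch below, using simulated single trials (all parameters are illustrative assumptions), contrasts a 40 Hz response whose phase is constant across trials with one whose phase varies from trial to trial. In a real analysis the data would first be band-pass filtered or wavelet-transformed around the frequency of interest before extracting phase:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
fs, n_trials, n_samples = 500, 100, 500   # assumed sampling rate and sizes
t = np.arange(n_samples) / fs

# Simulated 40 Hz single-trial responses in additive noise:
# "phase_locked" has the same phase on every trial (like the auditory
# steady-state response); "non_locked" has a random phase on each trial
# (like motor beta ERD/ERS). Amplitudes and noise levels are arbitrary.
phase_locked = np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, (n_trials, n_samples))
random_phase = rng.uniform(0, 2 * np.pi, (n_trials, 1))
non_locked = np.sin(2 * np.pi * 40 * t + random_phase) + rng.normal(0, 1, (n_trials, n_samples))

def evoked_induced_plf(trials):
    """Mean evoked power, total (induced) power, and phase-locking factor."""
    analytic = hilbert(trials, axis=1)              # complex analytic signal
    evoked = np.abs(analytic.mean(axis=0)) ** 2     # power of the trial average
    total = (np.abs(analytic) ** 2).mean(axis=0)    # average single-trial power
    plf = np.abs(np.exp(1j * np.angle(analytic)).mean(axis=0))  # 0 = random, 1 = locked
    return evoked.mean(), total.mean(), plf.mean()

ev_pl, tot_pl, plf_pl = evoked_induced_plf(phase_locked)
ev_nl, tot_nl, plf_nl = evoked_induced_plf(non_locked)
```

With enough trials, the phase-locked response keeps a high PLF and survives time-domain averaging (high evoked power), while the non-phase-locked response retains its total power but nearly vanishes from the evoked power, mirroring the auditory steady-state versus motor beta contrast in Figure 2.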

Event-Related Potentials in ASD
Several ERP components have been extensively investigated in ASD and the following sections will detail the cognitive processes thought to be involved in each, as well as how these components have been used to assess cognition in ASD.

N100/M100
The N100 and its magnetic equivalent, the M100, is usually seen between 60 and 160 ms [10] and is thought to reflect processing of basic sensory (mainly auditory) features [11][12][13][14]. The N100/M100 is thought to be primarily generated by auditory cortex [15][16][17]. While the N100 is often broken into multiple components, the M100 (measured by MEG) is a more straightforward measure of activity from primary auditory cortex [11,18,19]. The N/M100 is often elicited by auditory stimuli and a common paradigm used to measure the N/M100 is the auditory oddball paradigm, in which participants hear infrequent ("rare") sounds among frequent ("standard") sounds (see Figure 1). However, the N/M100 can also be seen in response to stimuli in other sensory domains, such as visual or somatosensory stimuli [11].
Many studies have investigated differences in N/M100 amplitude in ASD, although results are somewhat inconsistent. While most studies have found no differences in N/M100 amplitude between individuals with ASD and typically developing controls [20][21][22][23][24][25][26][27][28], one study found individuals with ASD to demonstrate decreased amplitude compared to controls [29] and two others found increased amplitude in ASD [30,31]. Strandburg et al. [30] suggested the reason for their finding of increased N100 amplitude in ASD could be due to the complexity of the stimuli used. Their study employed a number of information processing tasks that were more difficult than the simple auditory oddball paradigms used in many of the studies finding either decreased or equivalent N100 amplitude in ASD groups. Similarly, Oades et al. [31], who also found increased N100 amplitude in individuals with ASD compared to controls, hypothesized that the tasks used may have been too difficult for the ASD population included in the study. While they used an auditory oddball task, participants were required to attend and respond to stimuli throughout the task, while many others employ a passive task. Oades et al. also found the ASD group in their study to show worse performance during the task, indicating that the increased N100 amplitude may reflect difficulties in attending to stimuli. Attention deficits are often seen in ASD [32][33][34] and the N/M100 is thought to be impacted by early attentional processes [7,11,35,36].
Interesting results have also been found in regard to N/M100 latency. While some studies identified no differences in N/M100 latency between individuals with ASD and typically developing controls [21,26,27,37,38], others have found either delayed [14,28][39][40][41][42] or earlier N/M100 latency in ASD [20,25,31]. Given the relative consistency of this finding across studies, delayed N/M100 latency has been suggested as a potential biomarker for ASD [14,42]. It is thought that this could reflect abnormal auditory processing in individuals with ASD, at the level of simple sensory encoding. Consistent with this hypothesis, many studies have found evidence of abnormalities in auditory processing in ASD [6,12,43,44]. As with abnormal N/M100 amplitude, this abnormal latency could reflect impairments in early attentional processes in ASD. Additionally, it has been suggested that N/M100 latency delays in ASD may be a marker of language dysfunction in this population [14,42].
The differences among studies are not surprising, given the myriad of differences among study designs. While some studies focused on low-functioning individuals with ASD, others focused on a high-functioning population. Additionally, disparities between studies can be attributed to differences in age groups studied and differences in experimental paradigms used. While the N/M100 is reliable in adults, it has not been found to be as consistent in children, due to developmental aspects of the response. The N/M100 response is sometimes not evident in young children [45] and has been found to show increased amplitude and decreased latency with increasing age [17,18,38,46], thought to be partly related to ongoing myelination of white matter [18,47]. This developmental decrease in latency may also be delayed in children with ASD [40].
Additionally, Schmidt et al. [48] found that children with ASD fail to show the anticipated asymmetry in M100 generator location. Typically developing controls show M100 generator location asymmetry such that the location in the brain is more anterior in the right hemisphere than the left hemisphere [49]. However, Schmidt et al. found children with ASD to demonstrate an absence of this asymmetry [48]. Schmidt et al. also found the degree of asymmetry to be associated with behavioral measures of language function, suggesting a relationship between the M100 and language abilities. This is noteworthy as language impairments are a core symptom of ASD [50]. As such, it has been suggested that M100 abnormalities may be related to language impairments rather than specifically to ASD [48]. However, Roberts et al. found that children with ASD demonstrated M100 latency delays regardless of whether they had language impairments or not [14], suggesting this response to be specifically related to ASD rather than simply to language impairment. Supporting this, Roberts et al. have also found that children with specific language impairment who do not have ASD do not show M100 latency delays [51].
MMN/MMF

Given that individuals with ASD are thought to have abnormalities in auditory processing [6,12,43,44], the MMN can be a useful tool in this population as it can be used to assess various components of auditory processing. The MMN can be used as a measure of the ability to discriminate between various simple or complex sounds [52], which could identify areas of processing potentially leading to the speech and language difficulties common in ASD. In addition to being elicited by changes in simple stimuli, such as tones, the MMN can also be seen in response to higher-order discrimination paradigms, such as detecting grammatical errors and discriminating between words and pseudo-words [53]. Since the MMN can be elicited in the absence of attention, this also makes it a particularly useful measure for ASD populations [52]. Attention deficits are often seen in ASD [32][33][34] and if attention is necessary for measurement of a particular component, this can present challenges. The MMN provides a way of avoiding those challenges, as it can be measured while participants are reading a book or watching a movie. It has also been suggested that the MMN be used for early identification of hearing disorders [52] as it can be seen in populations as young as newborn infants [58]. It stands to reason that this component could also be used to assess auditory processing deficits in children with ASD who have not yet acquired language capabilities.
As with other components, findings for the MMN and its MEG equivalent, the mismatch field (MMF), have been somewhat inconsistent in ASD. While some have found amplitude to be increased in individuals with ASD compared to typically developing controls [20,59], others have found amplitude reductions [6,26,59,60] or amplitude measures comparable to controls [24,34]. In part, these differences depend on the stimulus used; changes in pitch have shown both increased [20,59] and decreased amplitude [6,26], and changes in duration have demonstrated reduced MMF amplitude in ASD [59]. Indeed, Lepisto et al. [59] found increased MMN amplitude to pitch changes in children with ASD, but decreased MMN amplitude for changes in stimulus duration. MMN latency has similarly been found to be shorter [45], longer [60][61][62], or comparable between individuals with ASD and controls [24]. Again, this could be due to the stimuli used (e.g., tone stimuli vs. speech stimuli, discrimination between different pitches vs. different durations). Another potential reason for disparities between studies is differences in the ASD populations included. While some studies have focused on MMN/MMF in low-functioning individuals with ASD [20,26], others have focused on high-functioning individuals [24,34].
These MMN/MMF differences seen in ASD could indicate impairments in the ability to discriminate between stimuli [6,53] or impairments in sensory memory [53,55]. Additionally, since some studies have found attention attenuates group differences in MMN [6,24], abnormalities in MMN amplitude and/or latency could result from attentional deficits. Dunn et al. [6] addressed the impact of attention on these differences in children with ASD compared to control children. They found that when children were not attending to stimuli, children with ASD demonstrated reductions in MMN amplitude compared to control children. However, when children were instructed to actively attend to stimuli, MMN amplitude was similar between groups. This is consistent with results from Kemner et al. [24], finding similar MMN amplitude in children with ASD compared to controls during a task in which they were actively attending to stimuli. While attention is not necessary to elicit the MMN, attention does modulate the component [63,64]. As the MMN reflects the ability to automatically switch attention, this could be one aspect of attention that is impaired in ASD; it could be that individuals with ASD need to specifically attend to stimuli to discriminate unanticipated changes, while typically developing individuals are able to detect these changes automatically [6].
It has been suggested that these impairments in auditory discrimination, auditory sensory memory, and attention switching may contribute to the development of language impairments in ASD [6,43,47,59]. While Dunn et al. [6] did not find MMN amplitude to be related to language abilities in children with and without ASD, Roberts [47,62] has suggested that the MMF latency may serve as a biomarker of language impairment in ASD. Alternatively, it has been suggested that rather than being deficient in auditory discrimination, individuals with ASD may have enhanced auditory discrimination skills for particular stimulus features such as pitch [55,56,59,65]. This is consistent with studies finding increased MMN amplitude [20,59] or shorter latency [45] to pitch discrimination tasks. It has been suggested that enhanced auditory discrimination skills could also lead to language impairments, through difficulties focusing on relevant details and ignoring irrelevant details when listening to speech sounds [56]. This enhancement has also been hypothesized to potentially underlie auditory hypersensitivity seen in ASD [56,59]. Indeed, delays in MMF latency seen in children with ASD compared to typically developing controls were found to be greater in children with language impairments [62].

N170/M170
The N/M170 component is thought to be important in face-specific processing [66][67][68][69][70] and appears to be a sensitive measure of impairment in the early stages of face processing [67,71,72]. This component is seen around 170 ms post-stimulus [66,71,73]. While it is also elicited by non-face stimuli [72], the N170 shows the shortest latency and greatest amplitude in response to faces [71][72][73][74][75]. The N170 is still seen when faces are inverted, albeit with delayed latency and greater amplitude [71,73,74], suggesting that the N170 is involved in simply recognizing something as a face rather than attempting to recognize a particular face [66,71,73]. Supporting this, viewing familiar or repeated faces does not result in shorter N170 latency [67,71,76,77]. Of note, the N170 responds to viewing human eyes with the greatest amplitude and shortest latency, leading some to hypothesize that this component is specifically related to eye detection [66,76]. However, others suggest that the N170 is not specific to faces at all, but reflects expert processing and is seen in response to human faces given that most people are experts in the recognition of faces [67][78][79][80].
Studies have found abnormal amplitude and latency of the N170 component in individuals with ASD, but results have been inconsistent [67,81]. McPartland et al. [72] found adolescents and adults with ASD to show delayed N170 latency in response to face stimuli compared to a control group, but this group difference was not seen in response to objects. While other studies have also seen this N170 latency delay in adults with ASD compared to controls [82][83][84], others have failed to find a group difference [68,69]. In some studies, this N170 latency delay in ASD was found to generalize across face and non-face stimuli, suggesting slower general processing speed rather than face-specific processing deficits [80]. In addition, studies have found control adults to demonstrate the typical latency delay to inverted faces compared to upright faces, but find this effect to be attenuated in ASD [72,81]. This lack of an inversion effect has been suggested to reflect less sensitivity to the configural properties of faces [72] and an atypical approach to face processing in ASD [85]. That individuals with ASD do not show the typical additional processing in response to inverted faces supports the theory that individuals with ASD may process the component parts of faces individually [67,85]. Rather than necessarily reflecting a deficit in global processing, it has been suggested that this reflects individuals with ASD demonstrating superior processing of facial features [67], which has been previously demonstrated [86].
Reduced N170 amplitude in adults with ASD compared to controls has also been seen in some studies [69,83,87], but not others [72,84,88]. It has been suggested that group differences in N170 amplitude are only seen when task demands are high [81], perhaps reflecting an impact of attention on the component, as attention is often impaired in ASD [32][33][34]. Churches et al. [87] found N170 amplitude to increase in control adults when asked to actively attend to faces, but failed to see a similar attention-related increase in amplitude in adults with ASD. They hypothesize that face processing is enhanced by attention in typically developing individuals, but that individuals with ASD may not actively attend to faces in the same way and thus do not benefit from attention-enhanced face processing. However, this theory requires further investigation as other studies have found that directing attention to faces appears to result in N170 latency being "normalized" in ASD, with no group differences seen between ASD and controls when attention was directed to the eye region of the face stimuli [68]. Churches et al. [81] also found that the difference in N170 amplitude between individuals with ASD and controls was larger when stimuli were non-face-like objects compared to the group difference seen for faces or face-like objects. This could suggest that the face detection system in those with ASD is perhaps more specific to faces and less capable of generalization to non-face objects. Churches et al. [81] suggest that this reduced generalization may contribute to social impairments as individuals with ASD may be less able to generalize face processing across different circumstances (e.g., different lighting, distances, or orientations).
It has long been thought that individuals with ASD have deficits in face processing [67,75], but it is as yet unclear whether this deficit occurs during early processing or at a later cognitive stage of face processing [80]. Face processing impairments in ASD include abnormal eye contact [75,89], reduced attention to faces [75], deficits in face recognition [75,76,81], and impairments in social orienting [76]. These impairments are thought to be early indicators of ASD, since many can be seen by age 2 to 3 [75,76,85]. It has also been suggested that reduced social motivation in individuals with ASD may lead to reductions in eye contact and social orienting [67,76]. Conversely, reduced eye contact early in development is thought to impact development of social communication skills in ASD [75,90]. Eye contact is a crucial component of social communication and a lack of eye contact is included in standard diagnostic criteria for autism spectrum disorders [50]. Atypical patterns of eye contact in ASD can be seen as early as the first year of life [90][91][92][93]. However, it has recently been suggested that face processing deficits in ASD may be overestimated in the literature [67,80] and that rather than face-specific processing being impaired in ASD, difficulties in processing faces may be related to broader deficits in visual processing [70,94].
Differences in results across studies could stem from a multitude of reasons, such as the range of function of participants in differing studies (e.g., varying IQ and verbal abilities), the type of control groups used in different studies (e.g., matched by age, matched by verbal ability), and the variability between tasks used in each study (e.g., the type of objects compared to faces, task difficulty) [68,70]. Age can also influence measures of the N170, since this component is not fully developed until adolescence or adulthood [70,83,95].

P300/M300
The P/M300 is a later component observed around 300 ms post-stimulus onset and is thought to reflect higher order cognitive processing compared to the earlier components. The P300 is often broken into two subcomponents reflecting different cognitive processes, the P3a and P3b subcomponents [96,97]. The P3a is thought to indicate orienting to environmental changes and response to novelty that could underlie attentional switching [13,98], and is elicited by rare and task-irrelevant stimuli [13]. The P3b is seen when a participant is responding to a task-relevant target stimulus [12,13] and as such, is thought to reflect task-related activity and memory updating [98][99][100]. When the component is referred to as simply the P3 or P300, this is usually the P3b element of the component [7] (as such, this chapter will refer to the P3b component as the P300). A task commonly used to elicit these components is the "oddball" task, in which infrequent ("rare") stimuli are presented among frequent ("standard") stimuli (see Figure 1). The P300 can be elicited by visual, auditory, or somatosensory stimuli [12].
A number of studies have found individuals with ASD to show reduced P300 amplitude compared to controls in response to a variety of auditory stimuli such as clicks, tones, and phonemes [12,21,27,29,31,37][101][102][103]. This finding is less consistent for visual stimuli, with some studies finding diminished P300 amplitude [103,104] and others not finding this amplitude reduction [29,39,102,105]. Reductions in amplitude have been suggested to represent individuals with ASD having difficulties attaching significance to the target stimulus [31]. While most studies have found either reduced or equivalent P300 in individuals with ASD, Strandburg et al. [30] found adults with ASD to show increased P300 amplitude to both a continuous performance task (requiring a button press when the same visually-presented number appears on two consecutive trials) and an idiom recognition task (requiring participants to distinguish meaningful idiomatic phrases from nonsensical phrases). However, interpretation of this finding is difficult because while the ASD group performed as well as the control group on the continuous performance task, they performed worse than controls on the idiom recognition task. Given this difference in behavioral performance across the two tasks, and that the two tasks target very different skills, it may be that the amplitude increase in the two different tasks reflects separate cognitive processes. However, given that these tasks are more difficult than most commonly used oddball tasks, the group effect seen here may be a result of the challenging nature of the tasks. As with other components, differences between studies could be due to multiple factors, such as intellectual abilities of participants, matching of control participants to ASD participants, age range included, sensory modality of the stimuli, and the task used to elicit the response.
Most studies have not found group differences in P300 latency [21,22,37][101][102][103], although latency delays in individuals with ASD have been seen in response to novel distractor stimuli in visual target-detection tasks [33,39].
Interestingly, Salmond et al. [106] found P300 amplitude to be reduced in low-functioning individuals with ASD, but not in high-functioning individuals with ASD. This suggests a relationship between P300 abnormalities and cognitive function. As mentioned, this could also explain discrepancies among other studies, as levels of cognitive abilities between studies are variable. However, reductions in P300 amplitude in ASD have also been seen despite normal behavioral task performance [21,29,37], suggesting that individuals with ASD may use compensatory cognitive processing strategies to achieve normal behavioral performance. Others have suggested that abnormalities in the P300 component seen in ASD are due to attentional difficulties, commonly seen in ASD [32][33][34]. Whitehouse & Bishop [107] found children with ASD to demonstrate reduced P300 amplitude compared to controls when not attending to stimuli, but found that this reduction was attenuated when children were asked to attend to stimuli. This suggests that attention may help to normalize the P300 response in children and that the underlying deficit may lie in automatic processing. It has also been suggested that P300 abnormalities in ASD may be modality-specific, as abnormalities are seen most consistently with auditory stimuli [30]. Deficits in auditory processing in ASD have been suggested to contribute to impairments such as language dysfunction [69,98]. Since these P300 abnormalities have also been seen in response to visual stimuli, albeit inconsistently, it has also been hypothesized that visual processing deficits may lead to other social impairments in ASD, such as reading emotions and face processing [98].

N400/M400
The N/M400 component peak is evident around 400 ms post-stimulus and has been shown to reflect semantic aspects of word and sentence processing [108-111]. As with the P/M300, this component reflects higher order cognitive processing compared to earlier components (e.g., N/M100, MMN/MMF). N/M400 amplitude increases when a stimulus deviates from context, as when a word is not predicted by the surrounding context [13,111-113]. The larger N/M400 amplitude in response to unexpected stimuli compared to expected stimuli is referred to as an N/M400 effect, or a priming effect. Priming refers to a change in the response to a stimulus as a result of a related preceding stimulus, and is thought to reflect automatic processes involved in memory, category recognition, and language [114,115]. In semantic priming, responses to words preceded by semantically related words will be enhanced compared to those preceded by unrelated words or nonsense words [112,114]. Similarly, if a word is incongruent with the preceding sentence (e.g., the boy climbed to the top of the "banana"), it becomes more difficult to integrate the word into the context of the sentence, leading to an increase in N/M400 amplitude [110,116,117]. As such, the N/M400 can be used to measure semantic integration and degree of expectancy, which can serve as an index of language processing [28,116].
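To make the N400 "effect" concrete: it is usually quantified as the difference wave between incongruent and congruent trial averages, often summarized as the mean amplitude in a window around 300-500 ms. The sketch below illustrates this on synthetic single-trial data; the sampling rate, window, and amplitudes are illustrative assumptions, not values from any study cited here.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                             # sampling rate (Hz), illustrative
t = np.arange(-0.1, 0.8, 1 / fs)     # epoch from -100 ms to 800 ms

def simulate_trials(n_trials, n400_amp):
    """Synthetic EEG epochs with a negative deflection peaking near 400 ms."""
    n400 = n400_amp * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return n400 + rng.normal(0, 2.0, size=(n_trials, t.size))

congruent = simulate_trials(100, n400_amp=-1.0)    # small N400
incongruent = simulate_trials(100, n400_amp=-4.0)  # large N400

# Average across trials, then take the mean amplitude in the N400 window
win = (t >= 0.3) & (t <= 0.5)
erp_con = congruent.mean(axis=0)
erp_inc = incongruent.mean(axis=0)
n400_effect = erp_inc[win].mean() - erp_con[win].mean()  # negative = priming effect
print(f"N400 effect: {n400_effect:.2f} uV")
```

An attenuated or absent N400 effect, as reported in several of the ASD studies discussed below, corresponds to this difference being near zero rather than reliably negative.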
Given that it is a measure of semantic processing, the N/M400 response is of interest in ASD because language impairment is a core symptom of ASD [50]. Many studies have found attenuated semantic priming effects in individuals with ASD compared to controls [28,113,118-120].
Dunn et al. [28,118] showed a classic N400 effect in typically developing control children, in which the N400 was increased in response to words not matching a given category (e.g., animals) compared to words matching the stated category. Children with ASD did not demonstrate this N400 effect; that is, they did not show a difference in N400 amplitude between words that were congruent and incongruent with the category [28,118]. Fishman et al. [113] found similar results using a more typical N400 paradigm, in which responses to sentences ending with congruent/expected words (e.g., kids learn to read and write in "school") were compared with those to sentences ending in incongruent/unexpected words (e.g., kids learn to read and write in "finger"). In this study, individuals with ASD also demonstrated a lack of an N400 effect; that is, no difference in N400 responses was seen between congruent and incongruent words, although this effect was seen in typically developing controls [113]. Pijnacker et al. [117] also found a lack of the N400 effect in adults with ASD in a sentence context task. McCleery et al. [119] similarly found the N400 effect to be missing in children with ASD during a verbal task (auditory words that matched or did not match a presented picture), but found the N400 effect to be no different from that seen in typically developing children during a non-linguistic paradigm in which environmental sounds matched or did not match a picture (e.g., a picture of a car paired with either the sound of a car engine starting, a match, or a ball bouncing, a mismatch). This suggests that the deficit may be specific to verbal rather than non-verbal stimuli.
This absence of a priming effect in ASD is suggested to reflect deficits in semantic processing [28,113,118,119]. These deficits could be at multiple points in processing, including attention to stimuli, discriminating between stimuli, and integration of context [13,110,116,117]. It has been suggested that the lack of the N400 effect may indicate that individuals with ASD may fail to use context, or may use it inefficiently [28,117]. This is supported by other studies suggesting that individuals with ASD have difficulties using contextual information in language tasks [121,122]. Alternately, it could indicate a lack of semantic representation in individuals with ASD or reduced activation and/or connectivity of brain circuits responsible for semantic processing [28,113,119].
It has also been suggested that the reduced or absent N/M400 seen in ASD may be reflecting general verbal or cognitive abilities. Indeed, some studies have also seen impaired behavioral performance in individuals with ASD in addition to a lack of the N400 effect [28,113]. However, in these studies, ASD groups were not matched to controls on cognitive measures and/or showed reduced cognitive abilities compared to controls. Pijnacker et al. [117] matched control participants to ASD participants on intelligence scores and found individuals with ASD to perform comparably to controls behaviorally, but still found them to lack the N400 effect. Similarly, other studies have also found intact behavioral performance in individuals with ASD despite a reduced or missing N400 effect [118,119]. This suggests that the lack of an N400 effect is not simply due to a lack of semantic comprehension or task understanding. Rather, the abnormal N400 effect may reflect a lack of automatic semantic processing in individuals with ASD [117,119]. As such, individuals with ASD seem to be able to effectively use context to arrive at a normal behavioral response, but may require additional effort or more purposeful processing compared to typically developing controls [117]. Findings from McCleery et al. [119] suggest that this may be specific to certain types of processing. That the reduced N400 effect was not seen in response to nonverbal relationships (i.e., relationships between environmental sounds and pictures) indicates that this may reflect a less effortful form of processing for individuals with ASD. Therefore, while individuals with ASD may have intact automatic processing for some types of stimuli (e.g., nonverbal), automatic processing of semantic information may be specifically impaired. 
The need for more effortful processing of semantic information that may be more automatic for typically developing children could contribute to the development of additional language and social impairments [119].
While many studies have found the N400 effect to be reduced or absent in ASD [28,113,118,119], other studies have demonstrated an intact effect. Braeutigam et al. [123] found an M400 effect in adults with ASD in response to incongruent compared to congruent sentence endings, using MEG. Although both groups (adults with ASD and a group of control adults) showed an increased M400 response to incongruent compared to congruent sentence endings, there were differences in lateralization. While the control group showed a bilaterally increased response to incongruent endings, the ASD group showed a stronger right hemisphere response. Braeutigam et al. suggest that this could be due to those in the ASD group having weaker expectancy regarding the sentence endings compared to the control group, as the left hemisphere is thought to be biased towards an expected meaning [123].

Resting-state electrophysiology in ASD
One of the most prominent features of resting EEG/MEG in autism is the presence of epileptiform activity; up to 30 percent of individuals on the spectrum have comorbid epilepsy. These features are beyond the scope of this review, but are considered in detail in several excellent reviews on the topic [1-4]. Here, we concern ourselves with non-epileptiform activity in the spontaneous EEG and MEG record.
Several studies have examined oscillatory activity in the EEG or MEG in ASD. In an early study, Cantor et al. [124] reported increased delta-band activity (0-4 Hz) in children with ASD compared to mental age-matched control subjects. Frontal abnormalities in slow wave activity have also been suggested by findings of increased frontal theta [125] as well as additional studies replicating increased delta in ASD [126,127]. There have, however, been opposing reports of reduced delta activity in children with ASD [e.g., see 128].
The alpha rhythm, most commonly associated with the posterior-dominant rhythm that is reactive to opening and closing the eyes, but also observed across multiple regions of cortex in response to other task-related activities (see below), has also been examined in ASD. In a high-density EEG study of resting power and coherence, Murias et al. [127] reported reduced relative alpha power in frontal and occipitoparietal regions in adults with ASD. In contrast, increased alpha power was reported in another recent study, also of an adult ASD sample [129]. Notably, however, in the latter study alpha power was only increased relative to controls in the eyes-open condition and not the eyes-closed condition, whereas Murias et al. observed the reduction in the eyes-closed state and did not record an eyes-open condition. A recent MEG study of eyes-closed resting-state activity in a large sample of children also found elevated alpha in children with ASD, with alpha power in temporal and parietal regions positively correlated with scores on the SRS [130]. Although alpha was previously considered primarily a cortical idling rhythm [e.g., see 131], more recent evidence suggests that it is associated with top-down, active inhibition of sensory processing in the various cortical regions in which it is expressed [132].
Spontaneous activity in higher frequency ranges (beta, gamma and higher) has been studied less frequently in ASD. Excessive spontaneous beta-band activity has been reported along midline EEG electrode sites [128] and in temporal, parietal and occipital regions in MEG source analyses [130]. Orekhova et al. [133] reported higher levels of EEG gamma-band activity, defined as 24-44 Hz in their study, in two independent samples of boys with ASD (N = 20 each), and found that gamma-band activity was correlated with an IQ-derived measure of developmental delay in both samples. No differences were reported in the beta band. Increased spontaneous gamma has also been reported in independent EEG [134] and MEG studies [130]. Higher frequency oscillatory activity is associated with recurrent inhibitory interneuronal processing in the cerebral cortex [e.g., see 135], which, when considered in conjunction with the alpha-band findings, suggests significant and widespread inhibitory dysfunction in the ASD brain.
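The band-power comparisons described above typically reduce to estimating a power spectral density from resting data and integrating it over conventional band limits. The following sketch illustrates this with `scipy.signal.welch` on synthetic data; the sampling rate and band edges are common conventions, chosen here for illustration rather than taken from any particular study.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 500
t = np.arange(0, 60, 1 / fs)                   # 60 s of "resting" EEG
# synthetic signal: a 10 Hz alpha rhythm embedded in broadband noise
signal = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)

# Welch PSD with 2-s segments -> 0.5 Hz frequency resolution
f, psd = welch(signal, fs=fs, nperseg=2 * fs)

def band_power(f, psd, lo, hi):
    """Approximate power in [lo, hi) Hz by summing PSD bins times bin width."""
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * (f[1] - f[0])

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (30, 80)}
total = band_power(f, psd, 0.5, 80)
rel = {name: band_power(f, psd, lo, hi) / total for name, (lo, hi) in bands.items()}
print(rel)
```

Relative (rather than absolute) band power, as used by Murias et al. [127], normalizes each band by broadband power, which reduces the influence of inter-individual differences in overall signal amplitude.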
Beyond simple analyses of spectral power, resting-state studies have also examined network connectivity through measures such as coherence. The previously discussed study by Murias et al. [127] also found reduced alpha-band coherence between frontal electrode sites and between frontal and other regions of the brain. Reduced coherence is a common finding in ASD studies [128,136,137]. A recent large retrospective database analysis of 463 cases of ASD and 571 controls found generally reduced coherence across a wide EEG bandwidth (although coherence was increased in ASD at some frequencies), and based on spatial patterns achieved an average classification accuracy across age groups of 88.5% for controls and 86% for ASD subjects.
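As an illustration of the coherence measure itself: magnitude-squared coherence quantifies, per frequency, how consistently two channels maintain an amplitude and phase relationship. The sketch below uses `scipy.signal.coherence` on two synthetic "electrodes" that share a common 10 Hz source; channel labels and parameters are purely illustrative.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 250
t = np.arange(0, 120, 1 / fs)
alpha_source = np.sin(2 * np.pi * 10 * t)      # shared 10 Hz generator

frontal = alpha_source + rng.normal(0, 1.0, t.size)    # source + channel noise
parietal = alpha_source + rng.normal(0, 1.0, t.size)
unrelated = rng.normal(0, 1.0, t.size)                 # no shared signal

# Magnitude-squared coherence, 2-s segments
f, coh_fp = coherence(frontal, parietal, fs=fs, nperseg=2 * fs)
f, coh_fu = coherence(frontal, unrelated, fs=fs, nperseg=2 * fs)

alpha = (f >= 8) & (f <= 12)
print(f"alpha coherence (shared source): {coh_fp[alpha].mean():.3f}")
print(f"alpha coherence (unrelated):     {coh_fu[alpha].mean():.3f}")
```

Channel pairs driven by a common generator show coherence near 1 at the shared frequency, while independent channels hover near the bias floor set by the number of averaging segments; reduced alpha coherence in ASD, as in Murias et al. [127], implies weaker coupling of this kind between regions.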

ERSP in ASD
As suggested in section 4, spontaneous oscillatory rhythms have established associations with mental states and are reactive to sensory and motor perturbance. In the following section, this reactivity, the ERSP, is considered in ASD.

Mu and beta rhythms and mirror neuron dysfunction
One of the most well studied ERSP phenomena in ASD is the mu rhythm ERD, more commonly referred to as mu suppression. The mu rhythm is actually an alpha-band (8-12 Hz, sometimes reported as 8-13 Hz) oscillation, but unlike the more widely known posterior-dominant alpha rhythm, peaks over central sensorimotor regions of the cortex. Intracranial recordings and source localization techniques suggest sources for the mu rhythm in primary sensory and motor and supplementary motor cortices [138,139]. The mu rhythm is strongly suppressed preceding movements and maximal suppression occurs contralateral to the movement side [140,141].
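Mu suppression is commonly indexed as the (log) ratio of 8-13 Hz power over central electrodes during a condition of interest (movement or action observation) relative to a baseline period, with negative log ratios indicating suppression (ERD). The sketch below illustrates this computation on synthetic data; the electrode description, sampling rate, and amplitudes are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
fs = 250
t = np.arange(0, 30, 1 / fs)

def central_eeg(mu_amp):
    """Synthetic C3/C4-like signal: a 10 Hz mu rhythm plus broadband noise."""
    return mu_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)

def mu_power(x):
    """Power in the 8-13 Hz mu band from a Welch PSD."""
    f, psd = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= 8) & (f <= 13)
    return psd[mask].sum() * (f[1] - f[0])

baseline = central_eeg(mu_amp=2.0)      # strong mu at rest
observation = central_eeg(mu_amp=0.8)   # attenuated mu during observation

suppression = np.log(mu_power(observation) / mu_power(baseline))
print(f"mu suppression (log ratio): {suppression:.2f}")  # negative = ERD
```

In this framing, the "broken mirror" prediction is that the ASD group's log ratio during action observation sits closer to zero (less suppression) than that of controls, while the ratio for self-performed movement remains comparably negative.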
Interest in mu suppression in ASD is focused on the observation that the mu ERD response also occurs when observing the actions of others, not just during the actual motor movements of the subject [142][143][144][145]. This has led to a number of studies of mu suppression in the context of mirror neuron theory and the broken mirror hypothesis of ASD [e.g., 146,147,148].
EEG findings on mu suppression during action observation in ASD have been mixed. Oberman et al. [146] were the first to report that mu suppression was normal to self-performed hand movements, but not to observed movements, in a sample of 10 high-functioning individuals with ASD. A subsequent study by other investigators also reported reduced mu suppression during action observation in 17 adults with ASD [149]. Oberman and colleagues [147] have also presented evidence that mu suppression effects in ASD may depend partly on the familiarity of the observed hand performing the action, as ASD participants' mu suppression only differed from healthy controls when the action was performed by an unfamiliar hand. Two recent EEG studies, however, have reported no group differences between healthy subjects and ASD subjects in the action observation condition. Raymaekers et al. [148] reported equivalent mu suppression during action observation in 20 high-functioning children with ASD. Another study found that while ASD participants' imitative actions were behaviorally impaired compared to healthy controls, mu suppression to action observation was not impaired in the same group of 20 individuals [150].
A second sensorimotor rhythm, the beta rhythm peaking around 20 Hz, has also received attention relating to the broken mirror theory of ASD. The sensorimotor beta rhythm is potentially linked to the mu rhythm as a possible harmonic, but this point is debated because of separate functional localization and timing related to motor activity [e.g., see 151]. This central beta rhythm is suppressed prior to movement onset (ERD), and there is a post-movement power increase (ERS) termed the post-movement beta rebound (PMBR; see Figure 2). Like the mu rhythm, the PMBR has been shown to be reactive to action observation in addition to actual movement [152]. An early MEG study reported a negative finding with respect to beta activity during action observation, but the lack of an effect could have been due to the very small sample (N = 5) of ASD participants, or because the analysis focused on the phase-locked evoked beta signal rather than the induced beta activity that dominates the PMBR [153]. A more recent MEG study, also with a relatively small sample (N = 7 ASD subjects), found that the PMBR was reduced in the ASD group relative to healthy controls during action observation [154]. Taken together, the EEG and MEG findings on mu and beta rhythm reactivity to action observation suggest that motor regions of the neocortex in ASD may not respond as robustly to passive observation of another individual's actions, consistent with the prediction of the broken mirror theory of ASD.

Gamma-band
As with the mu and beta rhythms, the gamma band has attracted keen interest in ASD. Gamma-band activity is typically defined as the frequency range between 30 and 80 Hz, although some distinguish between low (30-80 Hz) and high (> 80 Hz) gamma. This frequency band has generated attention in ASD research primarily because of its proposed role in cognitive phenomena such as perceptual binding [155-157]. The mechanisms of gamma frequency generation in the cortex and hippocampus are also relatively well characterized [135,158-160], representing the interaction of pyramidal glutamatergic inputs to fast-spiking interneurons, which in turn recurrently inhibit the pyramidal cells via GABAergic inputs. It is noteworthy that a reduced number of one of the major classes of these interneurons has been a common finding across animal models of ASD [161].
Grice et al. [82] first examined EEG ERSP changes in gamma-band activity in a face discrimination task. Significant increases in gamma-band power were observed for upright compared to inverted faces in healthy subjects, but no modulation of gamma power between upright and inverted faces was seen in the ASD group. A recent MEG paper reported reduced gamma-band phase locking to Mooney face stimuli in ASD, but group differences were only seen when faces were upright, rather than scrambled and inverted [162]. Brown et al. [163] reported increased induced EEG gamma-band power in response to perceptual closure of Kanizsa illusory shapes in adolescents with ASD, in contrast to healthy control children, who exhibited reduced gamma-band power to the same stimuli. Stroganova et al. [164] also used illusory-contour Kanizsa stimuli to demonstrate an absence of early, phase-locked gamma-band responses in occipital electrodes in boys with ASD. A study of visual responses to Gabor patches reported that children with ASD had attenuated increases in stimulus-induced gamma-band power with increasing spatial frequency [165], which together with behavioral data was interpreted as evidence for enhanced low-level visual perceptual ability in ASD.
Auditory stimuli produce two types of gamma-band responses. An early, obligatory transient gamma-band response (tGBR) is seen in typically developing individuals to all types of sound stimuli within the first 30-80 ms post-stimulus [166]. When stimuli are modulated in amplitude, either as part of a train of clicks or by amplitude modulation, a later auditory steady-state response (ASSR, see Figure 2) beginning around 100 ms is produced at or near the frequency of modulation, peaking at modulation rates around 40 Hz [167,168]. Both types of responses are highly phase-locked in typically developing individuals. Wilson et al. [169] reported reduced ASSR power in children and adolescents with ASD. Reduced tGBR power, and reduced phase-locking, has also been reported in both children [42] and adults [170] with ASD using MEG. Both tGBR and ASSR responses are also reported to be impaired in unaffected first-degree relatives of persons with ASD, suggesting that the finding may be a heritable biomarker, or endophenotype [170,171]. There is an association between visual gamma-band activity and the concentration of GABA in the cerebral cortex [172-174], which as previously discussed may implicate inhibitory dysfunction in the disorder.
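The phase-locking deficits described above are typically quantified with inter-trial phase coherence (ITC): the magnitude of the mean unit phase vector across trials at a given frequency, here the 40 Hz modulation rate of the ASSR. The sketch below illustrates the computation on synthetic trials; trial counts, epoch length, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 500
t = np.arange(0, 0.5, 1 / fs)          # 500 ms analysis window
n_trials = 60

def assr_trials(phase_locked):
    """Synthetic 40 Hz responses; phase is fixed or random across trials."""
    trials = np.empty((n_trials, t.size))
    for i in range(n_trials):
        phase = 0.0 if phase_locked else rng.uniform(0, 2 * np.pi)
        trials[i] = np.sin(2 * np.pi * 40 * t + phase) + rng.normal(0, 1.0, t.size)
    return trials

def itc_at(trials, freq):
    """ITC at one frequency: |mean across trials of the unit phase vectors|."""
    spectra = np.fft.rfft(trials, axis=1)
    f = np.fft.rfftfreq(t.size, 1 / fs)
    bin_ = np.argmin(np.abs(f - freq))
    phases = spectra[:, bin_] / np.abs(spectra[:, bin_])   # unit vectors
    return np.abs(phases.mean())

itc_locked = itc_at(assr_trials(phase_locked=True), 40)
itc_random = itc_at(assr_trials(phase_locked=False), 40)
print(f"ITC phase-locked: {itc_locked:.2f}, ITC random phase: {itc_random:.2f}")
```

ITC ranges from near 0 (random phase across trials) to 1 (perfect phase locking); the reduced ASSR and tGBR phase-locking reported in ASD corresponds to values shifted toward the low end of this range relative to controls.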

Future directions
Based on the rich findings emerging in the electrophysiology of autism, there has been an exciting push to expand the field beyond electromagnetic phenomenology and into clinical translation. Two related areas of significant interest have emerged. First, researchers have made significant progress towards successful EEG and MEG recordings in young children and infants, so that studies of risk and diagnostic outcome can be conducted. For example, EEG studies examining infants at familial high risk (i.e., those with a sibling with autism) have shown some promise using measures of EEG complexity and machine learning to separate high-risk infants from typically developing infants [175,176]. Of course, a necessary and more difficult next step will be predicting the diagnostic outcome of these infants, since the majority will end up unaffected, or at least below the threshold for clinical diagnosis. A second area of positive development concerns the convergence of many findings on abnormalities in inhibitory function within relatively well-characterized circuitry [e.g., see 42], which suggests the possibility of using electrophysiology in clinical trials, whether for prediction of treatment response or even as a primary outcome measure tied directly to pharmaceutical intervention.