Open access peer-reviewed chapter

Speech-Evoked Brainstem Response

Written By

Milaine Dominici Sanfins, Piotr H. Skarzynski and Maria Francisca Colella-Santos

Submitted: 31 May 2016 Reviewed: 06 October 2016 Published: 29 March 2017

DOI: 10.5772/66206

From the Edited Volume

Advances in Clinical Audiology

Edited by Stavros Hatzopoulos


Abstract

The auditory brainstem response (ABR) is a clinical tool for assessing the neural functionality of the auditory brainstem. The use of verbal stimuli in ABR protocols has provided important information about how speech stimuli are processed by brainstem structures. The perception of speech sounds seems to begin in the brainstem, which plays an important role in the reading process and in phonological acquisition. Speech ABR assessment allows the identification of fine-grained auditory processing deficits that do not appear in click-evoked ABR responses. The syllable /da/ is commonly used in speech ABR assessment because it is considered a universal syllable, allowing it to be applied in different countries with good clinical assertiveness. Speech ABR is an objective, fast procedure that can be applied to very young subjects. It can be used in different languages and can provide differential diagnoses of diseases with similar symptoms, serving as an effective biomarker of the auditory processing disorders present in various conditions, such as dyslexia, specific language impairment, hearing loss, auditory processing disorders, otitis media, and scholastic difficulties. Speech ABR protocols can assist in the detection, treatment, and monitoring of various types of hearing impairments.

Keywords

  • speech
  • speech perception
  • electrophysiology
  • frequency-following response
  • speech-ABR

1. Introduction

Auditory processing of information can be analyzed through the assessment of auditory evoked potentials (AEPs). Among the different types of AEPs is the auditory brainstem response (ABR), a clinical tool for assessing the neural functionality of the auditory brainstem [1]. Until recently, assessment using clinical ABR protocols was carried out only with nonverbal stimuli, such as clicks, tone-bursts, and chirps. ABR responses (i) permit the analysis of the integrity of the auditory pathways and (ii) can establish electrophysiological thresholds in order to identify basic neural abnormalities and to evaluate patients who do not provide reliable responses in the standard behavioral audiological assessment [2].

Although the click-evoked ABR has been widely used clinically, it is still necessary to unravel how verbal sounds are coded in the brainstem. Recent technological advances have enabled the inclusion of verbal stimuli in commercial ABR equipment. The use of verbal stimuli in ABR protocols has provided important information about how speech stimuli are processed by brainstem structures, which actively participate in the analysis of complex verbal stimuli [3].

The verbal stimulus most widely used in speech ABR is a syllable composed of a consonant and a vowel (CV) [4]. Consonant perception is based on distinguishing vocal production times and consonant sounds, a distinction that guarantees intelligibility in human communication and the proper development of language.

The perception of speech sounds seems to begin in the brainstem, which plays an important role in the reading process and in phonological acquisition [5-7]. An effective and objective way to investigate this process is the speech ABR assessment, which allows the identification of fine-grained auditory processing deficits associated with real-world communication skills, deficits that do not appear in click responses; it can also be used for early identification of auditory processing impairments in very young children [8]. Above all, speech ABR can be used as an objective measure of hearing function. One of the great advantages of this method is that it is not influenced by environmental issues, which can disrupt behavioral assessments [2]. Even the best behavioral tests can be confounded by factors such as attention, motivation, alertness/fatigue, and by co-occurring disorders, such as language impairments, learning impairments, or attention deficits [9].

Understanding the neural processing of speech sounds at the brainstem level provides knowledge regarding the central auditory processes involved in normal-hearing subjects and also in clinical populations [10]. Moreover, altered speech ABR responses may be associated with impaired speech perception in noise. These changes can have a negative impact on communication and serious consequences for academic success [8]. According to Sinha and Basavaraj [11], the major applications of speech ABR may be in diagnosing and categorizing children with learning disability into different subgroups, assessing the effects of aging on central auditory processing of speech, and assessing the effects of central auditory deficits in hearing aid and cochlear implant users.


2. Assessment

Speech ABR has an important feature: the specific aspects of the acoustic signal are preserved and reflected in the neural coding (Figure 1). Furthermore, this assessment makes it possible to understand the neural basis of the auditory system, whether it is normal or deficient [4].

Figure 1.

Representation of 40 ms of syllable /da/ (gray) stimulus and responses (black) [4].

The verbal stimulus used in speech ABR assessment is normally one of the syllables /ba/, /da/, or /ga/. The verbal assessment provides information about how the speech syllable is encoded by the auditory system. The trace of the speech ABR response can be divided into two parts: the onset and the frequency following response (FFR). The first part represents the consonant and the second part the vowel [10].

The best-known model is elicited with the synthesized syllable /da/ generated by computer software. The use of synthesized speech allows the acoustic parameters to be controlled and kept constant, ensuring the quality of the stimulus presented to the listener and/or patient [12]. This stimulus modality was developed by the group of Dr. Nina Kraus at Northwestern University. The stimulus consists of the consonant /d/ (the transient portion, or onset) and the short vowel /a/ (the sustained portion, or frequency following response). When elicited by the stimulus /da/, the subcortical response emerges as a waveform of seven peaks (V, A, C, D, E, F, and O), wherein the single wave with a positive peak is the wave V complex. Waves V and A reflect the onset response, wave C the transition area, waves D, E, and F the periodic area (the frequency following response), and wave O the offset response [4, 13, 14]. A typical response is shown in Figure 2 [13].

Figure 2.

Electrophysiological response representation of the synthesized syllable /da/. Investigator's personal data based on the assessment of a normal-hearing subject, performed with the BioMARK™ software [13].

It is important to note that the onset component is elicited at around 10 ms and is considered the transient portion of the sound stimulus, reflecting the decoding of the fast temporal changes inherent in the consonant [15]. The FFR component, called the sustained portion, is elicited at around 18-50 ms. This component reflects the encoding of the periodic and harmonic structure of the vowel sound [11] and is also related to the encoding of the fundamental frequency and its modulations (first and second formants) [4, 15].

Another feature of speech ABR responses is their low intra- and inter-subject variability, with stable morphological characteristics [16, 17].

Speech sounds are among the sounds most frequently present in the daily life of every human being. Long-term auditory experience can improve the performance of the whole auditory system. Therefore, a subject who processes speech sounds well has better electrophysiological responses to this type of stimulus, showing that auditory experience might modify the basic sensory coding of the whole auditory pathway [18-21]. On the other hand, a subject with auditory deprivation may show significant electrophysiological changes in the auditory system, as can be seen in children with a history of otitis media.


3. Parameters

Several studies have investigated how the coding of verbal sounds occurs and how to incorporate speech ABR into the clinical routine.

The syllable /da/ is commonly used for speech ABR assessment because it is considered a universal syllable, allowing it to be applied in different countries with good clinical assertiveness [4]. However, previous studies show that responses differ among subjects from different cultures [22], since each language has its own characteristics and peculiarities that may or may not contribute to suitable processing of speech sounds.

The majority of the studies were performed with native English speakers, which is explained by the fact that Dr. Kraus, the leading researcher and creator of the speech stimulus, did her work at Northwestern University, USA [1, 4]. However, additional studies have been initiated in numerous languages such as Arabic, Brazilian Portuguese, French, Greek, Hebrew, Indian languages, Japanese, and Persian [1, 11, 13, 22-32].

In each laboratory and/or institution, researchers choose their own parameters to be applied in clinical investigation. Below are some items that should be considered when creating the assessment parameters.

3.1. Equipment and software

Sanfins and Colella-Santos analyzed which equipment and software were most often used for speech ABR assessment. The Biologic Navigator Pro (Natus) is the most used equipment, followed by the Neuroscan equipment (Biolink). As regards software, BioMARK (Biological Marker of Auditory Processing) and Neuroscan Stim 2 are the main packages available [1].

3.2. Electrode montage

The position of the electrodes follows the traditional (click) ABR assessment. Neurophysiological responses can be recorded with an active electrode positioned on the vertex (Cz), the reference electrode on the ipsilateral mastoid, and the ground on the contralateral mastoid, using one channel with surface electrodes fixed according to the 10-20 system [33]. The equipment's automatic switching of the reference and ground signals based on the stimulated ear should be activated. The electrode on the left ear can be connected to input 2/channel 1, and the electrode on the right ear to the ground connection cable. During the recording session, electrode impedance should be maintained below 5 kΩ and inter-electrode impedance below 3 kΩ [22].

3.3. Stimulated ear

Research shows that there is an asymmetry in the auditory processing of verbal sounds that begins in the brainstem and extends to the auditory cortex, as revealed by differences between the responses obtained when acoustic stimuli are presented to the right and left ears [34, 35].

Regarding the stimulated ear, the great majority of studies performed speech ABR assessment with stimuli delivered only to the right ear, which can be explained by the right-ear advantage in encoding speech via its contralateral projection to the left hemisphere [24, 26, 29, 31, 32, 36-44].

However, some researchers have reported that stimulus presentation can be performed on the ear with the better threshold, as confirmed by pure-tone audiometry [45]. In a systematic review of the applicability of speech ABR [1], it was noted that in 14.3% of the articles stimulation was performed monaurally. Regarding left- versus right-ear stimulation, there is scientific evidence that, even though a right-ear advantage in speech processing has been demonstrated, the left ear can participate in this process, albeit with less intense electrophysiological responses [28, 36, 46]. Therefore, an analysis of responses from both ears could help in the diagnostic process as well as in therapeutic monitoring.

Importantly, a tutorial on ABRs to complex sounds notes that monaural stimulation is preferred for children, while binaural stimulation is more realistic than monaural [4].

Ahadi et al. [25] presented the sound stimulus under three conditions: monaural right, monaural left, and binaural. They showed that the magnitude and strength of speech ABR responses depend on the stimulus presentation mode, and that binaural presentation of the speech syllable enables better visualization of the response.

3.4. Stimulus

Speech ABR assessment allows different types of sound stimuli to be applied. The syllable /da/ is the best known and the one most often applied in studies [11, 13, 15, 22, 25, 28, 29, 32, 36, 39, 45, 47]. However, some researchers have used disyllables such as /baba/ [27] or other consonant-vowel syllables such as /ba/ [23, 30, 31].

The presentation rate parameter is related to the duration of the sound stimulus; in the case of speech ABR, it is related to the length of the speech stimulus. The value most frequently found in studies is 10.9/s, although rates such as 11.1/s have also been reported. A literature review noted that in about 19% of previous studies on speech ABR assessment this parameter was not described by the researchers [1].

Considering the duration parameter, the most frequently found values are 40 and 170 ms [1]. There is a relationship between presentation rate and duration: the longer the stimulus, the lower the presentation rate must be [45, 48]. Song et al. [16] used both acoustic stimuli and concluded that the short (40 ms) and long (170 ms) stimuli both reflect the coding of speech in the brainstem in a reliable way, thus enabling neural changes to be monitored through an objective electrophysiological measure.
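The relation between stimulus duration and presentation rate can be made concrete with a small arithmetic sketch (the function name and the 4.35/s rate assumed for the 170 ms stimulus are ours, for illustration only):

```python
def silence_between_stimuli(rate_per_s: float, duration_ms: float) -> float:
    """Silent gap (ms) between consecutive stimuli: the inter-stimulus
    interval (reciprocal of the presentation rate) minus the stimulus
    duration itself."""
    interval_ms = 1000.0 / rate_per_s
    return interval_ms - duration_ms

# A 40 ms /da/ presented at 10.9/s leaves roughly 51.7 ms of silence:
gap_40 = silence_between_stimuli(10.9, 40.0)

# A longer 170 ms stimulus forces a lower rate; at an assumed 4.35/s
# it still leaves a silent gap of roughly 60 ms:
gap_170 = silence_between_stimuli(4.35, 170.0)
```

This makes explicit why a longer stimulus requires a lower presentation rate: at 10.9/s, a 170 ms stimulus would not even fit inside the inter-stimulus interval.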

The polarity of the sound stimulus is one of the most consistent parameters across studies on speech ABR assessment: approximately 90.5% of previous studies have used alternating polarity [13, 22, 23, 28, 29, 39, 45, 47, 49, 50]. The rationale for this choice is the reduction of stimulus artifacts and the cochlear microphonic [51].

Regarding the intensity used in speech ABR assessment, the literature suggests the use of 60-85 dB SPL [4, 15]. Since this is an assessment procedure, the sound should be presented at an intensity that is audible and comfortable for the patient. The majority of studies have used an intensity of 80 dB SPL [1].

The speech stimulus requires approximately 4000-6000 sweeps in order to obtain a robust and replicable response, unlike the click or tone-burst stimulus, which needs only around 2000 sweeps for a good-quality response [4]. The number of sweeps is one of the most variable parameters across studies [1]; however, the majority of studies used two blocks of 3000 artifact-free sweeps [13, 22, 25, 28, 36, 39-41, 47, 49]. The two trials are averaged to create a calculated wave of 6000 sweeps: the traces of both recordings are added, and the responses of the resultant wave are identified and analyzed (Figure 3).

Figure 3.

Electrophysiological response representation of two blocks of 3000 sweeps and the calculated wave of 6000 sweeps. Investigator's personal data based on a subject's assessment performed with the BioMARK™ software.
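The two-block averaging procedure described above can be sketched as follows (a toy illustration with NumPy; the simulated signal, noise level, and correlation-based replicability check are our assumptions, not part of any vendor software):

```python
import numpy as np

def average_blocks(block1: np.ndarray, block2: np.ndarray) -> np.ndarray:
    """Average two sub-averages (e.g. two blocks of 3000 sweeps each)
    into the final calculated 6000-sweep waveform."""
    return (block1 + block2) / 2.0

def replicability(block1: np.ndarray, block2: np.ndarray) -> float:
    """Pearson correlation between the two blocks: a simple check that
    the response replicates before accepting the averaged wave."""
    return float(np.corrcoef(block1, block2)[0, 1])

# Toy example: two noisy copies of the same underlying response.
t = np.linspace(0.0, 0.040, 400)          # a 40 ms epoch
signal = np.sin(2 * np.pi * 100 * t)      # stand-in for a ~100 Hz F0 component
rng = np.random.default_rng(0)
b1 = signal + 0.2 * rng.standard_normal(t.size)
b2 = signal + 0.2 * rng.standard_normal(t.size)

final_wave = average_blocks(b1, b2)       # the calculated wave
r = replicability(b1, b2)                 # high r suggests a stable response
```

Averaging the two blocks reduces residual noise, and a high inter-block correlation gives an objective indication that the recorded response is replicable.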

3.5. Transducer

The literature recommends that conventional (supra-aural) earphones not be used, since they can increase the chance of artifacts. The recommendation is thus to use insert earphones. When insert earphones cannot be used, it is possible to perform the test with loudspeakers, although the responses are not as reliable as those obtained with insert earphones. The evaluator should be very careful in positioning the patient and the loudspeakers, which should be equidistant from the right and left ears [4]. In addition, a previous study presented the speech stimulus through an individual hearing aid, with excellent, artifact-free, high-quality results [23].

3.6. Assessment of condition

As in the traditional ABR assessment, patients are instructed to keep their bodies relaxed and still in order to minimize myogenic artifacts [24].

Researchers emphasize that attention can influence the FFR portion of the response to speech sounds [52]. Therefore, most studies have allowed the patient to watch a movie at reduced sound intensity or with subtitles [16, 23, 40, 41, 50], which seems to keep them quiet and relaxed during the assessment. Other researchers allow the patient to choose between watching a movie or sleeping during the assessment [24, 45].

Different parameters are in use. The parameters most cited in the literature on speech ABR assessment, and associated with good clinical results, are presented in Chart 1. Note that there is a well-written tutorial by Skoe and Kraus [4] with detailed, clear, and objective information on the functioning and clinical application of speech ABR. This tutorial can serve as supporting material for those interested in unraveling this new and effective electrophysiological assessment method.

Parameter | Setting
Equipment | Biologic Navigator Pro
Software | BioMARK
Electrode montage | Cz, M1, and M2
Stimulated ear | Right ear
Stimulus | Speech
Stimulus type | Syllable /da/
Stimulus duration | 40 ms
Stimulus polarity | Alternating
Stimulus intensity | 80 dB SPL
Stimulus rate | 10.9/s
Number of sweeps | 6000
Replicability | Two blocks of 3000 sweeps
Transducer | Insert earphones
Assessment condition | Watching a movie

Chart 1.

Speech ABR parameters.
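For laboratories assembling their own protocol, the settings in Chart 1 can also be kept as a small machine-readable record (a minimal sketch; the dictionary and its key names are our own, not part of any equipment software):

```python
# Recording protocol from Chart 1, expressed as a plain Python dict.
SPEECH_ABR_PROTOCOL = {
    "equipment": "Biologic Navigator Pro",
    "software": "BioMARK",
    "electrode_montage": ("Cz", "M1", "M2"),
    "stimulated_ear": "right",
    "stimulus_type": "syllable /da/",
    "stimulus_duration_ms": 40,
    "polarity": "alternating",
    "intensity_db_spl": 80,
    "rate_per_s": 10.9,
    "sweeps_total": 6000,
    "replications": 2,              # two blocks of 3000 sweeps each
    "transducer": "insert earphones",
    "condition": "watching a movie",
}
```

Storing the protocol this way makes it easy to log alongside each recording, so that normative comparisons are only made between data collected under identical parameters.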


4. Criteria of normality

Before presenting the criteria of normality, it is important to understand the influence of the maturational process and gender in response to speech ABR.

4.1. Maturation

The ABR response to nonverbal stimuli is mature at around 18 months of age, while the speech ABR appears to be mature by the age of 5 [10]. The procedure can thus be used in young and school-age children, helping in the differential diagnosis of diseases with similar symptoms [14]. Further studies are being conducted to establish normative values for different age ranges and to confirm the age of maturation of the central auditory system for verbal sounds.

According to Yamamuro et al. [39], age affects the coding of sounds, whether elicited by a simple or a complex stimulus, and neural timing and auditory skills improve over the years. The speech ABR responses of a 5-year-old child are not very different from those of children aged 8-12 years, whereas the responses of children aged 3-4 years are very different in morphology and latency.

Wave | Latency (ms), test (mean ± SD) | Latency (ms), retest (mean ± SD) | Amplitude (μV), test (mean ± SD) | Amplitude (μV), retest (mean ± SD)
V | 6.65 ± 0.27 | 6.68 ± 0.27 | 0.13 ± 0.05 | 0.13 ± 0.04
A | 7.62 ± 0.35 | 7.62 ± 0.37 | 0.20 ± 0.06 | 0.21 ± 0.06
C | 18.60 ± 0.68 | 18.47 ± 0.68 | 0.03 ± 0.06 | 0.03 ± 0.05
D | 22.67 ± 0.59 | 22.67 ± 0.58 | 0.13 ± 0.07 | 0.14 ± 0.07
E | 31.12 ± 0.53 | 31.20 ± 0.57 | 0.22 ± 0.06 | 0.21 ± 0.07
F | 39.70 ± 0.57 | 39.71 ± 0.50 | 0.14 ± 0.09 | 0.13 ± 0.08
O | 48.26 ± 0.43 | 48.34 ± 0.39 | 0.15 ± 0.06 | 0.16 ± 0.06

VA measure | Test (mean ± SD) | Retest (mean ± SD)
Slope VA (μV/ms) | 0.35 ± 0.11 | 0.37 ± 0.12
Area VA (μV × ms) | 0.16 ± 0.05 | 0.15 ± 0.05

Table 1.

Parametric study (mean and standard deviation) of the syllable /da/, 40 ms, in silence, performed in adults with normal hearing (Song et al. [16]) on the right ear in two different conditions (test and retest).

Song et al. [16] performed their study with 45 adults with normal hearing (29 females; 19-36 years old; mean age 24.5 ± 3.0 years).


Note: Parametric study in normal adults.
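As an illustration, the VA complex slope can be recomputed from the mean values in Table 1 (a sketch under the assumption that the tabulated amplitudes are magnitudes and that wave A is a negative trough; the function name is ours):

```python
def va_slope(v_lat_ms: float, v_amp_uv: float,
             a_lat_ms: float, a_amp_uv: float) -> float:
    """VA slope in uV/ms: peak-to-trough amplitude of the V-A complex
    divided by the V-to-A latency interval. Amplitudes are given as
    magnitudes, with wave A assumed to be a negative trough."""
    return (v_amp_uv + a_amp_uv) / (a_lat_ms - v_lat_ms)

# Test-condition means from Table 1 (Song et al. [16]):
slope = va_slope(6.65, 0.13, 7.62, 0.20)
# ~0.34 uV/ms, consistent with the tabulated mean VA slope of 0.35
```

The agreement between the recomputed value and the tabulated mean suggests this is how the slope measure is derived, though each laboratory should confirm the exact definition used by its analysis software.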


4.2. Gender influence

Previous studies have shown differences in auditory perception between genders, with better performance in females along the entire peripheral and central auditory pathway [53, 54]. When the focus of analysis is the speech ABR, women show better responses (higher amplitudes and shorter latencies), but only for the initial portion of the coding of the speech stimulus, when compared to men [55]. These gender differences in speech ABR responses have been explained by the premise that the synapses of the afferent and efferent auditory systems are strongly influenced by estrogen activity [56].

4.3. Normative data

Some studies serve as parametric models for the analysis of speech ABR. Normative data for young adults (19-36 years old) with normal hearing, with analysis of all the waves, are presented in Table 1 [16]. Two studies of children and adolescents are also presented: (i) children between 8 and 12 years of age with normal hearing, with analysis of waves V, A, C, and F and the VA complex, in Table 2 [15], and (ii) children and adolescents between 8 and 16 years of age with normal hearing, with examination of all the waves, in Table 3 [22].

Wave | Latency (ms), right ear (mean ± SD) | Amplitude (μV), right ear (mean ± SD)
V | 6.61 ± 0.25 | 0.31 ± 0.15
A | 7.51 ± 0.34 | 0.65 ± 0.19
C | 17.69 ± 0.48 | 0.36 ± 0.09
F | 39.73 ± 0.61 | 0.43 ± 0.19

VA measure | Right ear (mean ± SD)
Slope VA (μV/ms) | 0.13 ± 0.05
Area VA (μV × ms) | 1.70 ± 1.23

Table 2.

Parametric study (mean and standard deviation) of the syllable /da/, 40 ms, in silence, performed in children with normal hearing (Russo et al. [15]) on the right ear.

Russo et al. [15] studied 36 and 38 children and adolescents (17 females) with normal hearing (8-12 years old).


Note: Parametric study in normal children.


The majority of studies on speech ABR assessment were performed with monaural stimulation of the right ear [13, 24, 29, 39, 49, 50]. The choice of assessing only the right ear is related to the left-hemisphere advantage for processing language sounds. In addition, earlier research has shown no statistically significant differences between right- and left-ear responses in subjects with normal hearing and typical development. However, many conditions remain to be studied through speech ABR, and it is important to consider whether there are differences in responses between the ears.

Accordingly, responses from the right and left ears are presented for a population of children and adolescents with normal hearing and normal development, so that they can be used as a comparison with the responses obtained in subjects with different pathologies.

Parametric studies provide direction for researchers. It is fundamental to know each reference author's collection and analysis parameters before using their data. Each research center or clinic should carry out its own normative study for the different age groups.

Wave | Latency (ms), right ear (mean ± SD) | Latency (ms), left ear (mean ± SD) | Amplitude (μV), right ear (mean ± SD) | Amplitude (μV), left ear (mean ± SD)
V | 6.50 ± 0.21 | 6.51 ± 0.21 | 0.12 ± 0.06 | 0.11 ± 0.06
A | 7.46 ± 0.33 | 7.48 ± 0.36 | 0.22 ± 0.09 | 0.21 ± 0.07
C | 18.33 ± 0.42 | 18.41 ± 0.46 | 0.10 ± 0.08 | 0.11 ± 0.10
D | 22.21 ± 0.66 | 22.36 ± 0.44 | 0.14 ± 0.09 | 0.13 ± 0.08
E | 30.89 ± 0.50 | 30.78 ± 0.61 | 0.30 ± 0.39 | 0.23 ± 0.09
F | 39.37 ± 0.55 | 39.20 ± 0.47 | 0.24 ± 0.29 | 0.19 ± 0.09
O | 48.00 ± 0.75 | 47.95 ± 0.54 | 0.21 ± 0.30 | 0.16 ± 0.12

VA measure | Right ear (mean ± SD) | Left ear (mean ± SD)
Slope VA (μV/ms) | 0.37 ± 0.14 | 0.35 ± 0.13
Area VA (μV × ms) | 0.33 ± 0.13 | 0.31 ± 0.13

Table 3.

Parametric study (mean and standard deviation) of the syllable /da/, 40 ms, in silence, performed in children and adolescents with normal hearing (Sanfins et al. [22]) on the right and left ears.

Sanfins et al. [22] studied 40 children and adolescents (25 females) with normal hearing (8-16 years old; mean age 10.95 ± 2.0 years).


Note: Parametric study in normal children and adolescent.
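The normative tables above lend themselves to a simple screening computation: comparing an individual's wave latency with the normative mean and standard deviation. A minimal sketch follows (the 2-SD criterion and the example measurement of 7.10 ms are assumptions for illustration; the normative values are taken from the right-ear wave V row of Table 3):

```python
def latency_z(measured_ms: float, norm_mean_ms: float, norm_sd_ms: float) -> float:
    """z-score of a measured wave latency against normative data."""
    return (measured_ms - norm_mean_ms) / norm_sd_ms

# Wave V, right ear, from Table 3 (Sanfins et al. [22]): 6.50 +/- 0.21 ms.
z = latency_z(7.10, 6.50, 0.21)
# An assumed screening criterion: flag latencies > 2 SD above the mean.
delayed = z > 2.0
```

This kind of comparison is only valid when the individual's recording parameters match those of the normative study, which is why each center is encouraged to collect its own normative data.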



5. Clinical applicability

5.1. Auditory training

Auditory training can induce neurophysiological changes that can be observed through speech ABR evaluation. According to Killion et al. [57], an auditory training program promotes gains in speech perception in both quiet and noisy environments and improves short-term memory skills and attentional processes.

According to Hayes et al. [58], children with learning problems can benefit from an auditory rehabilitation program through auditory training. Research has shown that these children have delayed speech ABR responses, more specifically in the onset portion (wave A), and speech ABR assessment may be able to ascertain whether an auditory training program was effective, monitoring the benefits of rehabilitation in children and in young adults [15, 16].

Further studies are needed in the elderly population to determine whether this type of assessment can be effective for monitoring. Anderson et al. [49] reported that the elderly usually have hearing loss, so an auditory training program should be recommended alongside the selection and fitting of hearing aids suited to each elderly person's needs.

Auditory training and amplification are ideal for improving auditory function and, especially, the process of speech perception. In this context, speech ABR assessment could play an important role in demonstrating quickly, clearly, and objectively the real gains of interventions. Researchers have emphasized that speech ABR assessment is considered a biological marker of auditory training, being able to identify subjects who will benefit from an auditory training program [58, 59].

5.2. The aging process

The elderly have reduced neural synchrony in the encoding of speech sounds, especially when speech is produced in the presence of background noise. Speech ABR assessment is able to monitor the difficulty in understanding speech in noise reported by the elderly. The hearing aid fitting process allows speech sounds to be heard more clearly; accordingly, changes in the morphology and latency values of speech ABR responses have been observed after fitting [24, 36, 45].

5.3. Differential diagnosis

Research shows that the literacy process depends on efficient auditory processing in the brainstem. Speech ABR assessment could accurately predict, at an early stage, possible changes in the processes of reading, writing, and literacy in preschool children [41, 60, 61].

Children with learning, speech, and hearing impairments not only suffer from background noise and competing sounds but also have some difficulty perceiving speech sounds in quiet environments [62]. This difficulty can arise from changes in temporal processing that impact speech perception. In this context, speech ABR is a biological marker of auditory processing disorder, being able to identify children predisposed to these changes [4].

Children with dyslexia often have impairments in the perception of speech sounds that can affect their reading skills [63]. According to Hornickel and Kraus [64], good readers have a stable neural representation of sound, and children with inconsistent neural responses are likely at a disadvantage when learning to read. Thus, speech ABR can help identify these children, enabling more appropriate intervention.

Besides that, speech ABR can also be applied in diagnosing and categorizing children with learning disability into different subgroups, in assessing the effects of aging on central auditory processing of speech, and in assessing the effects of central auditory deficits in hearing aid and cochlear implant users [11].

Understanding the neural processing of speech sounds at the brainstem level may provide knowledge regarding the central auditory processes involved in normal hearing subjects and also in clinical populations [10]. Moreover, altered responses of speech ABR may be associated with impaired speech perception in noise. These changes might have a negative impact on communication and have serious consequences for academic success [8].

5.4. Musician

Currently, there is increasing interest in the influence of musical experience on language processing. Long-term intensive musical training seems to cause anatomical and physiological changes, improving working memory in cognitive processes, the control of emotions, and the perception of sound stimuli [65].

The brainstem plays an important role in the encoding of speech sound stimuli and in temporal processing [66]. Temporal processing contributes to the perception of consonant duration and to the identification of notes and musical scales [66, 67]. The literacy process, including reading, writing, and language, is also influenced by temporal processing [68]. The detection of small and rapid changes in the sound stimulus is associated with rhythm, stimulus frequency, phonemic discrimination, duration, and pitch discrimination [69]. Understanding how music influences the encoding of speech sounds can provide more information about the learning process [64]. One way to analyze this is through speech ABR responses.

5.5. History of otitis media

Otitis media is one of the most common childhood diseases, affecting about two-thirds of children in the first 5 years of life [70, 71]. This period is important for the development of oral and written language. Otitis media can cause functional sequelae in the middle ear structures and can induce a temporary mild-to-moderate hearing loss, which can remain for a few days or for several weeks [72, 73]. Concomitantly, the accumulation of fluid in the middle ear interferes with speech perception, causing distortion in the perception of acoustic signals and reducing the speed and accuracy of verbal decoding [74]. When hearing fluctuation occurs early in life, during the critical period for linguistic development, the acquisition of speech and language is limited. As a result, communication problems may appear, such as language development impairment, auditory processing deficits, cognitive impairment, impaired psychosocial development, and impairment in the acquisition of literacy skills [75, 76].

Inadequate auditory stimulation in childhood can lead to long-term alterations of the auditory structures in the central auditory nervous system [73]. Research shows that children who suffered from secretory otitis media in their first 6 years of life and underwent surgery for bilateral ventilation tube placement demonstrate neurophysiological modifications of speech perception when compared with typically developing children and adolescents.


6. Conclusion

The assessment of speech ABR could accurately predict, at an early stage, possible changes in the processes of reading, writing, and literacy in preschool children.

The speech ABR is objective, fast, and can be applied from early childhood. It is equally effective in different languages and can provide differential diagnoses of diseases with similar symptoms, serving as an effective biomarker of the auditory processing disorders that may be present in various conditions, such as dyslexia, specific language impairment, hearing loss, auditory processing disorders, otitis media, and scholastic difficulties.

It is a field with great research potential, offering different approaches to assist in the detection, treatment, and monitoring of various diseases.


Acknowledgments

This work was supported by the Project “Integrated system of tools for diagnostics and telerehabilitation of sensory organs disorders (hearing, vision, speech, balance, taste, smell)” acr. INNOSENSE, co-financed by the National Centre for Research and Development (Poland), within the STRATEGMED program.

References

  1. Sanfins M, Colella-Santos M. A review of the clinical applicability of speech-evoked auditory brainstem responses. Journal of Hearing Science. 2016;6(1):9–16.
  2. Sanfins MD. Auditory neuropathy/auditory dys-synchrony: a study with the hearing impaired students of three special schools in the city of São Paulo [dissertation]. São Paulo: University of São Paulo, Faculty of Medicine; 2004. doi:10.11606/D.5.2004.tde-19092014-101620
  3. Blackburn CC, Sachs MB. The representation of the steady-state vowel /eh/ in the discharge patterns of cat anteroventral cochlear nucleus neurons. Journal of Neurophysiology. 1990;63:1303–1329.
  4. Skoe E, Kraus N. Auditory brainstem response to complex sounds: a tutorial. Ear and Hearing. 2010;31:320–324.
  5. Dhar S, Abel R, Hornickel J, Nicol T, Skoe E, Zhao W, et al. Exploring the relationship between physiological measures of cochlear and brainstem function. Clinical Neurophysiology. 2009;120(5):959–966.
  6. Basu M, Krishnan A, Weber-Fox C. Brainstem correlates of temporal auditory processing in children with specific language impairment. Developmental Science. 2010;13(1):77–91.
  7. Hornickel J, Skoe E, Nicol T, Zecker S, Kraus N. Subcortical differentiation of stop consonants relates to reading and speech-in-noise perception. Proceedings of the National Academy of Sciences of the United States of America. 2009;106(31):13022–13027.
  8. Kraus N, Hornickel J. cABR: a biological probe of auditory processing. In: Geffner DS, Ross-Swain D, editors. Auditory Processing Disorders: Assessment, Management, and Treatment. 2nd ed. San Diego: Plural Publishing; 2013. pp. 159–183.
  9. Baran JA. Test battery considerations. In: Handbook of (Central) Auditory Processing Disorder. Vol. 1. San Diego: Plural Publishing; 2007. pp. 163–192.
  10. Johnson KL, Nicol T, Zecker SG, Kraus N. Developmental plasticity in the human auditory brainstem. Journal of Neuroscience. 2008;28(15):4000–4007.
  11. Sinha SK, Basavaraj V. Speech evoked auditory brainstem responses: a new tool to study brainstem encoding of speech sounds. Indian Journal of Otolaryngology and Head and Neck Surgery. 2010;62(4):395–399.
  12. Kent R, Read C. Análise Acústica da Fala [Acoustic Analysis of Speech]. 2015. 504 p.
  13. Sanfins M, Borges L, Ubiali T, Colella-Santos M. Speech auditory brainstem response (speech ABR) in the differential diagnosis of scholastic difficulties. Brazilian Journal of Otorhinolaryngology. 2015 Nov 6. pii: S1808-8694(15)00187-1. doi:10.1016/j.bjorl.2015.05.014
  14. Johnson KL, Nicol TG, Kraus N. Brain stem response to speech: a biological marker of auditory processing. Ear and Hearing. 2005;26(5):424–434.
  15. Russo N, Nicol T, Musacchia G, Kraus N. Brainstem responses to speech syllables. Clinical Neurophysiology. 2004;115:2021–2030.
  16. Song JH, Nicol T, Kraus N. Test-retest reliability of the speech-evoked auditory brainstem response. Clinical Neurophysiology. 2011;122(2):346–355.
  17. Banai K, Kraus N. The dynamic brainstem: implications for APD. In: McFarland D, Cacace A, editors. Current Controversies in Central Auditory Processing Disorder. San Diego, CA: Plural Publishing; 2008. pp. 269–289.
  18. Strait DL, Kraus N, Parbery-Clark A, Ashley R. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance. Hearing Research. 2010;261(1–2):22–29.
  19. Strait DL, Slater J, O’Connell S, Kraus N. Music training relates to the development of neural mechanisms of selective auditory attention. Developmental Cognitive Neuroscience. 2015;12:94–104.
  20. Musacchia G, Strait D, Kraus N. Relationships between behavior, brainstem and cortical encoding of seen and heard speech in musicians and non-musicians. Hearing Research. 2008;241(1–2):34–42.
  21. Parbery-Clark A, Tierney A, Strait DL, Kraus N. Musicians have fine-tuned neural distinction of speech syllables. Neuroscience. 2012;219:111–119.
  22. Sanfins MD, Borges LR, Ubiali T, Donadon C, Hein TAD, Hatzopoulos S, et al. Speech-evoked brainstem response in normal adolescent and children speakers of Brazilian Portuguese. International Journal of Pediatric Otorhinolaryngology. 2016;90:12–19.
  23. Bellier L, Veuillet E, Vesson JF, Bouchet P, Caclin A, Thai-Van H. Speech auditory brainstem response through hearing aid stimulation. Hearing Research. 2015;325:49–54.
  24. Fujihira H, Shiraishi K. Correlations between word intelligibility under reverberation and speech auditory brainstem responses in elderly listeners. Clinical Neurophysiology. 2015;126(1):96–102.
  25. Ahadi M, Pourbakht A, Jafari AH, Jalaie S. Effects of stimulus presentation mode and subcortical laterality in speech-evoked auditory brainstem responses. International Journal of Audiology. 2014;53(4):243–249.
  26. Rocha-Muniz CN, Befi-Lopes DM, Schochat E. Sensitivity, specificity and efficiency of speech-evoked ABR. Hearing Research. 2014;317:15–22.
  27. Wagner R, Torgesen J, Rashotte C. Development of reading-related phonological processing abilities: new evidence of bidirectional causality from a latent variable longitudinal study. Developmental Psychology. 1994;30(1):73–87.
  28. Rana B, Barman A. Correlation between speech-evoked auditory brainstem responses and transient evoked otoacoustic emissions. Journal of Laryngology and Otology. 2011;125(9):911–916.
  29. Karawani H, Banai K. Speech-evoked brainstem responses in Arabic and Hebrew speakers. International Journal of Audiology. 2010;49(11):844–849.
  30. Akhoun I, Moulin A, Jeanvoine A, Menard M, Buret F, Vollaire C, et al. Speech auditory brainstem response (speech ABR) characteristics depending on recording conditions and hearing status: an experimental parametric study. Journal of Neuroscience Methods. 2008;175(2):196–205.
  31. Akhoun I, Gallégo S, Moulin A, Ménard M, Veuillet E, Berger-Vachon C, et al. The temporal relationship between speech auditory brainstem responses and the acoustic pattern of the phoneme /ba/ in normal-hearing adults. Clinical Neurophysiology. 2008;119(4):922–933.
  32. Friendly RH, Rendall D, Trainor LJ. Learning to differentiate individuals by their voices: infants’ individuation of native- and foreign-species voices. Developmental Psychobiology. 2014;56(2):228–237.
  33. Jasper H. The ten-twenty system of the international federation. Electroencephalography and Clinical Neurophysiology. 1958;10:371–375.
  34. Abrams D, Nicol T, Zecker S, Kraus N. Auditory brainstem timing predicts cerebral asymmetry for speech. The Journal of Neuroscience. 2006;26(43):11131–11137.
  35. Hornickel J, Skoe E, Kraus N. Subcortical laterality of speech encoding. Audiology & Neuro-Otology. 2009;14:198–207.
  36. Elkabariti RH, Khalil LH, Husein R, Talaat HS. Speech evoked auditory brainstem response findings in children with epilepsy. International Journal of Pediatric Otorhinolaryngology. 2014;78(8):1277–1280.
  37. Kösem A, Gramfort A, van Wassenhove V. Encoding of event timing in the phase of neural oscillations. NeuroImage. 2014;92:274–284.
  38. Shamma S, Fritz J. Adaptive auditory computations. Current Opinion in Neurobiology. 2014;25:164–168.
  39. Yamamuro K, Ota T, Iida J, Nakanishi Y, Matsuura H, Uratani M, et al. Event-related potentials reflect the efficacy of pharmaceutical treatments in children and adolescents with attention deficit/hyperactivity disorder. Psychiatry Research. 2016;242:288–294.
  40. Hornickel J, Lin D, Kraus N. Speech-evoked auditory brainstem responses reflect familial and cognitive influences. Developmental Science. 2013;16(1):101–110.
  41. Hornickel J, Knowles E, Kraus N. Test-retest consistency of speech-evoked auditory brainstem responses in typically-developing children. Hearing Research. 2012;284(1–2):52–58.
  42. Rocha-Muniz CN, Befi-Lopes DM, Schochat E. Investigation of auditory processing disorder and language impairment using the speech-evoked auditory brainstem response. Hearing Research. 2012;294(1–2):143–152.
  43. Anurova I, Renier LA, De Volder AG, Carlson S, Rauschecker JP. Relationship between cortical thickness and functional activation in the early blind. Cerebral Cortex. 2015;25(8):2035–2048.
  44. Song JH, Banai K, Russo NM, Kraus N. On the relationship between speech- and nonspeech-evoked auditory brainstem responses. Audiology and Neuro-Otology. 2006;11(4):233–241.
  45. Mamo SK, Grose JH, Buss E. Speech-evoked ABR: effects of age and simulated neural temporal jitter. Hearing Research. 2015.
  46. Engineer CT, Perez CA, Carraway RS, Chang KQ, Roland JL, Kilgard MP. Speech training alters tone frequency tuning in rat primary auditory cortex. Behavioural Brain Research. 2013;8:1–10.
  47. Tahaei AA, Ashayeri H, Pourbakht A, Kamali M. Speech evoked auditory brainstem response in stuttering. Scientifica (Cairo). 2014;2014:328646.
  48. Hamaguchi K, Tschida KA, Yoon I, Donald BR, Mooney R. Auditory synapses to song premotor neurons are gated off during vocalization in zebra finches. eLife. 2014;3:e01833.
  49. Anderson S, Parbery-Clark A, White-Schwoch T, Kraus N. Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance. Journal of Speech, Language, and Hearing Research. 2013;56(1):31–43.
  50. Song JH, Banai K, Russo NM, Kraus N. On the relationship between speech- and nonspeech-evoked auditory brainstem responses. Audiology and Neurotology. 2006;11(4):233–241.
  51. Gorga M, Abbas P, Worthington D. Stimulus calibration in ABR measurements. In: Jacobson J, editor. The Auditory Brainstem Response. San Diego: College-Hill Press; 1985.
  52. Galbraith G, Olfman D, Huffman T. Selective attention affects human brain stem frequency-following response. Neuroreport. 2003;14:735–738.
  53. Jerger J, Hall J. Effects of age and sex on auditory brainstem response. Archives of Otolaryngology. 1980;106:387–391.
  54. Sininger Y, Cone-Wesson B, Abdala C. Gender distinctions and lateral asymmetry in the low-level auditory brainstem response of the human neonate. Hearing Research. 1998;126(1–2):58–66.
  55. Krizman J, Skoe E, Kraus N. Sex differences in auditory subcortical function. Clinical Neurophysiology. 2012;123(3):590–597.
  56. Tremere L, Pinaud R. Brain-generated estradiol drives long-term optimization of auditory coding to enhance the discrimination of communication signals. The Journal of Neuroscience. 2011;31:3271–3289.
  57. Killion M, Niquette P, Gudmundsen G, Revit L, Banerjee S. Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America. 2004;116:2395–2405.
  58. Hayes E, Warrier C, Nicol T. Neural plasticity following auditory training in children with learning problems. Clinical Neurophysiology. 2003;114:673–684.
  59. Russo NM, Nicol TG, Zecker SG, Hayes EA, Kraus N. Auditory training improves neural timing in the human brainstem. Behavioural Brain Research. 2005;156(1):95–103.
  60. Anderson S, Parbery-Clark A, White-Schwoch T, Kraus N. Aging affects neural precision of speech encoding. Journal of Neuroscience. 2012;32(41):14156–14164.
  61. Wible B, Nicol T, Kraus N. Atypical brainstem representation of onset and formant structure of speech sounds in children with language-based learning problems. Biological Psychology. 2004;67(3):299–317.
  62. Ziegler J, Pech-Georgel C, George F, Lorenzi C. Speech-perception-in-noise deficits in dyslexia. Developmental Science. 2009;12:732–745.
  63. Bogliotti C, Serniclaes W, Messaoud-Galusi S, Sprenger-Charolles L. Discrimination of speech sounds by children with dyslexia: comparisons with chronological age and reading level controls. Journal of Experimental Child Psychology. 2008;101:137–155.
  64. Hornickel J, Kraus N. Unstable representation of sound: a biological marker of dyslexia. The Journal of Neuroscience. 2013;33(8):3500–3504.
  65. Owen MJ, Norcross-Nechay K, Howie VM. Brainstem auditory evoked potentials in young children before and after tympanostomy tube placement. International Journal of Pediatric Otorhinolaryngology. 1993;25:105–117.
  66. Lukhanina EP, Karaban IN, Burenok IuA, Mel'nik NA, Berezetskaia NM. Effect of cerebrolysin on the electroencephalographic indices of brain activity in Parkinson's disease. Zhurnal Nevrologii i Psikhiatrii Imeni S.S. Korsakova. 2004;104(7):54–60.
  67. Li M, Kuroiwa Y, Wang L, Kamitani T, Omoto S, Hayashi E, et al. Visual event-related potentials under different interstimulus intervals in Parkinson's disease: relation to motor disability, WAIS-R, and regional cerebral blood flow. Parkinsonism & Related Disorders. 2005;11(4):209–219.
  68. Jiang C, Kaseda Y, Kumagai R, Nakano Y, Nakamura S. Habituation of event-related potentials in patients with Parkinson's disease. Physiology and Behavior. 2000;68(5):741–747.
  69. Wang L, Kuroiwa Y, Li M, Kamitani T, Wang J, Takahashi T, et al. The correlation between P300 alterations and regional cerebral blood flow in non-demented Parkinson's disease. Neuroscience Letters. 2000;282(3):133–136.
  70. Chhetri S. Acute otitis media: a simple diagnosis, a simple treatment. Nepal Medical College Journal. 2014;16:33–36.
  71. Kong K, Coates H. Natural history, definitions, risk factors and burden of otitis media. Medical Journal of Australia. 2009;191:S39–S43.
  72. Bess F, Humes L. Patologias do sistema auditivo [Pathologies of the auditory system]. In: Fundamentos de Audiologia. Porto Alegre: Artmed; 1998. pp. 155–195.
  73. Bahare K, Farhad F, Maryam E, Zahra HD. Auditory processing abilities in children with chronic otitis media with effusion. Acta Oto-Laryngologica. 2016;136:456–459.
  74. Katz J, Tillery K, Mecca F. Uma introdução ao processamento auditivo [An introduction to auditory processing]. In: Lichitg I, Carvallo R, editors. Abordagens Atuais. São Paulo: Pró-Fono; 1997. pp. 145–172.
  75. Borg E, Risberg A, McAllister B, Undermar B, Edquist G, Reinholdson A, et al. Language development in hearing-impaired children: establishment of a reference material for a language test for hearing-impaired children, LATHIC. International Journal of Pediatric Otorhinolaryngology. 2002;65(1):15–26.
  76. Shriberg L, Flipsen PJ, Thielke H, Kwiatkowski J, Kertoy M, Katcher M, et al. Risk for speech disorder associated with early recurrent otitis media with effusion: two retrospective studies. Journal of Speech, Language, and Hearing Research. 2000;43(1):79–99.
