Open access peer-reviewed chapter

Speech-Evoked Brainstem Response

By Milaine Dominici Sanfins, Piotr H. Skarzynski and Maria Francisca Colella-Santos

Submitted: May 31st 2016. Reviewed: October 6th 2016. Published: March 29th 2017

DOI: 10.5772/66206

Abstract

The auditory brainstem response (ABR) is a clinical tool for assessing the neural functionality of the auditory brainstem. The use of verbal stimuli in ABR protocols has provided important information about how speech stimuli are processed by brainstem structures. The perception of speech sounds appears to begin in the brainstem, which plays an important role in the reading process and in phonological acquisition. Speech ABR assessment allows the identification of fine-grained auditory processing deficits that do not appear in click-evoked ABR responses. The syllable /da/ is commonly used in speech ABR assessment because it is considered a universal syllable, which allows it to be applied in different countries with good clinical reliability. The speech ABR is an objective, fast procedure that can be applied to very young subjects. It can be utilized in different languages and can provide differential diagnoses of diseases with similar symptoms, serving as an effective biomarker of the auditory processing disorders present in various conditions, such as dyslexia, specific language impairment, hearing loss, auditory processing disorders, otitis media, and scholastic difficulties. Speech ABR protocols can assist in the detection, treatment, and monitoring of various types of hearing impairment.

Keywords

  • speech
  • speech perception
  • electrophysiology
  • frequency-following response
  • speech-ABR

1. Introduction

Auditory processing of information can be analyzed through an assessment of auditory evoked potentials (AEPs). Among the different types of AEPs is the auditory brainstem response (ABR), a clinical tool for assessing the neural functionality of the auditory brainstem [1]. Until recently, assessment using clinical ABR protocols was carried out only with nonverbal stimuli, such as clicks, tone-bursts, and chirps. ABR responses (i) permit the analysis of the integrity of the auditory pathways and (ii) can establish electrophysiological thresholds in order to identify basic neural abnormalities and to evaluate patients who do not provide reliable responses in the standard behavioral audiological assessment [2].

Although the click-evoked ABR has been widely used clinically, it is still necessary to unravel how verbal sounds are coded in the brainstem. Recent technological advances have enabled the inclusion of verbal stimuli in commercial ABR equipment. The use of verbal stimuli in ABR protocols has provided important information about how speech stimuli are processed by brainstem structures, which actively participate in the analysis of complex verbal stimuli [3].

The verbal stimulus most widely used in speech ABR is a syllable composed of a consonant and a vowel (CV) [4]. Consonant perception relies on the distinction between voice production times and the sound of the consonant, a distinction that guarantees intelligibility in the process of human communication and the proper development of language.

The perception of speech sounds seems to begin in the brainstem, which has an important role in the reading process and phonological acquisition [5–7]. An effective and objective way to investigate this process is the speech ABR assessment, which allows the identification of fine-grained auditory processing deficits associated with real-world communication skills, deficits that do not appear in click responses; it can also be used for early identification of auditory processing impairments in very young children [8]. Above all, speech ABR can be used as an objective measure of hearing function. One of the great advantages of this method is that it is not influenced by environmental issues, which can disrupt behavioral assessments [2]. Even the best behavioral tests can be confounded by subject factors such as attention, motivation, and alertness/fatigue, and by co-occurring disorders, such as language impairments, learning impairments, or attention deficits [9].

Understanding the neural processing of speech sounds at the brainstem level provides knowledge about the central auditory processes involved in normal-hearing subjects and also in clinical populations [10]. Moreover, altered speech ABR responses may be associated with impaired speech perception in noise. These changes can have a negative impact on communication and serious consequences for academic success [8]. According to Sinha and Basavaraj [11], the major applications of speech ABR include diagnosing and categorizing children with learning disability into different subgroups, assessing the effects of aging on central auditory processing of speech, and assessing the effects of central auditory deficits in hearing aid and cochlear implant users.

2. Assessment

Speech ABR has an important feature: specific aspects of the acoustic signal are preserved and reflected in the neural coding (Figure 1) [4]. Furthermore, this assessment makes it possible to understand the neural basis of the auditory system, whether it is normal or deficient.

Figure 1.

Representation of 40 ms of syllable /da/ (gray) stimulus and responses (black) [4].

The verbal stimulus used in the speech ABR assessment is normally one of the syllables /ba/, /da/, or /ga/. The verbal assessment provides information about how the speech syllable is encoded by the auditory system. The trace of the speech ABR response can be divided into two parts: the onset and the frequency following response (FFR). The first part represents the consonant and the second part the vowel [10].

The best-known model is elicited with the synthesized syllable /da/ generated by computer software. The use of synthesized speech allows the acoustic parameters to be controlled and kept constant, ensuring the quality of the stimulus presented to the listener and/or the patient [12]. This stimulus modality was developed by the group of Dr. Nina Kraus at Northwestern University. The stimulus consists of the consonant /d/ (transient portion, the onset) and the short vowel /a/ (sustained portion, the frequency following response). When elicited by the stimulus /da/, the subcortical response emerges as a waveform of seven peaks (V, A, C, D, E, F, and O), wherein the single wave with a positive peak is the wave V complex. Waves V and A reflect the onset response, wave C the transition area, waves D, E, and F the periodic area (the frequency following response), and wave O the offset response [4, 13, 14]. A typical response is shown in Figure 2 [13].

Figure 2.

Electrophysiological response representation of the synthesized syllable /da/. Investigator's personal data based on the assessment of a normal-hearing subject, performed with the BioMARK™ software [13].

It is important to note that the onset component seems to be elicited at around 10 ms and is considered the transient portion of the sound stimulus, reflecting the decoding of the fast temporal changes inherent in the consonant [15]. The FFR component, called the sustained portion, seems to be elicited at around 18–50 ms. This component reflects the encoding of the periodic and harmonic structure of the vowel sound [11] and is also related to the encoding of the fundamental frequency and its modulations (first and second formants) [4, 15].
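To make these latency windows concrete, the sketch below converts a window in milliseconds into sample indices and picks a peak latency from a synthetic waveform. This is a hypothetical illustration, not part of any commercial analysis software: the function names, the 12 kHz sampling rate, and the synthetic Gaussian "wave V" are all our own assumptions.

```python
import numpy as np

def window_indices(fs, t_start_ms, t_end_ms):
    """Convert a latency window in ms to sample indices at sampling rate fs (Hz)."""
    return int(t_start_ms * fs / 1000), int(t_end_ms * fs / 1000)

def peak_latency_ms(wave, fs, t_start_ms, t_end_ms, polarity="positive"):
    """Latency (ms) of the extremum inside [t_start_ms, t_end_ms].
    polarity='positive' picks the maximum (e.g. wave V);
    'negative' picks the minimum (e.g. the downward-going waves)."""
    i0, i1 = window_indices(fs, t_start_ms, t_end_ms)
    seg = wave[i0:i1]
    idx = np.argmax(seg) if polarity == "positive" else np.argmin(seg)
    return (i0 + idx) * 1000.0 / fs

# Synthetic 60 ms response sampled at 12 kHz with a positive peak near 6.6 ms
fs = 12000
t = np.arange(int(0.060 * fs)) / fs * 1000.0   # time axis in ms
wave = np.exp(-((t - 6.6) ** 2) / 0.1)         # fake "wave V" bump
print(round(peak_latency_ms(wave, fs, 5, 10, "positive"), 2))
```

The same `window_indices` call with 18–50 ms would isolate the FFR portion for separate analysis.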

Another feature of speech ABR responses is their low intra- and inter-subject variation: the morphological characteristics remain stable [16, 17].

Speech sounds are among the sounds most frequently present in the daily life of every human being. Long-term auditory experience can improve the performance of the whole auditory system. Therefore, a subject who processes speech sounds well has better electrophysiological responses to this type of stimulus, showing that auditory experience might modify the basic sensory coding of the whole auditory pathway [18–21]. On the other hand, a subject with auditory deprivation may show significant electrophysiological changes in the auditory system, as can be seen in children with a history of otitis media.

3. Parameters

There is ongoing research into how the coding of verbal sounds occurs and into making speech ABR part of the clinical routine.

The syllable /da/ is commonly used for speech ABR assessment because it is considered a universal syllable, allowing it to be applied in different countries with good clinical reliability [4]. However, previous studies show that responses differ between subjects from different cultures [22], since each language has its own characteristics and peculiarities that may or may not contribute to suitable processing of speech sounds.

The majority of the studies were performed with native English speakers, which is explained by the fact that Dr. Kraus, the leading researcher and creator of the speech stimulus, did her work at Northwestern University, USA [1, 4]. However, additional studies have been initiated in numerous languages such as Arabic, Brazilian Portuguese, French, Greek, Hebrew, Indian, Japanese, and Persian [1, 11, 13, 22–32].

In each laboratory and/or institution, researchers choose their own parameters to be applied in clinical investigation. Below are some items that should be considered when creating the assessment parameters.

3.1. Equipment and software

Sanfins and Colella-Santos analyzed which equipment and software are most often used for the assessment of speech ABR. The Biologic Navigator Pro (Natus) is the most used equipment, followed by Neuroscan equipment (Biolink). As regards software, BioMARK (Biological Marker of Auditory Processing) and Neuroscan Stim 2 are the main packages available [1].

3.2. Electrode montage

The position of the electrodes follows the traditional (click) ABR assessment. Neurophysiological responses can be recorded with an active electrode positioned on the vertex (Cz), the reference electrode on the ipsilateral mastoid, and the ground on the contralateral mastoid, using one channel with fixed surface electrodes according to the 10–20 system [33]. The equipment's automatic switching of the reference and amplifier-ground signals based on the stimulated ear should be activated. The electrode on the left ear can be connected to input 2/channel 1, and the electrode on the right ear to the ground connection cable. During the recording session, impedance should be maintained below 5 kΩ and inter-electrode impedance below 3 kΩ [22].

3.3. Stimulated ear

Research shows that there is an asymmetry in the auditory processing of verbal sounds that begins in the brainstem and extends to the auditory cortex, evident when the responses obtained from presenting acoustic stimuli to the right and left ears are compared [34, 35].

Regarding the stimulated ear, the great majority of studies performed the speech ABR assessment with stimuli presented only to the right ear, which can be explained by the right ear's advantage in encoding speech through its contralateral projection to the left hemisphere [24, 26, 29, 31, 32, 36–44].

However, some researchers have reported that stimulus presentation can be performed on the ear with the better threshold, as confirmed by pure-tone audiometry [45]. A systematic review of the applicability of speech ABR [1] found that in 14.3% of articles stimulation was performed monaurally. Regarding left- versus right-ear stimulation, there is scientific evidence that, even with a proven right-ear advantage in the processing of speech, the left ear can participate in this process, albeit with less intense electrophysiological responses [28, 36, 46]. Therefore, an analysis of responses from both ears could help in the diagnostic process as well as in therapeutic monitoring.

Importantly, a tutorial on ABRs to complex sounds notes that monaural stimulation is preferred for children, while binaural stimulation is more realistic than monaural [4].

Ahadi et al. [25] presented the sound stimulus in three conditions: monaural right, monaural left, and binaural. They showed that the magnitude and strength of speech ABR responses depend on the stimulus presentation mode and that binaural presentation of the speech syllable enables better visualization of the response.

3.4. Stimulus

The speech ABR assessment allows different types of sound stimulus to be applied. The syllable /da/ is the best known and is applied most often in studies [11, 13, 15, 22, 25, 28, 29, 32, 36, 39, 45, 47]. However, some researchers have used disyllables such as /baba/ [27] or other consonant-vowel syllables such as /ba/ [23, 30, 31].

The presentation-rate parameter is related to the duration of the sound stimulus; in the case of speech ABR, it is related to the length of the speech stimulus. The value most frequently found in the studies is 10.9/s, although the use of 11.1/s has also been reported. A literature review noted that in about 19% of previous studies on the assessment of speech ABR, this parameter is not described by the researchers [1].

Considering the duration parameter, the most frequently found values were 40 and 170 ms [1]. There is an inverse relation between presentation rate and duration: the longer the stimulus, the slower the presentation rate must be [45, 48]. Song et al. [16] used both acoustic stimuli and concluded that the short (40 ms) and long (170 ms) stimuli both reflect the coding of speech in the brainstem in a reliable way, enabling neural changes to be monitored through an objective electrophysiological measure.
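The rate/duration relation is simple arithmetic: the interstimulus interval in milliseconds is 1000 divided by the rate, and that interval must accommodate the whole stimulus. A small illustrative calculation (the helper function is our own, using the 40 ms /da/ at 10.9/s cited above):

```python
def max_rate_per_s(duration_ms, silence_ms=0.0):
    """Upper bound on presentation rate (stimuli/s) for a stimulus of
    duration_ms plus an optional silent gap (illustrative arithmetic only)."""
    return 1000.0 / (duration_ms + silence_ms)

# A 40 ms /da/ presented at 10.9/s: each cycle lasts 1000/10.9 ms,
# leaving roughly half the cycle as silence after the stimulus.
interval_ms = 1000.0 / 10.9
print(round(interval_ms, 1))         # interstimulus interval (ms)
print(round(interval_ms - 40.0, 1))  # silent gap after the stimulus (ms)
```

The same arithmetic shows why the 170 ms stimulus cannot be presented at 10.9/s: its upper bound is under 6 stimuli per second.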

The polarity of the sound stimulus is one of the most consistent parameters across studies on the assessment of speech ABR. Approximately 90.5% of previous studies have used alternating polarity [13, 22, 23, 28, 29, 39, 45, 47, 49, 50]. The reason for choosing this polarity is the reduction of stimulus artifacts and the cochlear microphonic [51].

Regarding the intensity used in the assessment of speech ABR, the literature suggests the use of 60–85 dB SPL [4, 15]. Since this is an assessment procedure, the sound should be presented at an intensity that is audible and comfortable for the patient. The majority of studies have used an intensity of 80 dB SPL [1].

The speech stimulus requires approximately 4000–6000 sweeps in order to obtain a robust and replicable response, unlike the click or tone-burst stimulus, which needs only around 2000 sweeps for a good-quality response [4]. The number of sweeps is one of the most variable parameters across studies [1]; however, the majority of researchers used two blocks of 3000 artifact-free sweeps [13, 22, 25, 28, 36, 39–41, 47, 49]. The two trials are averaged to create a calculated wave of 6000 sweeps: the traces of both recordings are added, and the responses of the resultant wave are identified and analyzed (Figure 3).

Figure 3.

Electrophysiological response representation of two blocks of 3000 sweeps and the calculated wave of 6000 sweeps. Investigator's personal data based on a subject's assessment performed with the BioMARK™ software.
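The two-block averaging and replicability check described above can be sketched in a few lines. The waveforms below are synthetic stand-ins for the two exported 3000-sweep sub-averages, and the 0.8 correlation criterion is an illustrative choice of ours, not a published standard.

```python
import numpy as np

# Hypothetical sub-averages: two blocks of 3000 artifact-free sweeps each,
# exported from the evoked-potential system as equal-length waveforms (in µV).
rng = np.random.default_rng(0)
true_response = 0.2 * np.sin(np.linspace(0, 8 * np.pi, 600))  # fake neural response
block_a = true_response + rng.normal(0, 0.05, 600)            # first 3000-sweep block
block_b = true_response + rng.normal(0, 0.05, 600)            # second 3000-sweep block

# The 6000-sweep "calculated wave" is the mean of the two equally sized blocks.
calculated = (block_a + block_b) / 2.0

# A simple replicability check: the two blocks should correlate strongly.
r = np.corrcoef(block_a, block_b)[0, 1]
print(r > 0.8)
```

Because the two blocks have the same number of sweeps, their simple mean is identical to re-averaging all 6000 sweeps at once.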

3.5. Transducer

The literature recommends against supra-aural earphones, since this type of device can increase the chance of artifacts. The recommendation is therefore to use insert earphones. When insert earphones cannot be used, the test can be performed with loudspeakers, although the responses are not as reliable as those obtained with insert earphones. The evaluator should be very careful in positioning the patient and the loudspeakers, which should be equidistant from the right and left ears [4]. In addition, a previous study presented the speech stimulus through an individual hearing aid, with excellent, artifact-free, high-quality results [23].

3.6. Assessment of condition

As in the traditional ABR assessment, patients are instructed to keep their bodies relaxed and still in order to minimize myogenic artifacts [24].

Researchers emphasize that attention can influence the FFR portion of responses to speech sounds [52]. Therefore, the majority of researchers have allowed the patient to watch a movie at reduced sound intensity or with subtitles [16, 23, 40, 41, 50], which seems to keep them quiet and relaxed during the assessment. Other researchers allow the patient to choose between watching a movie and sleeping during the assessment [24, 45].

Different parameters are in use. The parameters most cited in the literature on the assessment of speech ABR, and associated with good clinical results, are presented in Chart 1. Note that there is a well-written tutorial by Skoe and Kraus [4] with detailed, clear, and objective information on the functioning and clinical application of speech ABR. This tutorial can serve as support material for those interested in unraveling this new and effective electrophysiological assessment method.

Parameter              Setting
Equipment              Biologic Navigator Pro
Software               BioMARK
Electrode montage      Cz, M1, and M2
Stimulated ear         Right ear
Stimulus               Speech
Stimulus type          Syllable /da/
Stimulus duration      40 ms
Stimulus polarity      Alternating
Stimulus intensity     80 dB SPL
Stimulus rate          10.9/s
Number of sweeps       6000
Replicability          Two runs of 3000 sweeps
Transducer             Insert earphones
Assessment condition   Watching a movie

Chart 1.

Speech ABR parameters.
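For labs that script their acquisition logs or record-keeping, the settings in Chart 1 can be kept in one place, for example as a small dictionary. The field names below are our own invention; only the values come from the chart.

```python
# Protocol settings transcribed from Chart 1 (field names are illustrative).
SPEECH_ABR_PROTOCOL = {
    "equipment": "Biologic Navigator Pro",
    "software": "BioMARK",
    "electrode_montage": ("Cz", "M1", "M2"),
    "stimulated_ear": "right",
    "stimulus_type": "syllable /da/",
    "stimulus_duration_ms": 40,
    "polarity": "alternating",
    "intensity_db_spl": 80,
    "rate_per_s": 10.9,
    "sweeps_total": 6000,
    "sweeps_per_block": 3000,
    "transducer": "insert earphones",
    "condition": "watching a movie",
}

# Consistency check: two blocks of 3000 sweeps make up the 6000-sweep average.
assert SPEECH_ABR_PROTOCOL["sweeps_total"] == 2 * SPEECH_ABR_PROTOCOL["sweeps_per_block"]
```

Logging such a dictionary alongside each session makes it easy to verify that recordings being compared were collected under the same parameters.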

4. Criteria of normality

Before presenting the criteria of normality, it is important to understand the influence of the maturational process and gender in response to speech ABR.

4.1. Maturation

The ABR response to nonverbal stimuli is mature at around 18 months, while the speech ABR appears to mature by the age of 5 [10]. The procedure can thus be used in young and school-age children, helping in the differential diagnosis of diseases with similar symptoms [14]. Further studies are being conducted to establish normal values for different age ranges and to confirm the age of maturation of the central auditory system for verbal sounds.

According to Yamamuro et al. [39], age affects the coding of sounds, whether by a single stimulus or a complex one, and neural timing and auditory skills improve over the years. The speech ABR responses of a 5-year-old child are not very different from those of children aged 8–12 years, whereas the responses of children aged 3–4 years are very different in morphology and latency.

Wave                Latency test (ms)   Latency retest (ms)   Amplitude test (μV)   Amplitude retest (μV)
V                   6.65 ± 0.27         6.68 ± 0.27           0.13 ± 0.05           0.13 ± 0.04
A                   7.62 ± 0.35         7.62 ± 0.37           0.20 ± 0.06           0.21 ± 0.06
C                   18.60 ± 0.68        18.47 ± 0.68          0.03 ± 0.06           0.03 ± 0.05
D                   22.67 ± 0.59        22.67 ± 0.58          0.13 ± 0.07           0.14 ± 0.07
E                   31.12 ± 0.53        31.20 ± 0.57          0.22 ± 0.06           0.21 ± 0.07
F                   39.70 ± 0.57        39.71 ± 0.50          0.14 ± 0.09           0.13 ± 0.08
O                   48.26 ± 0.43        48.34 ± 0.39          0.15 ± 0.06           0.16 ± 0.06

VA measures         Test                Retest
Slope VA (μV/ms)    0.35 ± 0.11         0.37 ± 0.12
Area VA (μV × ms)   0.16 ± 0.05         0.15 ± 0.05

Table 1.

Parametric study (mean and standard deviation) with the syllable /da/ (40 ms, in silence) performed in adults with normal hearing (Song et al. [16]) on the right ear in two different conditions (test and retest).

Song et al. [16] performed their study with 45 adults with normal hearing (29 females; 19–36 years old; mean age 24.5 ± 3.0 years).


Note: Parametric study in normal adults.


4.2. Gender influence

Previous studies have shown differences in auditory perception between genders, with better female performance along the entire trajectory of the peripheral and central auditory nervous system [53, 54]. When the focus of analysis is the speech ABR, women show better responses (higher amplitudes and shorter latencies) than men, but only in the initial portion of the coding of the speech stimulus [55]. Differences in speech ABR responses between genders have been explained by the premise that the synapses of the afferent and efferent auditory systems are strongly influenced by the activity of the hormone estrogen [56].

4.3. Normative data

Some studies are used as parametric models for the analysis of speech ABR. Normative data for young adults (19–36 years old) with normal hearing, with analysis of all the waves, are presented in Table 1 [16]. Two studies of children and adolescents are also presented: (i) children between 8 and 12 years of age with normal hearing, with analysis of waves V, A, C, and F and the VA complex, in Table 2 [15], and (ii) children and adolescents between 8 and 16 years of age with normal hearing, with examination of all the waves, in Table 3 [22].

Wave                Latency (ms), right ear   Amplitude (μV), right ear
V                   6.61 ± 0.25               0.31 ± 0.15
A                   7.51 ± 0.34               0.65 ± 0.19
C                   17.69 ± 0.48              0.36 ± 0.09
F                   39.73 ± 0.61              0.43 ± 0.19

VA measures (right ear)
Slope VA (μV/ms)    0.13 ± 0.05
Area VA (μV × ms)   1.70 ± 1.23

Table 2.

Parametric study (mean and standard deviation) with the syllable /da/ (40 ms, in silence) performed in children with normal hearing (Russo et al. [15]) on the right ear.

Russo et al. [15] studied 36 and 38 children and adolescents (17 females) with normal hearing (8–12 years old).


Note: Parametric study in normal children.


The majority of studies of speech ABR assessment were performed with monaural stimulation of the right ear [13, 24, 29, 39, 49, 50]. The choice of assessing only the right ear is related to the advantage of the left hemisphere in processing language sounds. In addition, earlier research has shown no statistically significant differences between right- and left-ear responses in subjects with normal hearing and typical development. However, many conditions remain to be studied through the speech ABR, and it is important to consider whether there are differences in responses between the ears.

Thereby, responses from the right and left ears were reported for a population of children and adolescents with normal hearing and normal development, so that they can be used as a comparison with the responses obtained in subjects with different pathologies.

Note that the parametric studies provide a direction for researchers. It is fundamental to know the collection and analysis parameters of each reference author before using these data. Each research center or clinic should carry out its own normative study for the different age groups.

Wave                Latency, right ear (ms)   Latency, left ear (ms)   Amplitude, right ear (μV)   Amplitude, left ear (μV)
V                   6.50 ± 0.21               6.51 ± 0.21              0.12 ± 0.06                 0.11 ± 0.06
A                   7.46 ± 0.33               7.48 ± 0.36              0.22 ± 0.09                 0.21 ± 0.07
C                   18.33 ± 0.42              18.41 ± 0.46             0.10 ± 0.08                 0.11 ± 0.10
D                   22.21 ± 0.66              22.36 ± 0.44             0.14 ± 0.09                 0.13 ± 0.08
E                   30.89 ± 0.50              30.78 ± 0.61             0.30 ± 0.39                 0.23 ± 0.09
F                   39.37 ± 0.55              39.20 ± 0.47             0.24 ± 0.29                 0.19 ± 0.09
O                   48.00 ± 0.75              47.95 ± 0.54             0.21 ± 0.30                 0.16 ± 0.12

VA measures         Right ear           Left ear
Slope VA (μV/ms)    0.37 ± 0.14         0.35 ± 0.13
Area VA (μV × ms)   0.33 ± 0.13         0.31 ± 0.13

Table 3.

Parametric study (mean and standard deviation) with the syllable /da/ (40 ms, in silence) performed in children and adolescents with normal hearing (Sanfins et al. [22]) on the right and left ears.

Sanfins et al. [22] studied 40 children and adolescents (25 females) with normal hearing (8–16 years old; mean age 10.95 ± 2.0 years).


Note: Parametric study in normal children and adolescents.
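As a sketch of how such normative data might be applied in practice, the snippet below flags a measured wave V latency against the adult right-ear test values from Table 1 (Song et al. [16]). The 2-SD cutoff is a common but purely illustrative criterion of ours, not one prescribed by the source; each clinic would substitute its own norms.

```python
# Normative values for wave V (Table 1, Song et al. [16], right ear, test run).
NORM_WAVE_V_MEAN_MS = 6.65
NORM_WAVE_V_SD_MS = 0.27

def latency_z(measured_ms, mean_ms, sd_ms):
    """Z-score of a measured latency relative to a normative mean and SD."""
    return (measured_ms - mean_ms) / sd_ms

def is_delayed(measured_ms, mean_ms, sd_ms, criterion_sd=2.0):
    """Flag a latency more than criterion_sd SDs above the normative mean."""
    return latency_z(measured_ms, mean_ms, sd_ms) > criterion_sd

# A hypothetical patient with wave V at 7.40 ms:
z = latency_z(7.40, NORM_WAVE_V_MEAN_MS, NORM_WAVE_V_SD_MS)
print(round(z, 2))                                               # ~2.78
print(is_delayed(7.40, NORM_WAVE_V_MEAN_MS, NORM_WAVE_V_SD_MS))  # True
```

The same two functions apply unchanged to any wave, ear, or age group once the appropriate normative mean and SD are supplied.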


5. Clinical applicability

5.1. Auditory training

Auditory training can induce neurophysiological changes that can be observed through speech ABR evaluation. According to Killion et al. [57], an auditory training program promotes gains in speech perception both in quiet and in noisy environments and improves short-term memory skills and attentional processes.

According to Hayes et al. [58], children with learning problems can benefit from an auditory rehabilitation program based on auditory training. Research has shown that these children have delayed speech ABR responses, more specifically in the onset portion (wave A), and that the assessment of speech ABR may be able to ascertain whether an auditory training program was effective, monitoring the benefits of rehabilitation in children and in young adults [15, 16].

Further studies are needed in the elderly population to determine whether this type of assessment can be effective for monitoring this group. Anderson et al. [49] reported that the elderly usually have a hearing loss, so an auditory training program should be recommended along with the selection and fitting of a hearing aid suited to each elderly person's needs.

Auditory training and amplification are ideal for improving auditory function and, especially, the process of speech perception. In this context, the assessment of speech ABR could play an important role in demonstrating quickly, clearly, and objectively the real gains of the interventions. Researchers have emphasized that the speech ABR is considered a biological marker of auditory training, able to identify subjects who will benefit from an auditory training program [58, 59].

5.2. The aging process

The elderly have reduced neural synchrony in the encoding of speech sounds, especially when the speech sounds are produced in the presence of background noise. The assessment of speech ABR is able to monitor the difficulty in understanding speech in noise reported by the elderly. The hearing aid fitting process allows speech sounds to be heard more clearly; accordingly, changes in the morphology and latency values of speech ABR responses have been observed [24, 36, 45].

5.3. Differential diagnosis

Research shows that the literacy process depends on an efficient functioning of the auditory processing in the brainstem. The assessment of speech ABR could accurately predict early and possible changes in the processes of reading, writing, and literacy in preschool children [41, 60, 61].

Children with learning, speech, and hearing impairments not only suffer from background noise and competing sounds but also have some difficulty perceiving speech sounds in quiet environments [62]. This difficulty can arise from changes in temporal processing that impact the perception of speech. In this context, the speech ABR is a biological marker of auditory processing disorder, able to identify children predisposed to these changes [4].

Children with dyslexia often have impairments in the perception of speech sounds that can affect their reading skills [63]. According to Hornickel and Kraus [64], good readers have a stable neural representation of sound, and children who have inconsistent neural responses are likely at a disadvantage when learning to read. Thus, the speech ABR can help identify and distinguish these children, enabling a more appropriate intervention.

In addition, speech ABR can be applied in diagnosing and categorizing children with learning disability into different subgroups, assessing the effects of aging on central auditory processing of speech, and assessing the effects of central auditory deficits in hearing aid and cochlear implant users [11].

Understanding the neural processing of speech sounds at the brainstem level may provide knowledge regarding the central auditory processes involved in normal hearing subjects and also in clinical populations [10]. Moreover, altered responses of speech ABR may be associated with impaired speech perception in noise. These changes might have a negative impact on communication and have serious consequences for academic success [8].

5.4. Musician

Currently, there is increasing interest in the influence of musical experience on language processing. Intense long-term musical training seems to cause anatomical and physiological changes and to improve working memory in cognitive processes, the control of emotions, and the perception of sound stimuli [65].

The brainstem has an important role in the encoding of speech sound stimuli and in temporal processing [66]. Temporal processing contributes to the perception of consonant duration and to the identification of notes and musical scales [66, 67]. The literacy process, including reading, writing, and language, is also influenced by temporal processing [68]. The detection of small and rapid changes in a sound stimulus is associated with rhythm, stimulus frequency, phonemic discrimination, duration, and pitch discrimination [69]. Understanding how music influences the encoding of speech sounds can provide more information about the learning process [64]. One way to analyze this is through speech ABR responses.

5.5. History of otitis media

Otitis media is one of the most common childhood diseases, affecting about two-thirds of children in the first 5 years of life [70, 71]. This period is important for the development of oral and written language. Otitis media can cause functional sequelae in the middle ear structures and can induce a temporary mild-to-moderate hearing loss, which can last for a few days or for several weeks [72, 73]. Concomitantly, the accumulation of fluid in the middle ear interferes with speech perception, causing a distortion in the perception of acoustic signals and reducing the speed and accuracy of verbal decoding [74]. When hearing fluctuation occurs early in life, that is, during the critical period for linguistic development, the acquisition of speech and language is limited. As a result, communication problems may appear, such as impaired language development, auditory processing deficits, cognitive impairment, impaired psychosocial development, and impairment in the acquisition of literacy skills [75, 76].

Inadequate auditory stimulation in childhood can lead to long-term alterations of the auditory structures in the central auditory nervous system [73]. Research shows that children who suffered from secretory otitis media in their first 6 years of life and underwent surgery for bilateral ventilation tube placement demonstrate neurophysiological modifications of speech perception when compared with typically developing children and adolescents.

6. Conclusion

The assessment of speech ABR could accurately predict early and possible changes in the processes of reading, writing, and literacy in preschool children.

The speech ABR is objective, fast, and can be applied from early childhood. It is equally effective in different languages and can provide differential diagnoses of diseases with similar symptoms, as an effective biomarker of auditory processing disorders that may be present in various diseases, such as dyslexia, specific language impairment, hearing loss, auditory processing disorders, otitis media, and scholastic difficulties.

It is a field with great research potential, with different approaches to assist in the detection, treatment, and monitoring of various diseases.

Acknowledgments

This work was supported by the Project “Integrated system of tools for diagnostics and telerehabilitation of sensory organs disorders (hearing, vision, speech, balance, taste, smell)” acr. INNOSENSE, co-financed by the National Centre for Research and Development (Poland), within the STRATEGMED program.

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite and reference


Milaine Dominici Sanfins, Piotr H. Skarzynski and Maria Francisca Colella-Santos (March 29th 2017). Speech-Evoked Brainstem Response. In: Stavros Hatzopoulos (Ed.), Advances in Clinical Audiology. IntechOpen. DOI: 10.5772/66206.
