Open access

The Importance of Acoustic Reflex in Speech Discrimination

Written By

Kelly Cristina Lira de Andrade, Silvio Caldas Neto and Pedro de Lemos Menezes

Submitted: 21 October 2010 Published: 13 June 2011

DOI: 10.5772/17480

From the Edited Volume

Speech Technologies

Edited by Ivo Ipsic

1. Introduction

Communication is essential for social interaction, and its impairment may interfere significantly with school and professional performance, as well as with social life. When we speak, we allow others to know our thoughts, feelings and needs; when we listen, we come to know theirs.

Hearing, in turn, is the main sense responsible for the acquisition of speech and language in children. Impairment of this function may compromise not only language, but also social, emotional and cognitive development (Tiensoli et al., 2007).

2. Noise

The word noise comes from the Latin “rugitus”, which means roar. Acoustically, noise is composed of numerous sound waves whose amplitudes and phases are distributed anarchically, producing an unpleasant sensation, in contrast to music. It is any undesirable sound disturbance that interferes with what one wishes to hear. From the standpoint of physics, noise is defined as a sound composed of a large number of acoustic vibrations with randomly distributed amplitude and phase relationships (Ferreira, 1975).

Noise can be divided into two categories: impact (or impulsive) and continuous (or intermittent). In the first case, the noise consists of peaks of acoustic energy lasting less than one second, occurring at intervals greater than one second. Continuous or intermittent noise is any noise that is not classified as impact or impulsive.
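
As a minimal illustration of this classification rule (the function name and the example inputs are our own assumptions, not part of the source; only the one-second limits come from the text):

```python
def classify_noise(peak_duration_s, interval_s):
    """Apply the rule above: peaks of acoustic energy lasting less than one
    second, separated by intervals greater than one second, are impact or
    impulsive noise; anything else is treated as continuous or intermittent."""
    if peak_duration_s < 1.0 and interval_s > 1.0:
        return "impact/impulsive"
    return "continuous/intermittent"

print(classify_noise(0.2, 3.0))   # impact/impulsive (e.g. a hammer blow)
print(classify_noise(5.0, 0.5))   # continuous/intermittent
```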

Nepomuceno (1994) defines noise as an audible phenomenon whose component frequencies cannot be discriminated because they differ from one another by amounts smaller than the auditory system can detect. It produces unpleasant auditory sensations and is therefore distinct from sound. Noise is an interesting study subject because it directly affects people's health, degrades the environment in which they live and causes social problems, in that its effects alter and degrade social relationships (Souza, 2004).

Speech intelligibility is defined as the ratio between words spoken and words understood, expressed as a percentage. For communication to be effective and intelligible, speech intelligibility must be higher than 90% (Nepomuceno, 1994). In noisy environments such as workplaces, schools, means of transportation, recreation areas or any other place with a large number of people, communication is significantly compromised.
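
As a minimal sketch of this definition (the function name and the word counts are illustrative, not taken from the source):

```python
def speech_intelligibility(words_understood, words_spoken):
    """Speech intelligibility: percentage of spoken words that were understood."""
    return 100.0 * words_understood / words_spoken

# Communication is considered effective when intelligibility exceeds 90%
# (Nepomuceno, 1994); the counts below are purely illustrative.
score = speech_intelligibility(words_understood=46, words_spoken=50)
print(f"{score:.0f}% -> {'effective' if score > 90 else 'compromised'}")
```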

3. Acoustic reflex

The hearing mechanism is described as consisting of three divisions: the outer, middle and inner ear. Each compartment has the particular function of allowing sound to be transmitted, amplified and finally transformed into electrical impulses that are conveyed to the cortex by the auditory nerve. Middle ear structures include the tensor tympani and the stapedius, the smallest striated muscles in the body.

Figure 1.

Middle ear

Contraction of the stapedius and tensor tympani muscles may also be produced by extra-acoustic stimulation, such as painful and tactile sensations, and voluntary contraction may occur in some subjects. Bujosa (1978) found that the stimuli triggering stapedius reflexes can be sonorous (unilateral, contralateral, ipsilateral and bilateral), tactile (on the skin of the tragus and upper eyelid) and electric (the latter are not observable in cases of otitis media, otosclerosis or facial paralysis). Stimuli that trigger tensor tympani reflexes are sonorous, tactile (airflow) and stimulation of the nasal mucosa with ammonia vapors.

Several studies have been conducted on the function of the intratympanic muscles, and some correlate it with the activity of hearing centers. Lidén, Peterson & Harford (1970) state that, for acoustic stimuli, the primary response comes from the stapedius muscle, which exhibits a lower reflex threshold to sound pressure than the tensor tympani. Which middle ear muscle activity can be observed depends on the method used; with immittance measurements (imitanciometry), only stapedius muscle function can be determined with precision. For this reason, the acoustic reflex is also called the stapedius reflex (Lopes Filho, 1975).

The neural circuit of the acoustic reflex is located in the lowest part of the brainstem. In the ipsilateral pathway, the acoustic stimulus is transmitted from the cochlea to the acoustic nerve and then to the ipsilateral ventral cochlear nucleus, and from there, through the trapezoid body, to the facial motor nucleus and the ipsilateral stapedius. In the contralateral pathway, transmission runs from the ventral cochlear nucleus to the medial superior olivary complex, crosses to the contralateral facial motor nucleus, and then proceeds through the facial nerve to the contralateral stapedius. Although this is the most widely accepted description, the circuit is believed to involve much more complex multisynaptic connections, in which higher auditory pathways act on the acoustic reflex, inhibiting or enhancing it (Borg, 1973; Robinette & Brey, 1978; Downs & Crum, 1980; Northern, Gabbard & Kinder, 1989; Northern & Downs, 1989).

The motor neurons involved are located near, but not within, the facial nucleus. They are separated anatomically into functional bundles and are capable of ipsilateral, contralateral or bilateral responses, so that stimulating one ear can elicit bilateral responses from the facial nerve and, consequently, from the stapedius. In 1992, Colleti et al. observed that pathologies of the ascending auditory pathways may interfere with the acoustic reflex, whose activity is also thought to be regulated by cortical and subcortical structures. Thus, the acoustic reflex and the cochlear-olivary system would receive tonic facilitative influence from the hearing centers (Carvallo, 1997). Therefore, for the acoustic reflex to occur, the afferent, association and efferent pathways must be intact.

Figure 2.

Noise spectrum of speech

Andrade et al. (2011) conducted a study involving 18 female participants, divided into two groups: acoustic reflexes present (20 ears) and absent (16 ears). A total of 180 disyllabic words were presented, 90 to each ear, in random order at a fixed intensity of 40 dB above the three-tone average. At the same time, three types of noise (white, pink and speech) were presented ipsilaterally, one at a time, at three intensities: 40, 50 and 60 dB above the three-tone average.

For the three types of noise presented at 40 and 50 dB above the three-tone average, speech discrimination was better in the group with acoustic reflexes present. At 60 dB, with white and pink noise, a similar percentage of hits was observed in the two groups. The best discrimination performance at all intensities, including 60 dB, was obtained with speech noise. The study concluded that the presence of the reflex helps in discriminating speech sounds, especially in noisy environments.

The white noise described in the aforementioned study consists of frequencies between 10 and 10,000 Hz presented with equal intensity, with energy maintained at high frequencies, and is effective up to 6000 Hz. Pink noise is filtered white noise composed of frequencies between 500 and 4000 Hz, the band in which it is most effective (Menezes, 2005). To construct the speech noise, a recording was made in which several individuals talked at the same time, simulating a noisy environment; its frequencies ranged only from 0 to 4 kHz. The noise spectrum was analyzed by fast Fourier transform, as shown in Figure 2.
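
As an illustrative sketch only (not the authors' procedure), the three noise types and their spectra could be approximated as follows; the sampling rate, the number of simulated "talkers" and everything other than the band edges quoted in the text are our assumptions.

```python
import numpy as np

fs = 44100                                   # sampling rate in Hz (assumed)
n = fs                                       # one second of noise
freqs = np.fft.rfftfreq(n, 1 / fs)

# White noise: random amplitudes and phases, flat spectrum
white = np.random.normal(0.0, 1.0, n)

# "Pink" noise as described in the text: white noise band-limited
# to roughly 500-4000 Hz (filtered here in the frequency domain)
pink_like = np.fft.irfft(np.fft.rfft(white) * ((freqs >= 500) & (freqs <= 4000)), n)

# Speech-noise surrogate: several independent "talkers" summed,
# then low-pass limited to 4 kHz as in the study
babble = np.random.normal(0.0, 1.0, (6, n)).sum(axis=0)
speech_noise = np.fft.irfft(np.fft.rfft(babble) * (freqs <= 4000), n)

# Magnitude spectrum via the fast Fourier transform (cf. Figure 2)
magnitude_db = 20 * np.log10(np.abs(np.fft.rfft(speech_noise)) + 1e-12)
```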

4. New findings

A recent study conducted by Andrade et al. (2011) compared three groups of normal hearers with pure-tone thresholds up to 20 dB HL (ANSI, 1969) at 250, 500, 1000, 2000, 4000, 6000 and 8000 Hz, with inter-ear threshold differences less than or equal to 10 dB and ages between 18 and 55 years. The exclusion criteria were: exposure to occupational or leisure noise, ear surgery, more than three ear infections within the last year, use of ototoxic medication and a family history of hereditary deafness. In the control group, the subjects showed acoustic reflexes at levels up to 90 dB SPL (70 to 90 dB SPL). In the second group, the normal hearers exhibited reflexes between 95 and 110 dB SPL (Group 2), and in the third group the subjects had no stapedius reflex (Group 3). Each group consisted of 8 female participants, totaling 16 ears per group. As in Andrade et al. (2011), speech noise was used at intensities of 40, 50 and 60 dB above the three-tone average.

Using TDH39 headphones in an acoustic booth, words and noise were presented ipsilaterally, and only hits, that is, words correctly repeated, were counted. This test used 120 disyllabic words, 60 for each ear, presented in random order at a fixed intensity of 40 dB above the three-tone average. Noise was used at intensities of 40, 50 and 60 dB above the three-tone average, that is, 20 words were presented per intensity. Responses characterized as distortions were stored in a databank and will be analyzed in future studies. If the participant did not respond or responded incorrectly, the word was repeated once after the following word on the list had been presented.
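
A hypothetical sketch of this presentation and scoring logic is given below; the helper present_and_score, its simulated response probability and the example word list are our own placeholders, not part of the study.

```python
import random

def present_and_score(word, word_level_db, noise_level_db):
    """Placeholder for presenting one word against noise and scoring it;
    here the chance of a hit simply falls as the noise level rises."""
    snr_db = word_level_db - noise_level_db
    return random.random() < max(0.0, min(1.0, 0.5 + 0.025 * snr_db))

def run_ear(pta_db, words, noise_offsets=(40, 50, 60), words_per_level=20):
    """Present words at PTA + 40 dB with noise at PTA + offset dB for each
    offset; return the number of hits per noise intensity."""
    word_level = pta_db + 40
    shuffled = random.sample(words, len(words))      # random word order
    hits, i = {}, 0
    for offset in noise_offsets:
        hits[offset] = 0
        for _ in range(words_per_level):
            if present_and_score(shuffled[i], word_level, pta_db + offset):
                hits[offset] += 1
            i += 1
    return hits

words = [f"word{k}" for k in range(60)]              # 60 disyllabic words per ear
print(run_ear(pta_db=10, words=words))               # e.g. {40: 17, 50: 14, 60: 10}
```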

The sample studied was composed of 24 female volunteers. The age of the control group subjects varied from 28.5 to 39.5 years, with a mean age of 32.1 years and standard deviation of 9.8. In the second group, the age range was 27.1 to 40 years, with a mean age of 33.1 years and standard deviation of 10.8. In the group with no reflexes present, the age range was 26.3 to 37.2 years, with a mean age of 30.7 years and standard deviation of 6.5.

Evaluation at each speech-noise intensity separately showed behavior similar to that of the overall means; thus, only the mean results are shown.

The results of this new study, illustrated in Figure 3, show a slightly higher mean percentage of hits for the control group (normal hearers with reflexes up to 90 dB SPL) than for Group 2 (normal hearers with reflexes between 95 and 110 dB SPL); however, the Mann-Whitney U test did not reveal a statistically significant difference (p = 0.28). When these two groups (Control and Group 2) were compared with the group of normal hearers with no stapedius reflexes (Group 3), the differences were significant, with p-values of 0.005, 0.000 and 0.031 for the three noise intensities, as shown in the following table:

                   Noise (+40 dB)   Noise (+50 dB)   Noise (+60 dB)
Mann-Whitney U         83.500           54.000          103.000
Wilcoxon W            219.500          190.000          239.000
Z                      -3.533           -3.639           -2.193
P value                 0.005            0.000            0.031

Table 1.

P-value for noise intensities
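
For reference, a between-group comparison of this kind can be computed with scipy, as sketched below; the per-ear hit percentages are invented placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Placeholder hit percentages for 16 ears per group at one noise intensity
# (NOT the study data): reflexes present vs. reflexes absent.
reflex_present = [85, 80, 90, 75, 88, 82, 79, 86, 91, 84, 78, 83, 87, 81, 89, 76]
reflex_absent  = [60, 55, 65, 58, 62, 57, 63, 59, 61, 56, 64, 54, 66, 53, 67, 52]

# Two-sided Mann-Whitney U test for two independent groups
u_stat, p_value = mannwhitneyu(reflex_present, reflex_absent, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
```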

Figure 3.

Mean percentage of hits per group in the speech discrimination test with noise, irrespective of ear or noise intensity used.

Thus, with speech noise, enhanced discrimination ability was observed at all intensities, including 60 dB, for the groups with reflexes present, compared with Group 3, which had no reflexes. In other words, a significant difference emerges only when a group of individuals without acoustic reflexes is compared with groups in which the reflex is present, in the task of discriminating speech in the presence of speech noise (Andrade et al., 2011).

5. Speech tests

The studies described above underscore the importance of word recognition tests in audiological diagnosis. The audiological battery is considered incomplete without measures of speech recognition.

The ability to understand speech is one of the most important measurable aspects of human auditory function. Speech audiometry (logoaudiometry) is a means of evaluating speech recognition under adequately controlled conditions (Penrod, 1999). Tests that measure auditory performance in speech recognition tasks use isolated stimuli, monosyllabic and disyllabic words being the most widely used (Lacerda, 1976).

Speech recognition draws on a combination of acoustic, linguistic, semantic and circumstantial cues (Gama, 1994). Under favorable conditions these cues are present in excess, and some can be disregarded; for message transmission to remain effective, there must be redundancy among the acoustic cues, which the hearer draws on according to the situation and context of the communication. This is what occurs, for example, in conversations held in noisy environments.

In audiological practice, it is common to find subjects with the same degree and configuration of sensorineural hearing loss who exhibit substantially different speech perception skills. There is a relatively weak relationship between pure-tone thresholds and speech intelligibility for individuals with sensorineural hearing loss, and factors other than auditory sensitivity likely interfere with speech perception (Yoshioka & Thornton, 1980).

The following factors have been offered to explain the difficulty that patients with sensorineural hearing loss have in understanding speech in noise: the masking effect of the noise itself; loss of binaural integration, which normally improves the effective signal-to-noise ratio by 3 dB or more; difficulties in temporal and frequency resolution; reduction of the dynamic range of hearing; and the masking of middle- and high-frequency information (consonants) by low-frequency energy (vowels). The negative influence of noise may also stem from the fact that a monaural input to the auditory system does not allow the noise-reduction processing that is possible in a binaural system (Almeida & Iorio, 2003).

Most individuals with hearing loss at high frequencies (above 3000 Hz) have little or no difficulty understanding speech in quiet environments, since such situations contain an excess of cues they can use. In noisy environments or under adverse conditions, however, such as when speech is distorted, these subjects may have numerous difficulties with speech intelligibility, because the number of available cues falls drastically and they are forced to rely on only those cues that remain in the particular situation.

Thus, it is extremely important to study hearing performance under less favorable conditions, for example by simulating conversation in an environment where several individuals are talking at the same time, and to determine which processes interfere with the speech perception of these subjects. This justifies the concern with measuring speech recognition capacity not only in acoustically isolated situations, in which concurrent stimuli are under control, but also in situations resembling real life (Schochat, 1994).

Two types of noise are recommended in the evaluation of speech perception: competitive speech noise and environmental noise, with competitive speech noise exerting a more significant effect on speech perception than environmental noise in general (Sanders, 1982).

In 2004, Caporali & Silva conducted a study to determine the effects of hearing loss and age on speech recognition in the presence of noise, using two types of noise. Three experimental groups were organized: adults without auditory alterations, adults with high-frequency hearing loss, and elderly subjects with an audiometric configuration similar to that of the group with hearing loss. All subjects performed a speech recognition task in silence, in the presence of an amplified white-noise spectrum and in the presence of “cocktail party” noise, at the same signal-to-noise ratio (0 dB), in both ears. The results showed that noise interferes negatively with speech recognition in all groups. Normal-hearing subjects performed better than the groups with hearing loss, and the elderly group performed worst, especially with the “cocktail party” noise. All subjects exhibited better results in the second ear tested, showing a learning effect. These findings demonstrate that both age and hearing loss contributed to the poor performance of the elderly in speech perception in noise, and that “cocktail party” noise is suitable for this type of investigation.
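
A minimal sketch of how a masker can be scaled to a target signal-to-noise ratio, such as the 0 dB used in that study, is shown below; the synthetic signals are placeholders for recorded speech and babble, and the function is our own illustration rather than the authors' procedure.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db=0.0):
    """Scale `noise` so that the speech-to-noise ratio (in RMS terms)
    equals `snr_db`, then return the mixture."""
    noise = noise[:len(speech)]
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    # Target noise RMS is rms_speech / 10**(snr_db / 20)
    gain = rms_speech / (rms_noise * 10 ** (snr_db / 20))
    return speech + gain * noise

fs = 16000
speech = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # 1 s tone as a stand-in
babble = np.random.normal(0.0, 1.0, fs)                  # noise stand-in
mixture = mix_at_snr(speech, babble, snr_db=0.0)         # 0 dB SNR condition
```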

6. Noise in the work environment

It is well known that physical and mental well-being is important for good performance in both professional and social activities. Many measures have been taken to provide better occupational conditions for workers; however, a number of aspects are still neither recognized nor valued.

It is very important to carry out research that clarifies the extra-auditory effects of noise on human beings, increases concern and efforts to eliminate this risk agent, and leads to the adoption of effective preventive and curative measures that provide workers with a better quality of life.

Hearing loss may result if individuals remain in high-noise environments, and because it develops slowly and gradually, its prevention has not been given proper attention (Ayres & Corrêa, 2001). Loudness recruitment, tinnitus and poor discrimination of the speech signal are also observed under unfavorable environmental conditions. In addition to these auditory problems, elevated noise may cause other disorders, possibly affecting other organs of the body and provoking problems such as headaches; digestive disorders; restless sleep; lack of sleep; impaired attention and concentration; buzzing in the ears or head; vertigo and loss of balance; cardiac and hormonal alterations; and anxiety, nervousness and increased aggressiveness (Kwitko, 2001).

In a retrospective study, Miller (1972) observed that groups of workers with different lengths of noise exposure (in years) showed the greatest losses at 4000 Hz. These initial alterations would not be noticed by the individuals themselves; with increasing exposure time, other frequencies (500 Hz to 3000 Hz) would become involved, with damaging consequences for speech reception.

The physical working environment must be well planned, obeying health and safety norms, with adequate illumination and space, and must be acoustically treated, which is not always the case. In most companies, workers are exposed to an unfavorable acoustic environment owing to the background noise generated by different sound sources, such as air conditioning, conversation, the movement of people and equipment, and rooms without acoustic treatment, which promote sound reverberation.

For workers with hearing loss, and even for those with normal hearing, attempting to avoid background noise in the work environment is undeniably challenging and often frustrating. Symptoms related to noise exposure include anger, anxiety, irritability, emotional stress, lower morale and motivation, distraction, mental fatigue and sleep disturbances (Kryter, 1971). Task performance may also be significantly affected by the presence of sound: several studies suggest that noise reduces overall precision rather than the total amount of work done, and is more apt to affect the performance of complex tasks than simple ones (Miller, 1974).

The absence of acoustic reflexes is an additional negative factor in noisy work environments, given that the speech discrimination capacity of individuals with this impairment is further reduced, compounding all the previously described problems. More studies correlating these variables are needed in order to contribute to better working environments for individuals with abnormal acoustic reflexes.

7. Conclusion

Given the recognized importance of the acoustic reflex for communication, new studies should investigate the nuances of this relationship thoroughly, in order to develop technologies that allow individuals to communicate adequately in noisy work environments, thereby avoiding accidents and ultimately improving their quality of life.

Acknowledgments

We are grateful to the professors of UNCISAL for the auditory evaluation of the patients, and we thank all the students of the Acoustic Instrumentation Laboratory.

References

1. ALMEIDA, K. & IORIO, M. C. M. (2003). Próteses Auditivas: Fundamentos Teóricos e Aplicações Clínicas. Lovise, São Paulo, Brazil.
2. ANDRADE, K. C.; CALDAS NETO, S.; CAMBOIM, E. D.; SOARES, I. A.; VELERIUS, M. & MENEZES, P. L. (2011). The importance of acoustic reflex for communication. American Journal of Otolaryngology. In press.
3. AYRES, O. D. & CORRÊA, P. A. J. (2001). Manual de Prevenção de Acidentes do Trabalho. Ed. Atlas S.A., São Paulo, Brazil.
4. BORG, E. (1973). On the neuronal organization of the acoustic middle ear reflex. A physiological and anatomical study. Brain Research, 49, 101-123.
5. BUJOSA, G. C. (1978). Impedanciometría. In: VIÑALS, R. P., Progresos en otorrinolaringología. Salvat, Barcelona, Spain, 45-54.
6. CAPORALI, S. A. & SILVA, J. A. (2004). Reconhecimento de fala no ruído em jovens e idosos com perda auditiva. Revista Brasileira de Otorrinolaringologia, 70(4), São Paulo, Brazil.
7. CARVALLO, R. M. M. & ALBERNAZ, M. L. P. (1997). Reflexos acústicos em lactentes. Acta Awho, 16(3), 103-108.
8. COLLETI, V.; FIORINO, F. G.; VERLATO, M. D. & CARNER, M. (1992). Acoustic reflex in frequency selectivity: brain stem auditory evoked response and speech discrimination. In: KATZ, K.; STECHER, N. A. & HENDERSON, D. (Eds.), Central Auditory Processing: A Transdisciplinary View. Mosby Year Book, St. Louis.
9. DOWNS, D. W. & CRUM, M. A. (1980). The hyperactive acoustic reflex: four case studies. Archives of Otolaryngology, 106, 401-404.
10. FERREIRA, A. B. (1975). Novo Dicionário da Língua Portuguesa. Nova Fronteira, Rio de Janeiro, Brazil.
11. GAMA, M. R. (1994). Percepção da fala: uma proposta de avaliação qualitativa. Pancast, São Paulo, Brazil.
12. HANDEL, S. (1993). The physiology of listening. In: HANDEL, S., Listening: An Introduction to the Perception of Auditory Events. MIT Press, Cambridge, Massachusetts, chap. 12, 467-468.
13. KAWASE, T.; HIDAKA, H. & TAKASAKA, T. (1997). Frequency summation observed in the human acoustic reflex. Hearing Research, 108(1-2), 37-45.
14. KRYTER, K. D. (1971). The Effects of Noise on Man. Academic Press, London and New York, 633 pp.
15. KWITKO, A. (2001). Coletânea: PAIR, PAIRO, Ruído, EPI, EPC, PCA, CAT, Perícias. Ed. LTr, São Paulo, Brazil.
16. LACERDA, A. P. (1976). Audiologia Clínica. Guanabara Koogan, Rio de Janeiro, Brazil.
17. LIDÉN, G.; PETERSON, J. L. & HARFORD, E. R. (1970). Simultaneous recording of changes in relative impedance and air pressure during acoustic and nonacoustic elicitation of the middle-ear reflexes. Acta Otolaryngologica, 263, 208-217.
18. LINARES, A. E. & CARVALLO, R. M. M. (2004). Latência do reflexo acústico em crianças com alteração do processamento auditivo. Arquivos de Otorrinolaringologia, 8(1), 11-18.
19. LOPES FILHO, O. C. & SCHIEVANO, S. R. (1975). Predição do limiar auditivo por meio da impedanciometria. Revista Brasileira de Otorrinolaringologia, 41, 238-246.
20. MENEGUELLO, J.; DOMENICO, M. L. D.; COSTA, M. C. M.; LEONHARDT, F. D.; BARBOSA, L. H. & PEREIRA, L. D. (2001). Ocorrência de reflexo acústico alterado em desordens do processamento auditivo. Revista Brasileira de Otorrinolaringologia, 67, 45-48.
21. MENEZES, P. L.; NETO, S. C. & MOTTA, M. (2005). Biofísica da Audição. 1st edition, Lovise, São Paulo, Brazil.
22. METZ, O. (1952). Limiar da contração reflexa dos músculos da orelha média e recrutamento de volume. Archives of Otolaryngology (Chicago), 55, 536-543.
23. MILLER, J. D. (1972). Effects of noise on the quality of human life. Central Institute for the Deaf, St. Louis (special contract report prepared for the Environmental Protection Agency, Washington, D.C.). Occupational Exposure to Noise, NIOSH, USA.
24. MILLER, J. D. (1974). Effects of noise on people. Journal of the Acoustical Society of America, 56, 729-769.
25. NEPOMUCENO, L. (1994). Elementos de Acústica Física e Psicoacústica. Edgard Blücher, São Paulo, Brazil.
26. NORTHERN, J. L.; GABBARD, S. A. & KINDER, D. L. (1989). Tratado de Audiologia Clínica. Manole, São Paulo, Brazil, 483-503.
27. NORTHERN, J. L. & DOWNS, M. P. (1989). Audição em Crianças. Manole, São Paulo, Brazil.
28. PENROD, J. P. (1999). Testes de discriminação vocal. In: KATZ, J., Tratado de Audiologia Clínica, 4th ed. Manole, São Paulo, Brazil, 146-162.
29. ROBINETTE, M. S. & BREY, R. H. (1978). Influence of alcohol on the acoustic reflex and temporary threshold shift. Archives of Otolaryngology, 104, 31-37.
30. SANDERS, D. A. (1982). Aural Rehabilitation. Prentice Hall, New Jersey, USA.
31. SCHOCHAT, E. (1994). Percepção de fala: presbiacusia e perda auditiva induzida pelo ruído. Doctoral thesis, FFLCH/USP, São Paulo, Brazil.
32. SIMMONS, F. B. (1964). Perceptual theories of middle ear muscle function. Annals of Otology, Rhinology and Laryngology, 73, 724-739.
33. SOUZA, D. S. (2004). Instrumentos de gestão da poluição sonora para a sustentabilidade das cidades brasileiras. Doctoral thesis, UFRJ, Rio de Janeiro, Brazil.
34. TIENSOLI, L. O.; GOULART, L. M. H. F.; RESENDE, L. M. & COLOSIMO, E. A. (2007). Triagem auditiva em hospital público de Belo Horizonte, Minas Gerais, Brasil: deficiência auditiva e seus fatores de risco em neonatos e lactentes. Cadernos de Saúde Pública, Rio de Janeiro, Brazil, 23(6), 1431-1441.
35. WORMALD, P. J.; ROGERS, C. & GATEHOUSE, S. (1995). Speech discrimination in patients with Bell's palsy and a paralysed stapedius muscle. Clinical Otolaryngology, Oxford, 20(1), 59-62.
36. YOSHIOKA, P. & THORNTON, A. R. (1980). Predicting speech discrimination from audiometric thresholds. Journal of Speech and Hearing Research, 23, 814-827.
37. ZEMLIN, W. R. (2000). Princípios de Anatomia e Fisiologia em Fonoaudiologia. Artmed, Porto Alegre, Brazil, chap. 6, 433-529.
