Open access peer-reviewed chapter

Electrical Stimulation of the Auditory System

Written By

Patrick J. Boyle

Submitted: 15 October 2018 Reviewed: 18 February 2019 Published: 31 May 2019

DOI: 10.5772/intechopen.85285

From the Edited Volume

The Human Auditory System - Basic Features and Updates on Audiological Diagnosis and Therapy

Edited by Stavros Hatzopoulos, Andrea Ciorba and Piotr H. Skarzynski

Chapter metrics overview

1,712 Chapter Downloads

View Full Metrics

Abstract

In many healthcare systems electrical stimulation of the human auditory system, using cochlear implants, is a common treatment for severe to profound deafness. This chapter will describe how electrical stimulation manages to compensate for sensory-neural hearing loss by bypassing the damaged cochlea. The challenges involved in the design and application of cochlear implants will be outlined, including the programming of clinical systems to suit the needs of implanted patients. Today’s variety of patient will be reviewed: unilaterally and bilaterally implanted, bimodal users of a cochlear implant as well as a contralateral hearing aid, CROS device users having either asymmetrical hearing loss or single-sided deafness. Alternative devices such as auditory brainstem implants will be described, and additionally the more experimental auditory mid-brain implants and intraneural stimulation approaches. Research that is likely to bring medium term benefits to the clinical application of cochlear implants will also be described.

Keywords

  • cochlear implant
  • electrical
  • stimulation
  • prosthesis

1. From the beginning to current practice

Electrical stimulation of the human auditory system is generally traced back to the pioneering experiments of Alessandro Volta, inventor of the battery. When Volta applied 50 volts to his own head, he reported hearing an unpleasant boiling sound [1]. However, the forerunner of a modern CI system is just over 60 years old: opportunistic stimulation of the auditory nerve [2] of a bilaterally deaf patient receiving a facial nerve graft. During the two decades following this work, various clinical studies [3, 4, 5, 6, 7, 8, 9, 10] saw the implantation of single and then multi-channel cochlear implant (CI) systems in people suffering profound deafness. While many of these pioneers suffered ridicule at the hands of the mainstream scientific community, clinical considerations prevailed. The early devices that were produced in academic institutions were transferred to commercial organizations, these often building on prior medical device experience, for example experience gained in the pace maker field.

Today over half a million people, from babies under 6 months of age to adults in their late 90s, have been implanted with a CI. While it can be argued that the CI is the most successful medical device ever created, the outcomes are still highly variable (Figure 1). In the best of cases, CI users can make fluent use of a telephone, understand speech in adverse listening conditions where there is considerable competing noise and reverberation, hence enjoying independence spanning social lives and careers that would have been unimaginable without their CI device. Even where speech understanding is limited, a release from the isolation of deafness through access to environmental sounds, a reduction in the level of tinnitus and support of lip-reading with a reduction in the effort required for oral communication are all worthwhile benefits from use of a CI. It should also be noted that in many cases those most satisfied with their implant are not those who receive the highest scores on standardized tests of speech understanding.

Figure 1.

Percent correct scores on the CNC word test ranked from poorest to best for 113 cochlear implant users showing a large variation in outcome reproduced from Holden et al. [79].

The following sections will describe how electrical stimulation of the auditory system is achieved, with the main focus being on CI systems. The factors that influence outcome, so far as they are known, will be described, along with the challenges in delivering clinical service, both today and into the future. With the future in mind the major research topics that are currently being addressed will be outlined.

Advertisement

2. Overview of a cochlear implant system

Figure 2 shows the various components common to all of today’s clinically applied cochlear implant systems. Sound is typically collected from microphones housed on a behind the ear (BTE) sound processor. The sound is first “cleaned” to remove noise and then processed to create the stimulation patterns destined for the implanted electrode array. Except in the case of one-piece processors, a lead connects the sound processor output to a radio frequency (RF) transmitter coil located above and behind the ear. The external coil is held in place over the implant’s receiver coil through a pair of magnets: one external and one within the surgically implanted device under the skin. This arrangement supports reliable communication across the skin through the use of RF based telemetry. The RF signal provides both power for the implant’s electronics and the information needed to produce electrical stimulation. Hence the implant consists of: its receiver coil, a hermetic package containing electronic circuits and an electrode lead assembly connecting to the electrode array that is placed inside the cochlea (Figure 3). In some of today’s CI systems the sound processor and RF coil are a single component held in place by the magnet but having no wire or BTE part. This provides some esthetic advantage but may fall off more easily and compromises sound collection.

Figure 2.

The components of a behind the ear (BTE) model of cochlear implant showing (1) the T-mic placed in the external ear canal, (2) BTE sound processor, (3) radio frequency transmitting headpiece, (4) the implant body, (5) intra-cochlear electrode array and (6) the auditory nerve.

Figure 3.

A cross-section through the spirally-shaped cochlea showing the various compartments, including the scala tympani with a mid-scala located electrode array.

Additionally today’s implants have the ability to make both physical and physiological measurements, using back-telemetry, to transmit these data to the sound processor. Through the use of wireless technology, information can be relayed to and from a host of devices: smartphones, tablets, laptops, remote microphones or other listening aids. Such connectivity leaves a CI user well placed to use many consumer devices to enhance their communication and support maintenance of their implant system.

Advertisement

3. Electrical stimulation principles

In the earlier chapters of this book the auditory system has been described in some detail, including pathology that can result in the most debilitating degrees of hearing loss: severe to profound deafness. Fortunately, electrical stimulation can be delivered without external, middle or indeed even an inner ear. However, in the large majority of clinical cases the auditory nerve is intact and can be accessed via a very poorly or non-functioning cochlea: it being this damage that the CI bypasses. The operating principle of a CI is that small electrical currents are able to initiate activity on the auditory nerve that crudely mimics the activity produced by a normally functioning cochlea. Taking advantage of the cochlea’s tonotopic organization, currents representing high frequency sounds are delivered to the base, while currents representing lower frequency sounds are delivered to more apical cochlear locations. This is achieved through the use of an electrode array containing multiple separate electrode contacts placed along the scala tympani (Figure 4). The number of intra-cochlear electrode contacts in clinical service varies by manufacturer between 12 and 24. In addition, either the shorting of adjacent contacts, or simultaneous delivery of synchronized stimulation patterns on multiple electrode contacts, seeks to increase the number of stimulation sites available [11, 12, 13].

Figure 4.

View of how an electrode array will be positioned within the scala tympani of the cochlea, here with the electrode contacts facing towards the modiolar wall behind with the spiral ganglion cell bodies are located.

In Figure 4 an electrode array is shown placed in the scala tympani. Here the exposed electrode surface, from which stimulation current is delivered, faces towards the modiolar wall, behind which the auditory nerve’s spiral ganglion cell bodies are located in Rosenthal’s canal. The remainder of the array is composed of a soft silicone that supports the contacts and the insulated wires connecting the electrode contacts to the implant’s electronics. A modiolar facing contact surface orients the electrical stimulation towards the primary stimulation targets, the spiral ganglion cell bodies. Since the perilymph fluid in the scala tympani is electrically conductive, it allows current to flow through the cochlea and achieve stimulation of neural elements. A downside is that current also tends to spread along the scala, rather than addressing only the area local to the electrode contact where we would like it to act. If peripheral processes still remain in the cochlea, extending from the cell bodies to the organ of Corti, then these may also be targets for electrical stimulation. Unfortunately it is not currently possible to accurately know the status of any individual’s cochlea. It appears reasonable to assume that a great deal of the variability illustrated in Figure 1 comes from variations in the health of those individuals’ cochleae, this variation being part dependent on environment and disease and the individual’s particular physiology, as well as how any one individual’s immune system reacts to the insertion and presence of the electrode array itself.

The most common type of stimulation paradigm used in CIs today is monopolar stimulation (Figure 5a). Here the implant introduces current into the cochlea via a relatively small electrode contact. A typical surface area might be 0.2 mm2. The density of current close to the electrode contact introduces a higher probability of activating elements close to the contact, the probability decreasing for elements at increasing distance. Monopolar stimulation current is returned to the implant using a distant extra-cochlear electrode that has at least 10 times the intra-cochlear electrode contact’s surface area. This keeps the returning current density low, avoiding stimulation at this remote site. Typically there are two return electrodes on a cochlear implant, in case anything goes wrong with one of them. One is placed on the body of the implant while the other is on a separate flying lead, or on the electrode lead assembly but located outside of the cochlea. In Figure 6 an implant can be seen with its various component parts indicated.

Figure 5.

Electrical stimulation configurations: (a) monopolar where current is returned to a large distant extra-cochlear return electrode, (b) bipolar where current flows between adjacent intra-cochlear electrodes and (c) tripolar where current is returned by two adjacent contacts.

Figure 6.

A cochlear implant showing its various components. Note the two ground or return contacts, one on the case body and one on the electrode lead assembly.

Alternative stimulation paradigms are sometimes used, but mainly for research. Figure 5b shows a bipolar stimulation paradigm. Here two intra-cochlear electrode contacts, separated by around 1 mm, operate as a pair. Stimulation is introduced by one contact and returned by the other. This in theory restricts stimulation to a small part of the cochlea, so should help with the spread of current mentioned above. However, in practice current introduced by one contact is returned to the other without activating enough neural tissue to create a loud enough hearing sensation. Hence, it is often necessary to form a bipolar pair using contacts that are not adjacent but for example are spaced 2 mm or more apart. In addition, the arrangement of contacts along the cochlea means that there will be a plane of zero field between the contacts, leading to a need to use higher currents and thus produce a wider spread in stimulation. In Figure 5c tripolar stimulation is shown. Now a group of three electrode contacts are used together. Stimulation is introduced via the middle contact and ideally half returned via the adjacent contact on either side. This avoids the zero potential plane problem of bipolar stimulation and theoretically provides a tighter containment of stimulation. Again, in practice there is a need to recruit a given number of neurones to signal sufficient loudness and this means increasing the tripolar stimulation current on the middle contact. Eventually the current on the return electrodes will become high enough to cause stimulation, resulting in a wider spread of current than intended. In many cases it is not even possible to increase the current far enough to achieve sufficient loudness for a tripolar configuration. In such cases some of the current has to be returned to the remote extra-cochlear return electrode, a configuration referred to a partial tripolar. This even further increases the current spread. So, with these practical limitations in mind it is not difficult to see why monopolar is universally used as the default stimulation paradigm, despite the apparent large current spread that this entails.

3.1 Technical considerations

In the interests of simplicity some of the more technical aspects of electrical stimulation have been avoided in the text above, allowing focus on the broader application. In this section some of these more technical issues will be discussed. Where a reader is not interested in technical detail this section can be ignored.

Ohm’s Law states that the electrical voltage difference required to drive a given current is directly proportional to the resistance through which the current has to flow. With changes in the resistance, or more generally the impedance if we consider frequency effects, a voltage source would lead to uncontrolled changes in the current being output. As described below this could lead to uncontrolled changes in loudness over time. Most of today’s CI systems deliver electrical stimulation through one or more current sources. As the name implies, this circuit attempts to deliver the current requested of it regardless of how much electrical resistance is offered by the body. A current source is said to be in compliance when it is delivering the current requested of it. Given a finite amount of voltage being available within an implant, for example 8 volts, there will be a maximum impedance into which the implant’s maximum output current can be delivered. With a typical stated maximum output current of 2000 μA, the maximum impedance for which this could be delivered will be 4000 Ohms (8 volts divided by 2000 μA, or 8/2 × 10−3 = 4000). For higher impedances the maximum stated output current will not be available. For lower impedances the implant will be limited to its maximum output current value, ensuring safe operation.

Since CI systems are designed to provide stimulation for essentially all waking hours, day after day over decades, their output must not damage the neural tissue that they are intended to stimulate. One obvious source of damage is the delivery of direct current (DC). If a DC current is applied to the body it will result a process called electrolysis. Here there will be a continuous transport of ions, charged atomic particles, leading to dissolution of the platinum electrode contact and destruction of the cochlear tissue: obviously a catastrophic situation. Research in animals indicates that even very small amounts of DC, 400 nA, is enough to cause tissue damage [14]. Several mechanisms are used in a CI to prevent the delivery of DC. Firstly, a balanced stimulus waveform is used, almost always a symmetrical pulse having two complementary phases (Figure 7a), although so long as the two phases contain an equal and opposite area they need not be symmetrical (Figure 7b). At the simplest interpretation, the first phase, referred to as cathodic, will depolarize neurones hence producing the electrical stimulation that we seek to achieve. The second, anodic, phase balances the stimulation resulting in no nett current being delivered to the body, thereby avoiding DC. Even with very careful design, there is likely to be some small imbalance between the two phases. To account for any in-balance, following delivery of a stimulation pulse the electrode contact is connected to ground, ideally removing any residual DC. Finally, an electronic component called a capacitor is placed in series with the stimulating electrode contact. A capacitor does not allow DC to pass, so offers yet more protection in case some fault in the electronics interferes with either of the previous two protection mechanisms. Together these mechanisms appear successful in avoiding the delivery of DC. Devices do fail, particularly in the pediatric population, with reimplantation rates over tens of years being reported at 8% from the well-established Sydney clinic [15]. Typically half of these failures are medical issues and half device failures. However, in virtually all cases it is possible to re-implant the patient, with outcomes almost always being as good as those obtained when the original device was working well [16].

Figure 7.

Stimulation waveforms with balanced cathodic and anodic phases may have either symmetrical phases (a), or asymmetrical phases (b) where the area of each phases is identical.

While we speak about stimulation current it is really the electrical charge that is at issue. Charge is simply the product of current times time and has units of Coulombs, C. Electro-chemical considerations mean that an electrode has a maximum charge injection capacity, such that a given size and material will only be able to handle a given charge limit in a reversible way, so that all of the charge injected in one phase can be removed in the second phase [17]. This is necessary to avoid the DC as discussed above. A conservative value for the maximum charge density, typically 30 μC/cm2 [18], taking account of the electrode dimensions, is programmed into the implant’s controlling software, ensuring that this limit is never exceeded. Animal experiments confirm that chronic stimulation with higher charge densities, for example 400 μC/cm2, results in the dissolution of platinum but interestingly not to the loss of auditory neurones [19]. The loudness sensation produced by electrical stimulation is related to the amount of charge delivered in one phase of the stimulation waveform. There is also an effect of the rate at which stimulation pulses are delivered. However, for the stimulation rates used in clinical practice, typically 500–2000 pulses per second per channel (ppps/ch) the effect of rate is quite small and largely ignored.

Stimulation current flows through cochlear tissue as a result of voltage differences developed along the current’s path. How these voltage differences are arranged along the length of a neurone’s peripheral or medial process, or indeed across the cell body, determines which neurones depolarize, leading to action potentials being generated. The action potential may propagate to the medial synapse with the cochlear nucleus and hence initiate activity on the auditory pathway, leading to a sense of hearing being detected in the brain. Rather than stimulating a single neuron, typically hundreds, or even thousands, of neurones in a region of the cochlea will be addressed by a single electrode contact. These patterns of stimulation are interpreted as sound input by the higher levels of the auditory system, leading to the sense of electrical hearing. The next section will describe how sounds detected by the CI system’s microphone will result in the generation of electrical stimulation patterns.

Advertisement

4. Operation of the cochlear implant system

The main cochlear implant system functions are shown schematically in Figure 8. Sound from the microphone is compressed by a single-channel automatic gain control (AGC) system. Compression ratios in CI systems tend to be substantially higher than those in acoustic hearing instruments: six to infinity, compared to two to three respectively. This reflects both the small electrical dynamic range of typically 10 dB [20] and the exponential like increase in loudness found for electrical hearing [21]. Both considerations require tight control of the stimulation current’s amplitude to avoid discomfort. Research with different implant types shows a consistent advantage for slow-acting AGC, the benefit being a reduced compression of the information rich temporal modulations of speech [22, 23, 24], as well as a reduced co-modulation effect [25] associated with the single channel AGC.

Figure 8.

A schematic of the sound processor system where sound is collected by the microphone, compressed by an automatic gain control, broken into discrete frequency channels, which have their energy assessed and mapped to the user’s requirements. This information is then combined into a digital stream, transmitted by radio frequency to the implant where stimulation currents are generated.

Following AGC, the sound is broken into a number of frequency channels, this number varying between 12 and 24 channels, reflecting the number of intra-cochlear electrode contacts available in the implant model. In Figure 8 only four channels are used to illustrate the principle. Today a Fast Fourier Transform (FFT) algorithm is often used to separate the incoming sound into discrete frequency channels. The amount of energy in each channel is then estimated by a rectification and low-pass filtering process. While the average energy is calculated over a period of perhaps 10 ms (milliseconds) or more, stimulation pulses will be delivered much more rapidly, typically once every millisecond. Hence calculations will be made that overlap in time in an attempt to follow the changes in speech energy over time. Next the acoustic energy in each channel is mapped to an electrical current amplitude that takes account of the CI user’s sensitivity to electrical stimulation. The goal is to use smaller currents that barely produce a perception of electrical stimulation to represent low-intensity acoustic activity and larger currents that are perceived as loud to represent very intense acoustic events. This needs to be managed separately for each channel, resulting in the continuous output of a stream of stimulation amplitudes for each channel. As shown schematically, these amplitudes are then combined together for transmission by the RF signal across the skin to the implant. Electronics inside the implant extract the digitally transmitted amplitudes, convert them to analogue values and then drive the implant’s current source(s), resulting in stimulating currents being delivered by the intra-cochlear electrode contacts. For virtually all of today’s clinical systems only one channel will be stimulated at a time. This approach avoids the channel interactions that would occur were channels presented simultaneously within the conductive scala tympani [26]. The disadvantage of this approach can be seen in Figure 9 where a channel is only updated during its own time period, therefore, must wait until all the other channels have been updated until new information can be transmitted. Deliberately, very brief current pulses each of around 40–50 μs duration (20–25 μs/phase) are used, so that it is still possible to update each channel rapidly enough to keep up with the changes in acoustic energy over time. This often means stimulation at more than 1000 pps/ch. Such an approach generally leads to higher levels of speech understanding than where simultaneous stimulation is delivered [27, 28].

Figure 9.

An illustration of now non-simultaneous waveforms delivers information for each channel. Once a channel has been stimulated no more information may be delivered until the other channels have been updated.

4.1 Sound coding strategies

Over the years the sound coding strategy, a software algorithm that relates audio from the sound processor microphones to the electrical patterns appearing at the electrode contacts, has changed. Initially it was believed that the damaged auditory system was not capable of transmitting much information, hence the most useful information was extracted from the speech and directly coded on sets of electrodes. For example an early feature extraction strategy F0F1F2 [29] extracted the first two formants of speech, F1 and F2, from which it is possible to estimate the vowel being articulated. Each formant range had a set of electrode contacts allocated, such that higher or lower frequencies for each formant lead to stimulation on more basal or more apical electrode contacts in that formant’s electrode set. The rate at which pulses were delivered was related to the fundamental frequency (Fo) driving the vocal tract, leading to the strategies name. Such a strategy supported only very modest levels of speech understanding, around 8% correct for monosyllabic words presented in quiet [30]. The information extracted was limited to begin with and further reduced through errors generated in real life listening situations where background noise, reverberation and intensity and frequency response variations led to the algorithm making mistakes in both the extraction of formant frequencies and in the estimation of Fo.

It was eventually recognized that the brain was better at extracting information than the feature extraction algorithms and hence “whole-speech strategies” replaced feature extraction. Today’s sound coding strategies simply average the energy in each channel’s frequency range and generate levels of stimulation that represent this. In some cases a so-called n-or-m strategy will work out which subset (n) channels from the total (m) number available have the highest energy and then only stimulate this reduced set. Refinements to this may neglect adjacent channels on the basis that stimulating both will not add anything, so select a more distant lower amplitude electrode to transmit more information [31].

4.2 Neural population

Stimulation delivered by a CI system will result in the depolarization of neural elements, resulting in action potentials being generated that propagate to the next stage of the auditory system: the cochlear nucleus. With reference to the schematic of Figure 8, there is a population of spiral ganglion neurones associated with each electrode and hence each frequency channel of the CI system. As mentioned above, a channel’s stimulation current will need to recruit a certain population of neurones whose firing indicates to the brain the amount of activity in a particular frequency range. Ideally, there will be a sufficient local neural population such that progressively increasing stimulation current initiates an appropriate number of action potentials, so that the brain correctly perceives the amount of acoustic activity in the channel’s frequency range.

Unfortunately a discrete neural population for each channel as shown in Figure 10a is not always available. In Figure 10b only a reduced neural population is available for each channel. Hence, when there is a lot of activity in one channel, requiring recruitment of a full population of neurones, these are not available locally. It is still possible to increase the stimulation current, spread the electrical field further away from the electrode and depolarize neurones that should really be associated with another channel. While this will satisfy the perception of loudness, it generates channel interaction so that we are no longer able to deliver frequency specific information to a discrete part of the cochlea. The perception will be of a blurred or fuzzy sound, particularly a problem when trying to listen to speech in the presence of competing noise.

Figure 10.

A schematic representation of three different neural populations: (a) a full population exists for each channel, (b) a depleted population results in channel interaction and (c) a dead region requires recruitment from the populations rightly belonging to adjacent channels.

An alternative situation is shown in Figure 10c where most electrodes have a sufficient local neural population but one electrode is located in a so-called dead region [32]. When electrode 2 is stimulated it can only recruit neurones from the population belonging to electrodes either side of it, delivering information about channel 2’s frequency region to other parts of the cochlea, spreading stimulation widely and interfering with the otherwise discrete frequency information being delivered by the neighboring channels.

Unfortunately, it is not currently possible to determine what the number and distribution of neural elements is for any individual. The literature is not always helpful in this area. As shown in Figure 1, there is great variability in outcome for the identification of monosyllabic words. Since this task involves little top-down processing, much of the variability in outcome must come from the electro-neural interface. Beyond speech understanding, examining the ability to discriminate adjacent electrodes, or intra-electrode stimulation sites [33], also showed both great variability between subjects and across the electrode array of individual subjects. This task, having no confound with cognitive processes related to speech understanding, further confirms the presence of peripheral variability and its likely contribution to variations in outcome.

It is unclear to what degree a loss of spiral ganglion cells (SGC) in humans will follow, even after years of severe to profound deafness. Histological studies of humans who had used a cochlear implant sometimes show a reasonable correlation between CNC word score and SGC count: for example R = 0.62 [34], R = 0.9 [35] but for a small group of only 6. However, the variability is such that the same SGC count can show variations of between 30% and 75% for CNC words, or the same CNC word score can be associated with 3000 or 18,000 SGCs. Examining the threshold current for detecting electrical stimulation in a group of 130 lateral wall electrode array users [36] showed significant differences between four groups: the increase in group mean threshold being associated with a reduction in monosyllabic word score. This works suggests that a higher SGC population (lower electrical threshold) is associated with better speech understanding.

The literature listed above indicates that there is a relationship between the number of spiral ganglion cells and the ability to identify monosyllabic words when using a cochlear implant. Contributions to speech understanding may also come from a large number of additional factors, some of which include: the distribution of SGCs, angular insertion of the electrode array, distance of the electrode contacts from the modiolar wall, presence or absence of peripheral processes, fibrous sheath formation and intrusion of new bone into the cochlea. How well a given implant user has had the parameters of their sound processor set, commonly referred to as their program, is another variable that we will examine next.

4.3 Programming the cochlear implant

As has been explained above, the small electrical dynamic range available to a CI user makes it necessary to carefully adjust the stimulation parameters to suit the requirements of each individual recipient. The most important adjustment is the amount of stimulation that will be delivered in response to acoustic activity. This must be done for each of the CI’s separate channels. Each channel has two primary parameters that control its output. One will be typically called a most comfortable level, shortened to either M-Level or C-Level. The other is a threshold control, referred to as T-Level. The main CI manufacturers use these parameters slightly differently but to a good approximation T-Level sets the minimum stimulation level that the implant will deliver and M-Level will set the maximum stimulation level that can be delivered for an individual recipient. The sound processor will then arrange for the amount of acoustic range that it handles, somewhere between 40 and 80 dB depending on the user’s setting and implant model, to be mapped to stimulation levels between T- and M-Level. In combination with the AGC of the system this will give the CI user access to their acoustic environment such that hearing levels of between 20 and 30 dB HL are achieved across the frequency range 250–8000 Hz. The combination of AGC and M-Level ensures that even high intensity sounds of 100 dB SPL do not produce uncomfortably loud sensations. Unlike acoustic hearing, it is generally possible to provide CI users with access to the full range of frequencies that are most important for speech understanding.

Which channels are activated is another important adjustment to make. Most audiologists are reluctant to deactivate channels, although sometimes a reduced set of channels can give a better outcome. In some cases an electrode array is not fully inserted into the cochlea, perhaps due to the cochlea being too small, or there being fibrosis tissue, or bone, that prevents a full insertion being obtained. Alternatively, electrode arrays can sometimes extrude from the cochlea [37, 38], either shortly after implantation or months to years later. In all these cases the more basal electrode contacts will need to be deactivated. Deleting electrodes from a program will lead to the frequency range being remapped across the remaining electrode contacts. There will be a coarser representation of frequency since fewer channels are now available. However, removing electrodes that are not inside the cochlea will produce a better outcome than simply leaving these electrodes active.

Beyond setting T- and M-Levels and defining an appropriate set of electrode contacts, there is sometimes adjustment made to the acoustic dynamic range mapped by the sound processor. This effectively controls the compression of acoustic sounds into the electrical dynamic range. It might seem logical to use as large an acoustic or input dynamic range (IDR) as possible, since this will maximize the range of sounds available to a CI user. However, it is the discrimination of different levels of sound in each channel that carries information. An excessively large IDR may squeeze these amplitude cues, reducing the ability of an implant recipient to understand speech. There are many parameters that can be adjusted in a CI system. However, it is common for the majority to remain at their default values. This may be through an inability to obtain user feedback, for example in young children, lack of time or knowledge on the part of the clinician, or a recommendation from the CI manufacturer.

How appropriate values are found for the T- and M-levels depends very much on the individual CI user. For a post-lingually deafened adult it is reasonably straightforward to find these. By presenting 200 ms bursts of stimulation and using a standard bracketing approach, the smallest detectable amount of stimulation for each channel can be found and this value set as the T-level. Similarly, progressively increasing the stimulation will allow an M-level to be found, the CI user often pointing to different categories on a loudness chart as the various levels of stimulation are presented. These measures can be made for each individual channel, channels can be programmed in groups of four, or only five or six channels across the electrode array measured with intermediate channels set to interpolated values.

For babies or young children and even for some adults, objective measures are often used to help set program levels. The most common measure used is the eCAP, the electrically elicited compound action potential [39]. The ability to record eCAPs is built into the fitting systems for all of today’s major CI systems. Here masker-probe or alternate-subtraction techniques [40, 41] are used to reduce the large stimulus artifact. The amplitude of the remaining physiological signal, arising from synchronized activity on the auditory nerve, is then graphed against the stimulation level. A regression line extrapolates to intersect the stimulation axis which would correspond to a zero amplitude of eCAP. The stimulation value for which this occurs is then used as a guide for setting programming levels. Avoiding stimulus artifact and allowing sufficient neural synchronization, means that much lower stimulation rates are used when measuring eCAPs than for actual everyday stimulation. The means that the absolute eCAP values can fall at various parts of an individual’s electrical range. Fortunately, it is the profile of values across the electrode array that it is important to determine. Once this is estimated a global change in level can be made to obtain appropriate loudness. In many cases the T-levels are set to 10% of the M-level since this is almost certainly not going to leave them set too high. Typically T-levels are measured at something like 25% of M-level [42]. When they can be measured and hence individually set, T-levels will tend to improve access to low intensity sounds. Often in clinical practice T-levels are set at a percentage of M-level even where they could be individually set: the additional benefit not being considered worth the additional effort needed for measurement.

Other objective measures are used to assist with programming, although less often due to these requiring additional equipment to be used in collaboration with the CI fitting system. There is a reasonable correlation between an electrically elicited stapedius reflex threshold (eSRT) and M-level [43]. Unlike eCAPs here the same stimulation rate can be used to measure eSRT as will be used in the everyday program. This simplifies the setting of levels and is partly behind why there is such a good correlation with M-level. Less commonly the electrically elicited auditory brainstem response (eABR) is used [44]. Again, eABR will require a lower stimulation rate to be used, so that the characteristic waveforms can be seen in up to 5 or 6 ms following stimulation. This tends to produce an extrapolated threshold for eABR quite high in an individual’s electrical dynamic range. As with the eCAP and eSRT measures, it is the relative levels across channels that are important, the profile then being globally adjusted to determine the M-levels that will be used in the program.

Less frequently, some statistically based approaches are used for programming. Simple so-called “flat maps” are used where the T- and M-level is the same on each channel. These are justified by the spread of monopolar stimulation recruiting neurones form a larger section of the cochlea than associated with an individual electrode contact thus tending to produce a spatial averaging. Other approaches might use a template based on the statistical average of levels previously measured for earlier CI recipients. Approaches such as FOX [45] extend this technique, recommending a sequence of programs with progressively increasing levels that are used from the very beginning. For many CI users these techniques can work quite well, although numbers of outliers will require individually tailored programs to realize their potential outcome with comfortable stimulation and reasonable access to their acoustic environment.

Plasticity in the auditory system means that over time the M-levels will usually increase. The longer term M-levels might be typically double those that can be tolerated during the initial fitting. After the first 2 months of device use, neither T- or M-levels tend to change significantly over time [46]. Change in levels is highly individual requiring the initial program levels to be revised numbers of times during the first few months of implant use. Where a second (or third) fitting session is planned within around 2 weeks of the first fitting, most of the change can already be accommodated. Looking across large numbers of adult CI users, program levels will be stable by between 3 and 9 months following first fitting. Individual practice can result in pediatric levels being more slowly increased, leading to 6–12 months being needed to see stable levels.

Advertisement

5. Different patient groups

Cochlear implants were originally designed to help those suffering bilateral profound deafness who could not benefit from acoustic hearing aids. Traditionally candidacy would have required a loss of at least 90 dB HL across all of the audiometric frequencies from 125 to 8000 Hz. Over the past 30 years we have seen, improvements in outcome (speech understanding) through better sound coding strategies and electrode arrays, improvements in esthetics as the external equipment has moved from body worn to behind the ear or single piece processors, improvements in surgery with a skin to skin operating times of well under 1 hour, as well as much smaller incisions not requiring hair shaving and at least in some cases, the preservation of residual hearing. These developments have meant that a CI can now be considered for much more than the 0.2% of the population who suffer profound bilateral sensorineural deafness [47].

It is becoming more common for ears with useful low-frequency residual hearing to receive a CI. Candidacy can now include those with severe to profound levels of hearing loss above 1000–2000 Hz, but normal to moderate hearing loss for lower frequencies [48, 49, 50]: a group sometimes referred to as suffering partial deafness. Where the residual hearing can be preserved to within 10–20 dB of the pre-operative levels, many of these recipients use a combination of electrical and acoustic stimulation (EAS) in the same ear. Most CI manufacturers now make EAS processors so that a single instrument supports both modalities, offering comfort, convenience and allowing an EAS fitting to be made using a single piece of software.

Where there is some asymmetry in hearing, recent practice has seen only the poorer ear being implanted, while an acoustic hearing instrument is fitted to the contralateral ear. This is often referred to as bimodal hearing. Dedicated hearing instruments (HI) have been developed that match the compression characteristics and sound cleaning operations between the CI and HI, as well as offering wireless sharing of microphone and control signals. Such systems offer convenience for the user and can combine the natural acoustic low-frequency sound in the HI ear, with the high frequency information supporting speech understanding in the implanted ear.

Bilateral CI provision is now the standard of care in many healthcare systems, at least for children. Receiving two implants, either simultaneously or within a few months of each other, provides the best chance for the brain to have both sides work together. Redundancy, the countering of head shadow and a fuller sense of hearing are all advantages of bilateral implantation. It tends to be only considerations of cost that prevent bilateral CIs being offered universally to all those who could benefit from them. Again with wireless technology developing rapidly, the use of algorithms that combine microphones between the two CI sound processors can offer large improvements for listening in noise when beam formers are used to attenuate noise coming from directions other than directly ahead, particularly useful then the CI user is in a one-to-one conversation in a noisy location.

Where a second CI is not available and there is no aidable hearing on the contra-lateral ear a CROS, or strictly bi-CROS device can be used. Wireless CROS devices are available that essentially have their microphone pick up sound from the non-implanted side and wirelessly route it to the CI processor on the other side where it is mixed with the CI processor’s microphone signal. This approach can reduce head shadow, although with stimulation only being delivered to one ear there is little ability to use the CROS device for localization. With the combination of HI and CI companies, for example Phonak and Advanced Bionics within the Sonova company, the migration of HI technology such as the ear-to-ear wireless technology has begun and will likely be more common in future.

In the German healthcare system, a CI is now available to those who suffer from single-sided deafness (SSD). Typically there may be also be some hearing loss on the better hearing side, making this a highly asymmetrical loss rather than a pure SSD. Those suffering with SSD would usually explore a CROS device and a bone conduction hearing aid before considering a CI. In the end around one third of SSD cases seen will elect to get used to hearing with only one ear, one third will use a bone conduction device and one third will receive a CI [51].

Tinnitus is another consideration that can influence treatment options, for SSD and beyond. Where the SSD is accompanied by intractable levels of tinnitus, a CI may provide relief [52]. The restoration of some input to the deafened ear can allow the tinnitus to either effectively disappear or at least be substantially reduced. In some cases, SSD in particular, the relief from tinnitus is found to be of much greater benefit than any hearing sensation arising from the implanted ear. The large majority of CI recipients report reduced amounts of tinnitus although in very rare cases tinnitus can be worsened through implantation.

5.1 Alternative stimulation sites

The cochlea is an attractive site for electrical stimulation, given that it presents tonotopic access to auditory nerve fibers with reasonably straightforward surgical access. However, where the cochlea has not formed properly or at all, due to some extreme malformation, a properly formed cochlea has been filled with bone or tissue, for example following bacterial meningitis, preventing all but minimal surgical access, or the auditory nerve is not available, either through malformation or following trauma, stimulation of the auditory system via the cochlea is not possible. In such cases alternative sites of stimulation may be used.

Auditory brainstem implants (ABIs) bypass the auditory nerve, targeting the next station of the auditory pathway: the cochlear nucleus located in the brainstem [53]. There is a tonotopic structure within the cochlear nucleus, although it is organized in the dimension of depth, so is not easy to access. Attempts to use a penetrating electrode array with a number of discrete needles has not been able to make better use of this tonotopic organization than a pad of flat electrodes placed on the surface of the cochlear nucleus [54]. Programming of an ABI device tends to be more difficult than that of a CI. Bone surrounding the cochlea usually keeps the CI’s stimulation contained to auditory fibers. Only occasionally non-auditory stimulation of, for example, the facial nerve can be seen in muscle twitching around the mouth or eye. This is generally programmed around by deactivating electrode contacts or reducing stimulation levels. However, in the brain stem, functions such as respiration can be adversely effected by an ABI device. This calls for much more care when programming, leading to ABI devices being offered only by specialist centers. The surgery required to place an ABI is more invasive than that required for a CI, for example, requiring lifting of the cerebellum to gain sufficient surgical access. With ABIs being placed following removal of tumors there can be some distortion of brain structures. Some surgeons prefer to remove what are often sizable tumors, allow the brainstem and brain structures to settle into place again and then perform a second surgery during which the ABI is put into place. This two-stage approach is believed to provide less chance of the ABI’s electrode pad moving out of position, risking substantial non-auditory stimulation. Outcomes with ABI devices are generally substantially poorer than with CIs. In some series there is essentially no open-set speech understanding possible [55], while in others the speech understanding is limited, with only occasional high levels of speech understanding [56]. The reasons for poor performance with ABIs are not fully understood. Beyond potential movements of the electrode pad, there are specialized auditory functions being carried out in the cochlear nucleus, meaning that simply assuming raw tonotopic stimulation patterns may not be sufficient. Additionally, those receiving ABI devices may have many other issues beyond deafness and these could also explain some of the difference in outcome.

Stimulation of even higher structures in the auditory system has been attempted through an auditory mid-brain implant (AMBI), where the electrode array is inserted into the inferior colliculus. Currently this is restricted to a pure research device [57]. Within the inferior colliculus it is possible to access a tonotopic organization, using a shortened version of a traditional CI electrode array, 10 mm long as opposed to 20–30 mm for most CI arrays. However, when accessing the auditory system at an even higher level than with an ABI device, the amount of pre-processing that should have already been done leaves a crude CI type coding strategy only able to support very limited outcomes. Already at this level higher stimulation rates are inappropriate, leaving limited sound coding strategy options [58].

While placement of an electrode array in the scala tympani, or where necessary in the scala vestibuli, leaves the electrode contacts quite close to their target neurones they are still some 1–3 mm away. This separation prevents discrete stimulation of local neural populations as discussed above. It has been proposed that an electrode array could be inserted directly into the auditory nerve, or failing this inside the modiolus. Promising results have been shown from acute experiments in cats [59]. Recording form electrodes placed in the inferior colliculus indicates that intra-neural simulation is more localized than stimulation using an electrode array placed in the scala tympani. There are considerable challenges to overcome before intra-neural stimulation could be considered for humans. The surgical access is not straightforward, risking losing all residual hearing. The human auditory nerve being in the order of 1 mm diameter would require very small stimulating contacts. While the stimulation currents required would be smaller than those needed for a traditional CI, charge density considerations require careful consideration. Also, how well an electrode array can be placed and be tolerated in the auditory nerve, without the destruction of auditory fibers or the formation of granulation tissue will need to be carefully studied. Finally, the structure of the auditory nerve is complex, with axons from different parts of the cochlea rolling into the tubular nerve, making tonotopic targeting an additional challenge.

Advertisement

6. Current research

With the CI field involving a wide range of professionals including, surgeons, nurses, audiologists, engineers, physicists, speech and language therapists, teachers of the deaf, hearing therapists, rehabilitation specialists, psychologists and health economists, research related to the field can cover a very wide range. Here some of the key topics that are most closely connected to extending current practice will be reviewed.

It is clear that when less trauma is inflicted during surgery that outcomes are better [60], this being the case whether or not there is residual hearing at risk [61, 62]. Hearing preservation is thus a key topic for surgeons. The design of less traumatic electrode arrays [48, 49, 50, 63] and the development of less traumatic surgical techniques [64], including the use of robot assisted insertion [65], are factors that can lead to reduced trauma. Providing real time feedback to the surgeon during electrode array insertion is a hotly researched area. Electrocochleography (ECochG), where acoustic stimulation of the ear produces a cochlear microphonic signal [66, 67, 68] that the surgeon can use to gauge proximity to structures, such as the basilar membrane, appears promising with clinical systems due to launch in 2019.

The use of drugs to reduce the body’s reaction to implantation is also an area with some connection to minimizing trauma. Steroids such as Dexamethasone or antimitotic drugs [69] have been applied to suppress a fibrotic reaction during and immediately after surgery. Some longer term benefits have been shown but mainly in lower electrode contact impedances rather than significant outcome advantage [70]. Longer term deployment of steroids and other drugs has been proposed for some time [71] but has not yet seen clinical practice. Drugs such as neurotrophins [72] have been proposed to enhance the spiral ganglion but carry considerable risk of uncontrolled sprouting of new fibers that may not lead to any improvement in electrical stimulation [73]. Likewise the regrowth of hair cells or other cochlear structures [74] is an extremely challenging problem, although simply reconnecting peripheral processes that have been damaged while leaving intact hair cells [75] may be more manageable in the foreseeable future. While not a drug, near infrared light has been shown to promote tissue healing [76], helping reduce the extent of hearing loss following cochlear stress [77] and has been proposed as an approach that might also enhance the cochlea’s ability to survive the traumas of electrode array insertion.

Deployment, development and assessment of sound coding strategies continues with a variety of goals. Optimizing compression parameters to maximize speech understanding [78], reviewing the effect of various parameters as well as individualized fitting approaches [79, 80] all promise improvements for strategies that are already available but could be fitted better. Likewise, tools to guide clinicians in fitting for performance, rather than simply for comfort [45] should also lead to substantial improvement in outcomes. Seeking improvement via limiting current spread, through tripolar stimulation [81, 82], phased array [83] or manipulation of field interactions [84] have so far not shown a general improvement, although benefits for some recipients have been demonstrated. New sound coding strategies that attempt to improve temporal information such as FSP [85], or reduce masking effects such as MP3000 [31] have been developed and introduced into clinical practice. There may be more benefit in reducing battery power from the likes of MP3000 or Optima than in any improvement in outcome. Connected to improving outcome, research into the listening effort required to understand speech has been increasing [86, 87].

The very nature of the speech tests used to evaluate CI systems is an active area of research. Standardization across languages has involved the use of matrix tests [88]. These tests involve a fixed syntax and a closed set of keywords so can be self-administered and used essentially indefinitely. However, the sentences are not fully representative of natural sentences. Avoiding fixed presentation levels, something necessary to evaluate the function of AGC has led to development of roving presentation level tests [89, 90, 91]. These tests provide insights into where a particular subject may have problems, so could be useful in supporting programming. As with the STARR tests, multiple speakers have been included in tests such as the AzBio [92] and taken much further with the coordinate response matrix test that can run on a multiple loudspeaker array and hence mimic a more realistic test environment [93].

Improved outcomes have been shown thorough use of the recently developed wireless technology supporting integrated bimodal [94, 95] and CROS [96]. EAS has also been shown to produce substantial improvements in outcome [97] although the test methodology used involves a questionable comparison of conditions.

With increasing numbers of CI recipients needing management by financially constrained health care systems, methods of improving patient management are being developed. These include the use of consumer devices, smart phones and tablets, to run Apps that can evaluate a CI user’s speech understanding and qualitative condition as well as remotely analyzing the status of their implant and sound processor hardware [98] and delivering rehabilitation material usable by adult CI recipients and the families of young children. This whole area will necessarily see much development in the coming years.

The pressures on CI manufacturers to reduce size and to improve comfort and ease of use continue, with cosmetic considerations playing a large role in the choice of CI system. The reliability of both the implanted and the external parts of the CI system needs to be improved continuously, underpinning consistent device use and reducing the costs inherent in managing failures. Cost itself is also critical to address, so that the enormous unmet need for cochlear implantation in developing countries can be met.

Glossary of technical terms

Alternate-subtraction

a method of removing stimulus artifact that relies on delivering stimulation pulses of alternating polarity, so that when the recordings are added the artifact ideally cancels while the neural response is reinforced

Anodic

refers to the positive-going phase of a stimulation pulse, which is generally assumed to provide charge balance, hence avoiding the delivery of potentially harmful direct current

Automatic Gain Control (AGC)

a circuit that compresses the large acoustic dynamic range into a range that is more manageable for the restricted electrical dynamic range

Capacitor

an electrical component that does not allow direct current to pass, so offering an additional level of protection to the body should a fault occur in an implant

Cathodic

refers to the negative-going phase of a stimulation pulse, which is generally assumed to depolarize neurones, hence leading to electrical stimulation

Charge

an electrical measure formed by the product (strictly the integral) of current and time with units of Coulombs. The charge delivered determines the loudness perceived

Cochlear Implant (CI)

a surgically implantable prosthesis that bypasses a damaged cochlea providing hearing through direct electrical stimulation of the auditory nerve

Current source

an electrical circuit that delivers a programmed current, varying the amount of voltage necessary to achieve this depending on the electrical resistance offered by the body

Direct current

an electrical current that flows in one direction only and that can be harmful to the body

Electrocochleography (ECochG)

the measurement of electrical signals produced by the cochlea in response to acoustic stimulation that indicates cochlear health

Electrolysis

an electro-chemical process whereby charged particles may be drawn towards an electrode with opposite charge, leading to electrode contact dissolution or tissue damage

Fast Fourier Transform (FFT)

an efficient software algorithm that evaluates the amount of sound energy at regularly spaced frequencies, so allowing the energy in each frequency channel to be calculated

Impedance

a measure of the opposition to the flow of electrical current that takes account of frequency, and so is more general than resistance, which strictly applies only to direct current

Masker-Probe

a method of stimulus artifact removal that relies on exploiting neural refractoriness to identify the artifact and then uses multiple stages of subtraction to isolate the neural response

Perilymph

a fluid contained within much of the cochlea that is electrically conductive

Resistance

a measure of how much the body will resist the flow of electrical current, strictly considering only direct current; impedance is the more general measure

Spiral Ganglion Cell

the name given to hearing nerve neurones having a cell body in the cochlea’s modiolus, axon in the auditory nerve and dendrites innervating the inner hair cells

Voltage source

an electrical circuit that delivers a programmed voltage but that will allow current to vary depending on the resistance (impedance) offered by the body

References

1. Volta A. On the electricity excited by the mere contact of conducting substances of different kinds. Proceedings of the Royal Society. 1800;1:27-29
2. Djourno A, Eyriès C. Prothèse auditive par excitation électrique à distance du nerf sensoriel à l’aide d’un bobinage inclus à demeure. La Presse Médicale. 1957;65:1417
3. Simmons FB. Electrical stimulation of the auditory nerve in man. Archives of Otolaryngology. 1966;84(1):2-54
4. Michelson RP. Electrical stimulation of the human cochlea. A preliminary report. Archives of Otolaryngology. 1971;93(3):317-323
5. Chouard CH, MacLeod P. Implantation of multiple intracochlear electrodes for rehabilitation of total deafness: Preliminary report. Laryngoscope. 1976;86(11):1743-1751
6. House WF. Goals of the cochlear implant. Laryngoscope. 1974;84(11):1883-1887
7. Graham JM, Hazell JW. Electrical stimulation of the human cochlea using a transtympanic electrode. British Journal of Audiology. 1977;11(2):59-62
8. Hochmair ES, Hochmair-Desoyer IJ, Burian K. Experience with implanted auditory nerve stimulator. Transactions—American Society for Artificial Internal Organs. 1979;25:357-361
9. Clark GM, Tong YC, Martin LF. A multiple-channel cochlear implant: An evaluation using open-set CID sentences. Laryngoscope. 1981;91(4):628-634
10. Shannon RV. Multichannel electrical stimulation of the auditory nerve in man. I. Basic psychophysics. Hearing Research. 1983;11:157-189
11. Frijns J, Kalkman R, Vanpoucke F, Bongers J, Briaire J. Simultaneous and non-simultaneous dual electrode stimulation in cochlear implants: Evidence for two neural response modalities. Acta Oto-Laryngologica. 2009;129(4):433-439
12. Firszt JB, Koch DB, Downing M, Litvak L. Current steering creates additional pitch percepts in adult cochlear implant recipients. Otology & Neurotology. 2007;28(5):629-636
13. McDermott HJ, McKay CM. Pitch ranking with nonsimultaneous dual-electrode electrical stimulation of the cochlea. The Journal of the Acoustical Society of America. 1994;96:155-162
14. Shepherd RK, Linahan N, Xu J, Clark GM, Araki S. Chronic electrical stimulation of the auditory nerve using non-charge-balanced stimuli. Acta Oto-Laryngologica. 1999;119(6):674-684
15. Wang JT, Wang AY, Psarros C, Da Cruz M. Rates of revision and device failure in cochlear implant surgery: A 30-year experience. Laryngoscope. 2014;124(10):2393-2399
16. van der Marel KS, Briaire JJ, Verbist BM, Joemai RM, Boermans PP, Peek FA, et al. Cochlear reimplantation with same device: Surgical and audiologic results. Laryngoscope. 2011;121(7):1517-1524
17. Leung RT, Shivdasani MN, Nayagam DA, Shepherd RK. In vivo and in vitro comparison of the charge injection capacity of platinum macroelectrodes. IEEE Transactions on Biomedical Engineering. 2015;62(3):849-857
18. Stover T, Lenarz T. Biomaterials in cochlear implants. GMS Current Topics in Otorhinolaryngology—Head and Neck Surgery. 2009;8(10):10
19. Shepherd RK, Carter P, Enke YL, Wise AK, Fallon JB. Chronic intracochlear electrical stimulation at high charge densities results in platinum dissolution but not neural loss or functional changes in vivo. Journal of Neural Engineering. 2018;5(10):1741-2552
20. Zeng F-G. Compression and cochlear implants. In: Bacon SP, Popper AN, Fay RR, editors. Compression: From Cochlea to Cochlear Implants. New York: Springer; 2004. pp. 184-220
21. Chatterjee M, Fu QJ, Shannon RV. Effects of phase duration and electrode separation on loudness growth in cochlear implant listeners. The Journal of the Acoustical Society of America. 2000;107(3):1637-1644
22. Stöbich B, Zierhofer CM, Hochmair ES. Influence of automatic gain control parameter settings on speech understanding of cochlear implant users employing the continuous interleaved sampling strategy. Ear and Hearing. 1999;20:104-116
23. Boyle P, Buchner A, Stone M, Lenarz T, Moore B. Comparison of dual-time-constant and fast-acting automatic gain control (AGC) systems in cochlear implants. International Journal of Audiology. 2009;48(4):211-221
24. Khing PP, Swanson BA, Ambikairajah E. The effect of automatic gain control structure and release time on cochlear implant speech intelligibility. PLoS One. 2013;8(11):e82263. pp. 1-11
25. Stone MA, Moore BCJ. Side effects of fast-acting dynamic range compression that affect intelligibility in a competing speech task. The Journal of the Acoustical Society of America. 2004;116:2311-2323
26. Wilson BS, Finley CC, Lawson DT, Wolford RD, Eddington DK, Rabinowitz WM. Better speech recognition with cochlear implants. Nature. 1991;352:236-238
27. Wilson BS, Finley CC, Lawson DT, Wolford RD, Zerbi M. Design and evaluation of a continuous interleaved sampling (CIS) processing strategy for multichannel cochlear implants. Journal of Rehabilitation Research and Development. 1993;30(1):110-116
28. Buechner A, Frohne-Buechner C, Stoever T, Gaertner L, Battmer R, Lenarz T. Comparison of a paired or sequential stimulation paradigm with Advanced Bionics’ high-resolution mode. Otology & Neurotology. 2005;26(5):941-947
29. Patrick JF, Busby PA, Gibson PJ. The development of the Nucleus® Freedom™ cochlear implant system. Trends in Amplification. 2006;10(4):175-200
30. Tye-Murray N, Lowder M, Tyler RS. Comparison of the F0F2 and F0F1F2 processing strategies for the Cochlear Corporation cochlear implant. Ear and Hearing. 1990;11(3):195-200
31. Buchner A, Nogueira W, Edler B, Battmer RD, Lenarz T. Results from a psychoacoustic model-based strategy for the Nucleus-24 and Freedom cochlear implants. Otology & Neurotology. 2008;29(2):189-192
32. Moore BCJ. Psychoacoustics of normal and impaired hearing. British Medical Bulletin. 2002;63:121-134
33. Koch DB, Downing M, Osberger MJ, Litvak L. Using current steering to increase spectral resolution in CII and HiRes 90K users. Ear and Hearing. 2007;28(2 Suppl):38S-41S
34. Kamakura T, Nadol JB Jr. Correlation between word recognition score and intracochlear new bone and fibrous tissue after cochlear implantation in the human. Hearing Research. 2016;339:132-141
35. Seyyedi M, Viana LM, Nadol JB Jr. Within-subject comparison of word recognition and spiral ganglion cell count in bilateral cochlear implant recipients. Otology & Neurotology. 2014;35(8):1446-1450
36. van der Beek FB, Briaire JJ, van der Marel KS, Verbist BM, Frijns JH. Intracochlear position of cochlear implants determined using CT scanning versus fitting levels: Higher threshold levels at basal turn. Audiology & Neuro-Otology. 2016;21(1):54-67
37. Dietz A, Wennstrom M, Lehtimaki A, Lopponen H, Valtonen H. Electrode migration after cochlear implant surgery: More common than expected? European Archives of Oto-Rhino-Laryngology. 2015;12:12
38. van der Marel K, Verbist B, Briaire J, Joemai R, Frijns J. Electrode migration in cochlear implant patients: Not an exception. Audiology & Neuro-Otology. 2012;17(5):275-281
39. Abbas PJ, Brown CJ. Electrically evoked auditory brainstem response: Growth of response with current level. Hearing Research. 1991;51(1):123-137
40. Cohen LT, Richardson LM, Saunders E, Cowan RS. Spatial spread of neural excitation in cochlear implant recipients: Comparison of improved ECAP method and psychophysical forward masking. Hearing Research. 2003;179(1-2):72-87
41. Eisen MD, Franck KH. Electrically evoked compound action potential amplitude growth functions and HiResolution programming levels in pediatric CII implant subjects. Ear and Hearing. 2004;25(6):528-538
42. Van der Beek FB, Frijns J. Population-based prediction of fitting levels for individual cochlear implant recipients. Audiology and Neurotology. 2014;19(20):1-16
43. Hodges AV, Balkany TJ, Ruth RA, Lambert PR, Dolan-Ash S, Schloffman JJ. Electrical middle ear muscle reflex: Use in cochlear implant programming. Otolaryngology—Head and Neck Surgery. 1997;117(3):255-261
44. Allum JH, Shallop JK, Hotz M, Pfaltz CR. Characteristics of electrically evoked ‘auditory’ brainstem responses elicited with the Nucleus 22-electrode intracochlear implant. Scandinavian Audiology. 1990;19(4):263-267
45. Govaerts P, Vaerenberg B, De Ceulaer G, Daemers K, De Beukelaer C, Schauwers K. Development of a software tool using deterministic logic for the optimization of cochlear implant processor programming. Otology & Neurotology. 2010;31(6):908-918
46. Gajadeera EA, Galvin KL, Dowell RC, Busby PA. The change in electrical stimulation levels during 24 months postimplantation for a large cohort of adults using the Nucleus® cochlear implant. Ear and Hearing. 2017;38(3):357-367
47. Davis A. Hearing in Adults. London: Whurr; 1995
48. Skarzynski H, Lorens A, Matusiak M, Porowski M, Skarzynski P, James C. Partial deafness treatment with the Nucleus straight research array cochlear implant. Audiology & Neuro-Otology. 2012;17(2):82-91
49. Lenarz T, Stover T, Buechner A, Lesinski-Schiedat A, Patrick J, Pesch J. Hearing conservation surgery using the Hybrid-L electrode. Results from the first clinical trial at the Medical University of Hannover. Audiology & Neuro-Otology. 2009;14(Suppl 1):22-31
50. Gantz BJ, Hansen MR, Turner CW, Oleson JJ, Reiss LA, Parkinson AJ. Hybrid 10 clinical trial: Preliminary results. Audiology & Neuro-Otology. 2009;1:32-38
51. Arndt S, Laszig R, Aschendorff A, Hassepass F, Beck R, Wesarg T. Cochlear implant treatment of patients with single-sided deafness or asymmetric hearing loss. German version. Hals-Nasen-Ohrenheilkunde. 2017;65(7):586-598
52. Mertens G, De Bodt M, Van de Heyning P. Cochlear implantation as a long-term treatment for ipsilateral incapacitating tinnitus in subjects with unilateral hearing loss up to 10 years. Hearing Research. 2016;331:1-6
53. Shannon RV, Fayad J, Moore J, Lo WW, Otto S, Nelson RA, et al. Auditory brainstem implant: II. Postsurgical issues and performance. Otolaryngology and Head and Neck Surgery. 1993;108(6):634-642
54. Otto SR, Shannon RV, Wilkinson EP, Hitselberger WE, McCreery DB, Moore JK, et al. Audiologic outcomes with the penetrating electrode auditory brainstem implant. Otology & Neurotology. 2008;29(8):1147-1154
55. Shannon RV. Auditory implant research at the House Ear Institute 1989-2013. Hearing Research. 2015;322:57-66
56. Colletti V, Shannon R, Carner M, Veronese S, Colletti L. Outcomes in nontumor adults fitted with the auditory brainstem implant: 10 years’ experience. Otology & Neurotology. 2009;30(5):614-618
57. Lenarz T, Lim HH, Reuter G, Patrick JF, Lenarz M. The auditory midbrain implant: A new auditory prosthesis for neural deafness-concept and device description. Otology & Neurotology. 2006;27(6):838-843
58. Rode T, Hartmann T, Hubka P, Scheper V, Lenarz M, Lenarz T, et al. Neural representation in the auditory midbrain of the envelope of vocalizations based on a peripheral ear model. Frontiers in Neural Circuits. 2013;7(166):18
59. Middlebrooks JC, Snyder RL. Intraneural stimulation for auditory prosthesis: Modiolar trunk and intracranial stimulation sites. Hearing Research. 2008;242(1-2):52-63
60. Finley C, Holden T, Holden L, Whiting B, Chole R, Neely G, et al. Role of electrode placement as a contributor to variability in cochlear implant outcomes. Otology & Neurotology. 2008;29(7):920-928
61. Aschendorff A, Kromeier J, Klenzner T, Laszig R. Quality control after insertion of the Nucleus Contour and Contour Advance electrode in adults. Ear and Hearing. 2007;28(2 Suppl):75S-79S
62. Fitzpatrick DC, Campbell AP, Choudhury B, Dillon MT, Forgues M, Buchman CA, et al. Round window electrocochleography just before cochlear implantation: Relationship to word recognition outcomes in adults. Otology & Neurotology. 2014;35(1):64-71
63. Dhanasingh A, Jolly C. An overview of cochlear implant electrode array designs. Hearing Research. 2017;356:93-103
64. Todt I, Mittmann P, Ernst A. Hearing preservation with a Midscalar electrode comparison of a regular and steroid/pressure optimized surgical approach in patients with residual hearing. Otology & Neurotology. 2016;37:e349-e352
65. Torres R, Jia H, Drouillard M, Bensimon JL, Sterkers O, Ferrary E, et al. An optimized robot-based technique for cochlear implantation to reduce array insertion trauma. Otolaryngology and Head and Neck Surgery. 2018;159(5):900-907
66. Giardina CK, Brown KD, Adunka OF, Buchman CA, Hutson KA, Pillsbury HC, et al. Intracochlear electrocochleography: Response patterns during cochlear implantation and hearing preservation. Ear and Hearing. 2019. (In Press)
67. Bester CW, Campbell L, Dragovic A, Collins A, O’Leary SJ. Characterizing electrocochleography in cochlear implant recipients with residual low-frequency hearing. Frontiers in Neuroscience. 2017;11(141):1-8
68. Harris MS, Riggs WJ, Koka K, Litvak LM, Malhotra P, Moberly AC, et al. Real-time intracochlear electrocochleography obtained directly through a cochlear implant. Otology & Neurotology. 2017;38(6):e107-e113
69. Jia H, Francois F, Bourien J, Eybalin M, Lloyd RV, Van De Water TR, et al. Prevention of trauma-induced cochlear fibrosis using intracochlear application of anti-inflammatory and antiproliferative drugs. Neuroscience. 2016;316:261-278
70. Wilk M, Hessler R, Mugridge K, Jolly C, Fehr M, Lenarz T, et al. Impedance changes and fibrous tissue growth after cochlear implantation are correlated and can be reduced using a dexamethasone eluting electrode. PLoS One. 2016;11(2):e0147552. p. 19
71. Farahmand Ghavi F, Mirzadeh H, Imani M, Jolly C, Farhadi M. Corticosteroid-releasing cochlear implant: A novel hybrid of biomaterial and drug delivery system. Journal of Biomedical Materials Research. Part B, Applied Biomaterials. 2010;94(2):388-398
72. Schindler JS, Schindler RA. Neurotrophin action in the cochlea: Implications for cochlear implants. Advances in Oto-Rhino-Laryngology. 1997;52:8-14
73. Leake PA, Stakhovskaya O, Hetherington A, Rebscher SJ, Bonham B. Effects of brain-derived neurotrophic factor (BDNF) and electrical stimulation on survival and function of cochlear spiral ganglion neurons in deafened, developing cats. Journal of the Association for Research in Otolaryngology. 2013;14(2):187-211
74. Taylor RR, Filia A, Paredes U, Asai Y, Holt JR, Lovett M, et al. Regenerating hair cells in vestibular sensory epithelia from humans. eLife. 2018;18(7):34817
75. Wan G, Gomez-Casati ME, Gigliello AR, Liberman MC, Corfas G. Neurotrophin-3 regulates ribbon synapse density in the cochlea and induces synapse regeneration after acoustic trauma. eLife. 2014;20(3):03564
76. Whelan HT, Smits RL Jr, Buchman EV, Whelan NT, Turner SG, Margolis DA, et al. Effect of NASA light-emitting diode irradiation on wound healing. Journal of Clinical Laser Medicine & Surgery. 2001;19(6):305-314
77. Bartos A, Grondin Y, Bortoni ME, Ghelfi E, Sepulveda R, Carroll J, et al. Pre-conditioning with near infrared photobiomodulation reduces inflammatory cytokines and markers of oxidative stress in cochlear hair cells. Journal of Biophotonics. 2016;9(11-12):1125-1135
78. Boyle PJ, Moore BCJ. Balancing cochlear implant AGC and near-instantaneous compression to improve perception of soft speech. Cochlear Implants International. 2015;16(S1):S9-S11
79. Holden LK, Finley CC, Firszt JB, Holden TA, Brenner C, Potts LG, et al. Factors affecting open-set word recognition in adults with cochlear implants. Ear and Hearing. 2013;34(3):342-360
80. Baudhuin J, Cadieux J, Firszt J, Reeder R, Maxson J. Optimization of programming parameters in children with the Advanced Bionics cochlear implant. Journal of the American Academy of Audiology. 2012;23(5):302-312
81. Fielden CA, Kluk K, Boyle PJ, McKay CM. The perception of complex pitch in cochlear implants: A comparison of monopolar and tripolar stimulation. The Journal of the Acoustical Society of America. 2015;138(4):2524-2536
82. Srinivasan AG, Padilla M, Shannon RV, Landsberger DM. Improving speech perception in noise with current focusing in cochlear implant users. Hearing Research. 2013;299:29-36
83. Smith ZM, Parkinson WS, Long CJ. Multipolar current focusing increases spectral resolution in cochlear implants. Conference Proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2013;9(10):6610121
84. Kalkman RK, Briaire JJ, Frijns JH. Stimulation strategies and electrode design in computational models of the electrically stimulated cochlea: An overview of existing literature. Network. 2016;27(2-3):107-134
85. Muller J, Brill S, Hagen R, Moeltner A, Brockmeier SJ, Stark T, et al. Clinical trial results with the MED-EL fine structure processing coding strategy in experienced cochlear implant users. ORL: Journal for Otorhinolaryngology and Its Related Specialties. 2012;74(4):185-198
86. Gagne JP, Besser J, Lemke U. Behavioral assessment of listening effort using a dual-task paradigm. Trends in Hearing. 2017;21:2331216516687287
87. Firszt JB, Reeder RM, Holden LK, Dwyer NY. Results in adult cochlear implant recipients with varied asymmetric hearing: A prospective longitudinal study of speech recognition, localization, and participant report. Ear and Hearing. 2018;39(5):845-862
88. Kollmeier B, Warzybok A, Hochmuth S, Zokoll MA, Uslar V, Brand T, et al. The multilingual matrix test: Principles, applications, and comparison across languages: A review. International Journal of Audiology. 2015;2:3-16
89. Boyle P, Nunn T, O’Connor A, Moore B. STARR: A speech test for evaluation of the effectiveness of auditory prostheses under realistic conditions. Ear and Hearing. 2013;34(2):203-212
90. Haumann S, Lenarz T, Buchner A. Speech perception with cochlear implants as measured using a roving-level adaptive test method. ORL: Journal for Otorhinolaryngology and Its Related Specialties. 2010;72(6):312-318
91. Dincer D’Alessandro H, Ballantyne D, Boyle PJ, De Seta E, DeVincentiis M, Mancini P. Temporal fine structure processing, pitch, and speech perception in adult cochlear implant recipients. Ear and Hearing. 2018;39(4):679-686
92. Spahr A, Dorman M, Litvak L, Van Wie S, Gifford R, Loizou P, et al. Development and validation of the AzBio sentence lists. Ear and Hearing. 2012;33(1):112-117
93. Kitterick PT, Bailey PJ, Summerfield AQ. Benefits of knowing who, where, and when in multi-talker listening. The Journal of the Acoustical Society of America. 2010;127(4):2498-2508
94. Veugen LC, Chalupper J, Snik AF, van Opstal AJ, Mens LH. Frequency-dependent loudness balancing in bimodal cochlear implant users. Acta Oto-Laryngologica. 2016;136(8):775-781
95. Dorman MF, Cook Natale S, Agrawal S. The value of unilateral CIs, CI-CROS and bilateral CIs, with and without beamformer microphones, for speech understanding in a simulation of a restaurant environment. Audiology & Neuro-Otology. 2018;23(5):270-276
96. Snapp HA, Hoffer ME, Spahr A, Rajguru S. Application of wireless contralateral routing of signal technology in unilateral cochlear implant users with bilateral profound hearing loss. Journal of the American Academy of Audiology. 2018;29(10):17121
97. Friedmann DR, Peng R, Fang Y, McMenomey SO, Roland JT, Waltzman SB. Effects of loss of residual hearing on speech performance with the CI422 and the hybrid-L electrode. Cochlear Implants International. 2015;26:26
98. Cullington H, Kitterick P, Weal M, Margol-Gromada M. Feasibility of personalised remote long-term follow-up of people with cochlear implants: A randomised controlled trial. BMJ Open. 2018;8(4):2017-019640
