Open access peer-reviewed chapter

Perceptual Learning of Uncategorized Arabic Phonemes among Congenitally Deaf, Non-native Children with Cochlear Implants

Written By

Farheen Naz Anis and Cila Umat

Submitted: 26 December 2022 Reviewed: 07 March 2023 Published: 10 June 2023

DOI: 10.5772/intechopen.110808

From the Edited Volume

Latest Advances in Cochlear Implant Technologies and Related Clinical Applications

Edited by Stavros Hatzopoulos, Andrea Ciorba and Piotr H. Skarzynski


Abstract

The advancement in cochlear implant (CI) technologies, and the benefits CIs bring to their users, have far exceeded expectations. Speech perception remains the focus of many cochlear implant clinical research studies to ensure the technology maximizes the benefits obtained by CI users. This chapter discusses the perception of non-native sounds among congenitally deaf pediatric CI users, with specific emphasis on Arabic consonants. Arabic is used and learned by billions of non-native speakers worldwide. Non-native auditory signals are perceived differently by children with CI because of speech processor signal processing and native-language learning effects. This study measured the perceptual learning of uncategorized-dispersed-assimilated Arabic consonants in a group of non-native children with CI using FizBil©, a newly developed, customized, bottom-up software training module. The framework and a hypothetical pathway are also discussed.

Keywords

  • cochlear implants
  • children
  • uncategorized non-native Arabic phonemes
  • perceptual learning
  • bottom-up training

1. Introduction

Superior speech perception is perhaps the most significant outcome of cochlear implantation and correlates directly with linguistic, social, and learning outcomes [1, 2, 3, 4]. Consequently, children who function well auditorily with a cochlear implant (CI) can attend mainstream schools [1, 5, 6, 7] and learn a new language [8, 9, 10, 11, 12, 13] within the normative range [14, 15, 16, 17]. Furthermore, research shows that children with CI can perform well on non-native perceptual listening tasks [11, 18, 19, 20, 21, 22]. Conversely, there is evidence that both normal-hearing children [23] and deaf children with CI struggle with non-native phoneme perception [6, 7], production [24], and language learning [12, 25, 26].

A speech sound is a complex signal that carries information in the form of acoustic cues. These acoustic cues (usually in combination) represent the phonological categories, that is, place of articulation, manner of articulation, and voicing. To recognize speech sounds, listeners need to categorize these acoustic signals. Hearing with a cochlear implant is not analogous to acoustic hearing: the auditory brain of children with CI processes signals differently from that of their normal-hearing (NH) peers. In CI users, input signals are coded as electrical pulses, whereas the brains of normal-hearing listeners process acoustic input signals. In a typical auditory pathway, complex temporal and spatial excitation patterns represent a sound signal across approximately 20,000 auditory neurons [27]. In contrast, the electrode arrays currently available in commercial CI systems offer a maximum of 22 excitation points for delivering sound signals to the auditory brain. A comparison of signal pathways for sound processing via acoustic hearing and a cochlear implant is shown in Figure 1.

Figure 1.

Comparison of signal pathways for sound processing via normal acoustic hearing and electrical hearing via a cochlear implant.

Therefore, children with CI must learn speech perception with poorer frequency resolution than their NH peers [28, 29, 30]. Apart from poorer frequency resolution, cross-channel interactions [30], electrode discrimination ability, and place-pitch shifting due to electrode insertion position [28, 29, 31] also contribute to the poorer speech perception scores of children with CI compared with their NH peers. Phoneme perception is a categorical process that involves classifying and grouping the information-bearing acoustic signal. Phonological feature information is conveyed via multiple cues. For example, the voicing and manner features are transmitted via temporal cues, namely the envelope and periodicity cues, whereas temporal fine structure and spectral-domain cues, that is, formant transitions, code the place of articulation [30, 31, 32]. Research shows that phoneme perception scores for children with CI are generally poorer than those of NH children matched for hearing age. The period of auditory deprivation may play an important role [32] in this poorer native-language speech perception performance. Non-native children with CI learn to produce and maintain the voicing of articulation of native and non-native phonemes [8, 18, 19]. The literature reveals that CI users show systematic deficits in perceiving phonological features, especially the place feature [28, 30, 32], which relies heavily on spectral information. For CI listeners, who are less able to accurately discriminate the place of articulation even of native phonemes [30, 32, 33, 34, 35], the difficulty is more pronounced for non-native phonological features.

Non-Arab Muslims around the globe listen to, read, and learn Al-Quran daily to fulfill their religious obligations and prayers. Likewise, children start listening to Al-Quran from a young age when they visit mosques and religious gatherings with their parents. A large body of research indicates that, in normal hearing (NH), the perception of non-native phonemes is influenced by the native-language phonological inventory [36, 37, 38, 39, 40, 41, 42, 43]. In recent years, there has been renewed interest in non-native perception among CI users [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. Generally, according to the perceptual assimilation model for non-native languages (PAM-L2), a typical hearing listener’s capacity to perceive a non-native phoneme is influenced by the phonological features of their native language [43]. Listeners use native articulatory patterns to detect similarities between non-native phonemes and their native language [37, 38, 39, 40, 43, 44, 45, 46], a process known as assimilation. The possible assimilation categories for non-native phonemes fall within the perceptual space of the native language. The Malay and Arabic phonological repertoires are substantially different [37, 39]: Arabic has unique, distinctive phonological features that do not exist in the Malay repertoire. Therefore, Malay speakers face difficulties recognizing these Arabic sounds in Al-Quran [36, 47]. Evidence shows that the posterior and emphatic fricatives ([/q/ ق], [/χ/ خ], [/ʁ/ غ], [/ħ/ ح], [/ʕ/ ع], [/h/ ﻫ], and [/ðˁ/ ظ]) are the most difficult sounds for NH school-age children to utter accurately [6, 23, 24]. However, in Malaysia, children are exposed to Arabic phonemes in early childhood, that is, at 5.0–6.0 years [23, 24]. Consequently, it is anticipated that NH children can develop good perception and recognition skills for Arabic phonemes and acquire reading skills in a different writing script. Table 1 compares the Arabic and Malay phonological inventories.

| Manner | Bilabial | Labio-dental | Dental | Alveolar | Post-alveolar | Palatal | Velar | Uvular | Pharyngeal | Glottal |
|---|---|---|---|---|---|---|---|---|---|---|
| Plosive | p b | | t d; tˁ dˁ | | | | k | q | | Ɂ |
| Nasal | m | | | n | | ɲ | ŋ | | | |
| Trill | | | | r | | | | | | |
| Fricative | | (f) | θ ð; ðˁ | s (z) | (ʃ) | | ɣ | χ | ħ ʕ | h |
| Affricate | | | | | ʤ | | | | | |
| Glide | w | | | | | j | | | | |
| Liquid | | | | l | | | | | | |

The IPA symbol to the right of a pair represents a voiced phoneme, while the symbol to the left represents a voiceless phoneme.

Key: black regular = occurs only in the Malay inventory; blue bold = occurs in both language inventories (familiar to Malay listeners); italic in brackets = borrowed words only in the Malaysian inventory; red italic = occurs only in the Arabic inventory (unfamiliar for Malay listeners). These cons

Table 1.

Comparison of Malay-Arabic phonological inventory.

Source: [6, 7, 47].

It might therefore be expected that Malay listeners have only limited perceptual difficulty with Arabic phonemes, as many Malaysians study Arabic as part of the national school curriculum, and it is plausible that Arabic phonological categories fall within the Malay perceptual space given the frequent use of Arabic in Malay culture. However, our earlier study [6, 7] showed that, for unfamiliar phonemes, the transmission of phonological feature information depends on category formation in the listener’s perceptual space. Unfamiliar phonemes with the unique category of secondary articulation (/dˁ, ðˁ, sˁ, tˁ/) showed dispersed uncategorized assimilation. Unfamiliar phonemes (/q, ħ, ʕ/) with close phonological boundaries to some familiar consonants (/k, h, Ɂ/) showed focalized uncategorized assimilation. Unfamiliar phonemes with a unique subcategory, without secondary articulation but with close phonological boundaries (/χ, ʁ/), showed clustered uncategorized assimilation.

Arabic, the language of Al-Quran, is learned by 2.2 billion non-native listeners and readers worldwide. Parents in non-Arab countries send their children to special classes to learn Arabic listening and reading skills. In Malaysia, as in many non-Arab Muslim countries, Arabic learning is linked with religion and taught in religious schools [23, 24]. Moreover, parents of children with CI consider Al-Quran education as fundamental for their CI children as for their normal-hearing siblings. It is therefore essential to understand the Arabic phoneme perception process in children with CI. This study was designed to answer whether perceptual learning of unfamiliar non-native phonological features (posterior place or emphatic manner) [6, 7] occurs with customized training. FizBil©, a software-based, bottom-up training module with specific interstimulus intervals for children with CI, was developed, and a formative evaluation was conducted on a small group of NH and CI children. The training results from the one child with CI who completed the 12-week program are reported and discussed.


2. Conceptual framework for design, development, and formative evaluation

The following underlying theories were considered in designing this perceptual training module, which involved identification and discrimination tasks. In addition, the hypothetical pathway shown in Figure 2 was proposed as the conceptual framework to explain the results of this study. Figure 2 demonstrates that non-native speech perception depends on categorizing composite sound signals within the auditory brain. Perceptual categorization is based on familiarity: native phonological feature subcategories build up within the listener’s perceptual space with signal exposure. According to signal detection theory (SDT), the perceptual categorization of a phoneme is a decision-making process [52, 53] established on the distance between the perceptual peaks of two signals [54, 55, 56]. Identification tasks [39, 41, 57, 58, 59, 60] help listeners attend to relevant between-category differences via top-down processing. In contrast, discrimination tasks [42, 48, 49, 50, 51, 57, 61] focus on within-category variability, that is, bottom-up perception [59]. As illustrated in Figure 2, children with CI need discrimination training to sharpen their categorization cues [41, 48, 49, 50, 51, 60, 61]. Our earlier study [6, 7] revealed more than one phonological feature category mismatch among children with CI. A processing interval of 10–500 ms is needed for the discrimination task [49, 62, 63, 64, 65, 66, 67].

Figure 2.

The conceptualized pathway of signal detection and assimilation of non-native phonemes involves top-down and bottom-up signal processing. Source: [29, 31, 35, 48, 49, 50, 51].

According to the American Psychological Association [68], “perceptual learning occurs when repeated exposure enhances the ability to discriminate between two (or more) otherwise confusable stimuli.” Perceptual learning is thus the trained ability to discriminate information from closely related signals. Speech perceptual difficulties in non-native listening tasks are well studied and documented [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 69, 70, 71, 72, 73, 74, 75, 76]. Two models of speech perception are particularly well studied. First, the speech learning model (SLM) suggests a strong positive correlation between the perception of non-native sounds and the accuracy of their production [70, 71]. Second, the perceptual assimilation model (PAM) further elaborates that non-native speech perception difficulties are related to phonological or acoustic feature perception [36, 38, 39, 40, 45, 69] and stem from the maturation of the native-language perceptual space [37, 43, 44]. Hence, improving perception could improve non-native phoneme production [36, 37, 38, 39, 40, 41, 42, 43, 44, 70, 71, 72, 75].

2.1 Study objectives

The specific objectives of this study were to design (Phase I) and develop (Phase II) a software-based training module, to conduct a formative evaluation (Phase III), and to measure the perceptual learning (Phase IV) of Arabic phonemes by Malay children with CI using discrimination and identification tasks. Specifically, the objectives were:

  1. To determine the specific interstimulus interval (ISI) needed by children with CI for speech discrimination tasks.

  2. To evaluate the program code for the software FizBil©.

  3. To examine the suitability of the graphical user interface for NH children.

  4. To scrutinize the suitability of the graphical user interface for children with CI.

  5. To examine whether the customized bottom-up training using identification and discrimination tasks improved the perceptual learning index (d’ score).


3. Methodology

3.1 Research design

This research was a design, development, and training experiment. The study consisted of four phases in which four pilot studies and one case study were carried out. Each study was a listening experiment in which stimuli were presented under controlled conditions and the listeners’ responses (tokens) were collected. The overall methodological design and a summary of all the pilot studies are shown in Table 2.

| Phase | Experiment | Native | Hearing | N | Purpose | Experimental condition | Tokens |
|---|---|---|---|---|---|---|---|
| I | Pilot A | Malay | CI | 2 | To determine the suitable inter-stimulus interval (ISI) for CI children’s training | 12 stimuli × 3 phoneme pairs × 3 tracks × 4 RT = 432 tokens; 4 conditions (discrimination), 1 condition (identification) | 3456 |
| II | Pilot B | Arab | NH | 2 | To check the program code | 30 stimuli × 4 phonemes × 3 contrasts × 2 tasks = 720 tokens | 1440 + 720 |
| II | Pilot C | Malay | NH | 5 | To check the suitability of the user interface for children | 30 stimuli × 1 phoneme × 3 contrasts × 2 tasks = 180 tokens | 900 |
| III | Pilot D | Malay | CI | 2 | To examine the suitability of the user interface for CI children | 30 stimuli × 4 phonemes × 3 contrasts × 2 tasks (2 conditions) | 1440 |
| IV | Training case study | Malay | CI | 1 | To examine whether customized bottom-up training using identification and discrimination tasks improved the perceptual learning index (d′ score) | 30 stimuli × 4 phonemes × 3 contrasts × 2 tasks (pretest) | 720 |
| | | | | | | 30 stimuli × 4 phonemes × 3 contrasts × 2 tasks × 4 training sessions with feedback | 2880 |
| | | | | | | 30 stimuli × 4 phonemes × 3 contrasts × 2 tasks (posttest) | 720 |

Table 2.

Overview of methodological research design for the study.

3.2 Research location

Pilot studies A and D, which involved children with CI, were conducted in the soundproof audiology rooms in the Audiology Clinic, Universiti Kebangsaan Malaysia (UKM), Jalan Temerloh, Kuala Lumpur. Pilot studies B and C, which involved NH children, were conducted at the participants’ homes with test items presented via the loudspeaker on the laptop.

3.3 Demographic characteristics of research participants

The research participants in this study comprised two groups: (i) the test group (children with CI) and (ii) the control group (NH children).

3.3.1 Test Group (CI: Pilot studies A, D, and training study)

A total of five Malay deaf children with CI were recruited from the UKM Cochlear Implant Program. Two children participated in pilot study A to determine the interstimulus interval for the discrimination task (ISI-d) and for the speech recognition task (ISI-r). Another two children participated in pilot study D, in which the usability of the training and testing modes was evaluated. Finally, one child completed the 12-week training regime in Phase IV.

The inclusion criteria for the participants with CI were detailed below:

  1. Prelingually deaf Malay Muslim children with CI.

  2. Had at least 4 years of hearing experience with their implants.

  3. Using auditory-verbal communication mode.

  4. Attended mainstream school and religious classes (Kelas Agama Fardu Ain: KAFA) in the Malaysian Islamic Education curriculum to read the Holy Quran (at the time of the study).

  5. They did not have any additional disabilities besides hearing impairment.

The exclusion criteria for the participants with CI were detailed below:

  1. Those who were not using the Nucleus 24 cochlear implant system or its later generation.

  2. Those with partial electrode insertion.

3.3.2 Control group (NH: Pilot studies B and C)

The study was advertised to recruit NH participants. They must fulfill the inclusion criteria below to participate in this study:

  1. The child had normal hearing and speech development, as reported by the parents.

  2. The child had no history of language disorders or learning difficulties.

  3. Chronological age between 7 and 10 years old.

  4. All Malay NH participants attended school and KAFA classes only and learned Arabic and Islamic education in school following the national Islamic curriculum.

  5. Native Arabic children had relocated to Malaysia and studied at an Arabic international school in Kuala Lumpur.

The demographic characteristics of study participants are summarized in Table 3.

| S# | ID | C. Age | Nat | H. status | H. Age | Speech processor | Experiment | Study phase |
|---|---|---|---|---|---|---|---|---|
| 1 | C101 | 8 yrs. | M | CI | 6 yrs. | Nucleus 5 | Pilot A | Design |
| 2 | C102 | 10 yrs. | M | CI | 8 yrs. | Nucleus 5 | Pilot A | Design |
| 3 | A101 | 12 yrs. | A | NH | 12 yrs. | NA | Pilot B | Design |
| 4 | A102 | 7 yrs. | A | NH | 7 yrs. | NA | Pilot B | Design |
| 5 | N101 | 8 yrs. | M | NH | 8 yrs. | NA | Pilot C | Development |
| 6 | N102 | 8 yrs. | M | NH | 8 yrs. | NA | Pilot C | Development |
| 7 | N103 | 8 yrs. | M | NH | 8 yrs. | NA | Pilot C | Development |
| 8 | N104 | 10 yrs. | M | NH | 10 yrs. | NA | Pilot C | Development |
| 9 | N105 | 9 yrs. | M | NH | 9 yrs. | NA | Pilot C | Development |
| 10 | CI-201 | 11 yrs. | M | CI | 9 yrs. | Nucleus Freedom | Pilot D | Formative evaluation |
| 11 | CI-203 | 7 yrs. | M | CI | 4 yrs. | Nucleus 6 | Pilot D | Formative evaluation |
| 12 | CI-202 | 11 yrs. | M | CI | 8 yrs. | Nucleus 6 | Pretest, training, posttest | Implementation |

Table 3.

Demographic characteristics of participants of the study and phase involved.

Key: NA = not applicable; C. Age = Chronological Age; H. Age = hearing age; Nat = native; M = Malay; A = Arab.

3.4 Instrument, stimuli, and room calibration

Arabic phonemes [/t/ ت], [/d/ د], [/k/ ك], and [/ʃ/ ش] were used in pilot study A, whereas [/ħ/ ح], [/sˁ/ ص], [/ðˁ/ ظ], and [/ʁ/ غ] were used in all other studies, that is, pilot studies B, C, and D and the training study. All phonemes were recorded and normalized for loudness balance; for more detail on stimulus preparation, see [6]. The conceptualized pathway shown in Figure 2 was considered in designing the software. The design included both discrimination and identification tasks. In both tasks, determining the optimum “distance” between the stimuli in a test track was important to ensure that participants with CI could hear the stimuli as two separate inputs.

3.5 Phase I: Determination of the interstimulus intervals (ISI)

In this experiment, the discrimination task was a two-alternative forced-choice (2AFC) task in which stimuli were presented in pairs in the AX format. That is, stimulus A (the target sound) and stimulus X (its minimal pair) were presented with a very short ISI (300, 350, or 400 ms). The listener was required to judge whether the second stimulus was the same as or different from the first. Same pairs took two forms, for example, /t/−/t/ and /d/−/d/, while different pairs were presented as /t/−/d/ or /d/−/t/.
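For concreteness, this AX trial structure can be sketched in Python. The function name `ax_trials` and the balanced same/different counts are illustrative of the paradigm described above, not code from the actual FizBil© software:

```python
import random

def ax_trials(a, b, n_same=12, n_diff=12, seed=0):
    """Build a balanced AX discrimination track for one minimal pair.

    'Same' trials pair a stimulus with itself (AA, BB); 'different'
    trials present the two phonemes in both orders (AB, BA).
    """
    rng = random.Random(seed)
    same = [(a, a), (b, b)] * (n_same // 2)
    diff = [(a, b), (b, a)] * (n_diff // 2)
    trials = same + diff
    rng.shuffle(trials)  # randomize presentation order within the track
    return trials

trials = ax_trials("t", "d")
print(len(trials))  # 24 trials: 12 same + 12 different
```

Counterbalancing both orders (AB and BA) prevents the listener from using presentation order as a cue.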

3.5.1 Preparation of the stimuli

The speech materials for pilot study A consisted of consonant-vowel (/Ca/) tokens with the four Arabic phonemes /t, d, ʃ, k/ identified as familiar and better-perceived phonemes among the CI children [6]. The auditory stimuli were further prepared with an open-source digital audio editor, Audacity version 2.1.3 for Windows [77]. For the discrimination task, the two phonemes of a pair were joined with one of three ISI-d values (<500 ms), that is, 400, 350, or 300 ms, to activate bottom-up processing [48, 49, 50, 51]. Figure 3 illustrates the ISI-d and ISI-r for both tasks.

Figure 3.

Inter-stimulus-interval for discrimination (ISI-d) and intra-stimulus-interval for recognition (ISI-r) presentations.
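In this study the stimulus pairs were assembled manually in Audacity; a programmatic equivalent of joining two stimuli with a silent ISI-d gap might look like the sketch below. The 44.1 kHz sampling rate and the dummy 100-ms stimuli are assumptions for illustration only:

```python
def concat_with_isi(stim_a, stim_b, isi_ms, fs=44100):
    """Join two stimuli (lists of samples) with a silent gap of isi_ms milliseconds."""
    gap = [0.0] * int(fs * isi_ms / 1000)  # silence between the pair members
    return stim_a + gap + stim_b

# two dummy 100-ms "phonemes" at 44.1 kHz (4410 samples each)
a = [0.1] * 4410
b = [-0.1] * 4410

pair = concat_with_isi(a, b, isi_ms=350)
print(len(pair))  # 4410 + 15435 + 4410 = 24255 samples
```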

An interstimulus interval for recognition (ISI-r) of 4000, 3500, 3000, or 2500 ms was applied to each track. Three pairs of phonemes were presented in each presentation block with one ISI-d (e.g., 400 ms) and one ISI-r (e.g., 4000 ms). This generated 12 blocks of presentations: 12 blocks × 3 consonant pairs × 3 ISI-d = 108 tokens for each combination pair. The details are shown in Table 4.

| Stimuli | Task (2AFC) | Presentation order | Number of presentations | ISI for presentations | Tokens per participant |
|---|---|---|---|---|---|
| /ʃ/−/k/, /t/−/d/, /t/−/k/ | Discrimination | AB, BA, BB, AA | 12 same and 12 different for each pair | ISI-d & ISI-r | 1296 |
| /ʃ/, /k/, /t/, /d/ | Identification | A & B | 12 presentations for each stimulus | ISI-r | 432 |

Table 4.

Details of experimental paradigm for pilot study a to determine the best inter-stimulus intervals for discrimination (ISI-d) and identification (ISI-r) tasks.

*2AFC = Two alternative forced choice; ISI = Inter-stimulus interval; ISI-d = Inter-stimulus interval for discrimination; ISI-r = inter-stimulus interval for response in identification task.

3.5.2 Data collection

The participant was seated comfortably and provided with a response sheet and a pencil. The stimuli were presented via a loudspeaker positioned at approximately 30° azimuth and 1 m from the participant, at around 65 dB SPL in auditory-alone mode. The tasks were explained verbally to the participant in Malay, and written instructions were given to the parents. Before data collection for each experiment (discrimination and identification), participants were conditioned to the task. A total of 3456 tokens were collected from the two children with CI during the test presentations. The participants were tested using their CIs with their regularly used speech processor settings and programs.

3.5.3 Data analysis for pilot study A

The proportion of false alarms (pFA), which represents response bias, was calculated using Equation (1). The pFA was plotted against ISI-d and ISI-r for the discrimination and identification tasks, respectively.

pFA = FA / N (1)

where FA is the number of false alarms and N is the number of stimuli presented.
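Equation (1) amounts to a single division; a minimal sketch with illustrative counts (e.g., 9 false alarms in 100 presentations gives pFA = 0.09):

```python
def proportion_false_alarms(false_alarms, n_presented):
    """Equation (1): response bias as the proportion of false alarms."""
    return false_alarms / n_presented

print(proportion_false_alarms(9, 100))  # 0.09
```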

3.5.4 Results for pilot study A

Figure 4 illustrates the proportion of false alarms (pFA) at the different interstimulus intervals tested during the discrimination and identification tasks. Overall, an ISI-d of 350 ms showed the lowest pFA for the discrimination task (0.09) and was therefore selected for further study. For the identification task, the different ISI-r values did not affect the responses, as the response bias (pFA) remained low (0.10–0.17). Therefore, 350 ms was chosen for the ISI-d and 2500 ms for the ISI-r.

Figure 4.

Proportion of false alarm (pFA) at different test inter-stimulus intervals during the discrimination (ISI-d) and identification (ISI-r) tasks.

3.6 Phase II: development of the FizBil© software

Following the determination of the ISI-d and ISI-r in pilot study A, these values were used in developing a software-based perceptual training module named FizBil©. In Phase II of this study, the FizBil© software was designed to engage non-native deaf listeners to attend to perceptual cues and gradually learn the categorical perception of non-native phonological features. It can also be used as an experimental platform for perceptual training with any sound stimuli.

An extensive literature review of perceptual training software and methodology was conducted. One-to-one meetings were held between the software programmer and the researcher to discuss the software design. FizBil© was designed as a user-friendly platform. Figure 5 shows the software logo.

Figure 5.

Logo for FizBil© in PNG (A) and vector (B) formats.

3.6.1 Selection of phonemes for stimuli

The speech materials consisted of the uncategorized-dispersed-assimilated Arabic phonemes [6] /ħ, sˁ, tˁ, ʁ/ in the consonant-vowel /Ca/ format. Each phoneme was paired with another phoneme that varied in a single phonological feature, that is, a minimal pair (only one phonological component differing, wherever possible). For example, /t/ and /d/ are a minimal pair as they differ in voicing but share the same manner and place of articulation. Table 5 presents the phoneme contrasts in each category.

| Phoneme | Phonological features (manner, place, voicing) | Minimal pairs for phonological-articulatory subcategories | Discrimination stimuli (auditory / visual display) | Identification stimuli (auditory / visual display) |
|---|---|---|---|---|
| [/ħ/ ح] | Pharyngeal fricative, voiceless | [/q/ ق], [/χ/ خ], [/ʕ/ ع] | χ−χ, ħ−ħ (same); χ−ħ, ħ−χ (different) | ħ − χ / ح & خ |
| [/ʁ/ غ] | Uvular fricative, voiced | [/q/ ق], [/ʕ/ ع], [/χ/ خ] | ʁ−ʁ, q−q (same); ʁ−q, q−ʁ (different) | ʁ − q / ق & غ |
| [/tˁ/ ط] | Dental plosive with emphasis*, voiceless | [/sˁ/ ص], [/q/ ق], [/dˁ/ ض] | tˁ−tˁ, dˁ−dˁ (same); tˁ−dˁ, dˁ−tˁ (different) | tˁ & dˁ / ض & ط |
| [/sˁ/ ص] | Alveolar fricative with emphasis*, voiceless | [/tˁ/ ط], [/ðˁ/ ظ], [/dˁ/ ض] | sˁ−sˁ, tˁ−tˁ (same); tˁ−sˁ, sˁ−tˁ (different) | sˁ & tˁ / ط & ص |

Table 5.

Uncategorized-dispersed-assimilated Arabic consonants with their associated graphemes for Malay CI children and the corresponding minimal pairs used in the perceptual training.

*Secondary articulatory manner.


3.6.2 Pilot study B

Pilot study B was conducted to check the program code; for this purpose, it was crucial to have listeners with minimal confusion and a mature native perceptual space for all the phonological features. The test stimuli consisted of emphatic sounds with secondary articulation, and emphatic phonological features are acquired later than their nonemphatic cognates [47]. Therefore, native Arab children were invited to participate. Two native Arabic-speaking NH children (aged 10 and 7 years) took part in pilot study B. Both participants (B-I and B-II) were siblings born in Egypt; their father worked for an information technology company, and the family had relocated to Kuala Lumpur two years before the study.

The data were collected via the FizBil© software. All responses from both participants were recorded manually and were concurrently auto-recorded to a notepad file by the software.

All recorded responses (manual and notepad file) were compared to match the scores one-to-one for the discrimination and identification tasks. The hit rate was calculated as the total number of correct responses.

3.6.2.1 Results

A total of 1440 (30 presentations × 3 minimal pairs for phonological features × 4 phonemes × 2 tasks × 2 participants) data tokens were collected from the two Arabic-speaking NH participants to check the program code. The 1440 tokens were captured in a notepad file and coded by the researcher simultaneously, so that the responses could be checked in two ways.

The overall hit rate for the discrimination task was 99%, and the false alarm rate was 3%, for both types of recording: the notepad file and the manual record. However, for the identification task, the manual responses showed a hit rate of 98% with a 1% false alarm rate, while the miss and false alarm rates in the notepad file were 99%. This conflict between the manual and notepad scoring suggested a coding mistake in the software. The software was therefore returned to the programmer and debugged accordingly. The identification test was repeated after this part of the software was debugged; thus, another 720 tokens were collected and reanalyzed. The overall hit rate for the identification task was then 98.3%, and the false alarm rate was 2%, for both the notepad and manual records. The revised FizBil© coding for the identification task was thus accepted.

3.7 Phase III: formative evaluation of the FizBil© software

3.7.1 Pilot study C

Pilot study C was designed to explore the usability of the interface for children after minimizing controlling variables such as perceptual difficulty. The unfamiliar phoneme [/ħ/ ح] was used as the stimulus in this pilot study.

Five Malay NH children (age range = 8–10 years; mean = 8.6 years; standard deviation = 0.9 years) who fulfilled the inclusion criteria participated.

The tests were conducted in a quiet room at their own homes using the FizBil© software. Stimuli were presented from a loudspeaker connected to the laptop at a comfortably loud level.

All responses collected in pilot study C were compared, that is, manual scores versus notepad scores for the discrimination task. In addition, the percentage-correct hit rate was calculated.

3.7.1.1 Results

A total of 900 (30 presentations × 3 minimal pairs for phonological features × 1 phoneme × 2 tasks × 5 participants) tokens were collected in pilot study C from the five NH Malay children during the discrimination and identification tasks for the consonant /ħ/. The hit rate was 99.7% for the discrimination task and 97% for the identification task. These high scores suggested that both tests were relatively easy for NH children. Because the participants performed at ceiling, the d-prime score (d′) was not calculated.

3.7.2 Phase III: pilot study D

Pilot study D was conducted with native Malay children with CI to examine the overall suitability of the user interface and instructions.

Two children with CI participated, aged 11 and 8 years, with hearing ages of 9 and 4 years, respectively. They used Cochlear Nucleus speech processors with the Advanced Combination Encoder (ACE) speech processing strategy and an auditory-verbal communication mode. Both participants attended mainstream schools.

The stimuli (phonemes /ħ, sˁ, tˁ, ʁ/; detailed in Table 5) were presented via the FizBil© software installed on a Windows laptop. The data were collected at the UKM Audiology Clinic. Data for both listening conditions, that is, with and without feedback, were collected in 80–90-minute sessions at one-week intervals.

Overall, 2880 tokens (30 presentations × 3 minimal pairs for phonological features × 4 phonemes × 2 tasks × 2 participants × 2 modes, i.e., with and without feedback) were collected and analyzed. The hit rate was 78%, and the false alarm rate was 22%; therefore, the d′ scores were calculated.

The d′ score indexes the perceptual peak difference between two signals. Mathematically, it is the difference between the z-scores corresponding to the proportions of hits and false alarms. Equation (2) was used to calculate the d′ scores as an index of perception.

d′ = z(pH) − z(pFA) (2)

where pH is the proportion of hits and pFA is the proportion of false alarms.
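Equation (2) can be computed with the inverse normal CDF from the Python standard library. This is a generic SDT sketch, not the authors' analysis code; the 1/(2N) correction for ceiling and floor rates is a common convention that the chapter does not specify:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Equation (2): d' = z(pH) - z(pFA).

    A 1/(2N) correction keeps rates away from 0 and 1, where the
    inverse normal CDF is undefined (a common SDT convention).
    """
    z = NormalDist().inv_cdf
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    ph = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    pfa = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    return z(ph) - z(pfa)

print(round(d_prime(20, 10, 10, 20), 2))  # ≈ 0.86
```

A negative d′ (more false alarms than hits) signals perceptual confusion, matching the interpretation used in the results below.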

3.7.2.1 Results

The bar graph (Figure 6) illustrates the d′ scores of the two CI participants for the four unfamiliar Arabic phonemes in the two testing conditions. In the first condition, without feedback, perceptual confusion for all four unfamiliar phonemes was evident: the d′ scores for /ħ, sˁ, tˁ, ʁ/ were −1.0, −0.5, −1.5, and 0.75, respectively. In the second condition, all listening stimuli were paired with feedback, and a marked rise in the perceptual scores of the four unfamiliar Arabic phonemes was detected, with positive d′ scores for /ħ, sˁ, tˁ, ʁ/. Hence, the feedback interface was user-friendly and could be used for perceptual training.

Figure 6.

Comparison of the perceptual learning scores (d′) of uncategorized-dispersed-assimilated Arabic consonants with and without feedback in both the discrimination and identification tasks.

3.8 Phase IV: perceptual training—a case study

Phase IV was a pre-post training experimental design conducted over 12 weeks. Only three of the seven children with CI who were invited consented, through their parents, to participate. Of these three, one child completed only the pretest, and a second dropped out after the second training session. Hence, the data reported are from the one participant who completed all sessions.

The training design is shown in Figure 7. Baseline responses were collected during pre-training in week 1. From weeks 2 to 5, categorical perceptual training with feedback was provided for the manner, place, and voicing categories, once a week. In each training session, 30 stimuli of each phoneme for each phonological category were presented in the discrimination and identification tasks [(30 presentations × 3 phonological features × 2 tasks) × 4 phonemes/session]. Each phoneme was therefore heard 180 times, in three different minimal pairs, per training session, and 720 stimuli were presented per visit; in total, 2880 stimuli were presented to the participant over the 4 weeks of training. Each training session lasted approximately 70–90 minutes, including two 5-minute forced breaks after every 20–25 minutes. Post-test I and post-test II (maintenance) data were collected at weeks 6 and 12, respectively.

Figure 7.

The 12-week perceptual training and test regime used in this study.
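The per-session and total stimulus counts described above follow directly from the design; a quick sketch (illustrative only, mirroring the bracketed formula in the text) reproduces the arithmetic:

```python
presentations = 30  # repetitions per phoneme per phonological feature
features = 3        # manner, place, and voicing minimal pairs
tasks = 2           # discrimination and identification
phonemes = 4        # /ħ, sˁ, tˁ, ʁ/
sessions = 4        # weeks 2-5, one session per week

per_phoneme_per_session = presentations * features * tasks  # 180 exposures
per_session = per_phoneme_per_session * phonemes            # 720 stimuli/visit
total = per_session * sessions                              # 2880 stimuli overall
```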

During the pretest (baseline) and post-tests I and II, a total of 2160 tokens were collected. These data were analyzed according to the SDT recommendations for pretest and post-test comparisons. The d-prime (d’) scores were calculated to measure perceptual learning of phonological feature discrimination and identification before and after the training. A negative d’ score indicates perceptual confusion, whereas a positive d’ score represents a perceptual category in the perceptual space [52, 53, 54, 55].

3.8.1 Overall perceptual learning effect

Figure 8 shows the perceptual learning effect for the dispersed uncategorized assimilated phonemes. The bars at the left present the child’s baseline (week 1) perceptual ability. The participant showed perceptual confusion, as indicated by the negative d’ values for all the tested uncategorized assimilated Arabic phonemes. However, the d’ scores improved after 4 weeks of training and were retained after the five-week no-training period. The positive d’ values at post-test I indicate a sharpening of the perceptual categories for all the uncategorized-dispersed-assimilated Arabic phonemes, and the positive d’ scores remained at post-test II, suggesting that the learning effect and concrete conceptualization were evident.

Figure 8.

The overall perceptual learning effects for uncategorized-dispersed-assimilated Arabic phonemes in discrimination and identification tasks in a Malay child with CI.

3.8.2 Perceptual learning for phonological categories

The effect of training and perceptual learning was further explored by comparing the mean scores for each phonological feature at baseline, post-test I, and post-test II for the posterior consonants (/ħ, ʁ/) and the emphatic consonants (/sˁ, tˁ/); results are shown in Figures 9 and 10, respectively. At baseline, the child showed perceptual confusion for all three phonological categories. Post-training, perceptual learning had occurred, as indicated by the positive d’ values. At post-test II, retention was observed for manner and voicing, whereas the place of articulation declined after the five-week no-training period. The results were similar for the discrimination and identification tasks for both the posterior (see Figure 9) and emphatic (see Figure 10) consonants.

Figure 9.

The perceptual learning effects based on phonological features for uncategorized-dispersed-assimilated Arabic posterior place of articulation.

Figure 10.

The perceptual learning effects based on phonological features for uncategorized-dispersed-assimilated Arabic emphatic manner of articulation.

4. Discussion

This study examined the perceptual learning of uncategorized-dispersed-assimilated Arabic consonants for a group of non-native children with CI using a newly developed, FizBil© bottom-up, customized software training module. The design and development of the FizBil© software, which was based on the signal detection theory, have been described in detail, involving identification and discrimination task modules. Figure 2 shows the hypothetical pathway of signal detection and non-native phonemes’ assimilation, involving top-down and bottom-up signal processing. This figure serves as the framework for this study.

4.1 Perceptual learning

Non-native perceptual difficulties are based on native-language effects arising from the sharpening of perceptual categories [37] during language acquisition. New category learning, and the further sharpening of learned perceptual categories, depend on bottom-up processing. Bottom-up perceptual training occurs when auditory stimuli are presented at short inter-stimulus intervals, that is, less than a second [66, 67, 78, 79, 80, 81, 82, 83]. However, most non-native and auditory training regimes are based on identification tasks alone, whereas a CI device provides more access to the envelope and temporal cues than to the spectral information of the sound signal [28, 29, 49, 84]. Therefore, discrimination training should be added alongside identification training.

It should be noted that previous studies have indicated that production is directly correlated with the perception of both native [62, 85] and non-native [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 70, 71, 72, 73, 74, 75, 76] phonological features. However, studies on improving the perceptual peaks or perceiving distinct phonological boundaries among non-native children with CI have yet to be reported. Thus, the design, development, and evaluation of the FizBil© software described in this chapter are meant explicitly for this training purpose.

According to signal detection theory (SDT), the perception of a phoneme is a twofold process comprising sensory processing and decision-making [52, 53, 54, 55, 56]. SDT provides a psychophysical measure of information processing that enables listeners to distinguish between information-bearing patterns and recognize them as two separate entities [47, 86, 87]. Research has shown that the non-native perceptual difficulties of NH adults [36, 37, 38, 39, 41, 43, 44, 45, 46, 69, 70, 71, 74, 75, 76, 79, 80] and children [88] are based on the native-language phonological repertoire. These effects arise from the sharpening of perceptual categories [37] during language acquisition [51, 89]. Nonetheless, for children with CI, information transmission via the cochlear implant device must be considered on top of these native-language effects when accounting for signal processing and the assimilated perception of non-native phonemes [6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 35].
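In SDT terms, each trial reduces to one of four outcomes, and the hit and false-alarm rates that feed the d’ calculation fall out of a simple tally. The sketch below is an illustration of this bookkeeping, not the FizBil© implementation:

```python
def classify_trial(stimuli_differ: bool, responded_different: bool) -> str:
    """Score one discrimination trial in signal-detection terms."""
    if stimuli_differ:
        return "hit" if responded_different else "miss"
    return "false alarm" if responded_different else "correct rejection"

def rates(trials):
    """Return (hit_rate, false_alarm_rate) from (stimuli_differ, response) pairs."""
    outcomes = [classify_trial(s, r) for s, r in trials]
    signal = [o for o in outcomes if o in ("hit", "miss")]
    noise = [o for o in outcomes if o in ("false alarm", "correct rejection")]
    return (signal.count("hit") / len(signal),
            noise.count("false alarm") / len(noise))
```

For example, a four-trial run with one correct and one incorrect response in each condition, `rates([(True, True), (True, False), (False, False), (False, True)])`, yields a 50% hit rate and a 50% false-alarm rate, i.e., chance performance and a d’ of zero.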

Kolb’s experiential learning theory (KELT) holds that learning occurs through experience and that learners differ in how they acquire knowledge or information [90, 91]. Experiential learning can only happen when the teaching and instructional design are carefully chosen [92]. Kolb [93] explained that the four-stage experiential learning cycle starts with concrete experience, followed by reflective observation, abstract conceptualization, and active experimentation. According to KELT, learners have learning-style preferences, and self-evaluation is a primary component of KELT that provides learners with feedback during training. Figure 11 represents the conceptualized framework for perceptual learning of non-native phonemes and training task effects.

Figure 11.

The conceptualized framework for perceptual learning of non-native phonemes and training task effects based on the KELT model.

The perceptual training study evaluated the FizBil© software, developed with specific ISI-d and ISI-r values for testing children with CI based on the bottom-up information-processing model. Only one of the three volunteers completed the 12-week training and test regime; thus, the data reported are from a single Malay child with CI, with 2160 tokens collected. While this prevents generalization of the findings, the results from one participant are still worth discussing given the large number of tokens collected. Moreover, given that the main aim of phase IV was to implement the FizBil© software, it was evident that the newly developed training software was valuable and could be used for training purposes. As evidenced by the findings, the perception of confusing Arabic phonemes could be sharpened into the three phonological categories following training. The results are discussed below based on the KELT theoretical framework.

KELT defines the theoretical and practical elements of a learner-centered approach [90, 91, 92]. It is a cycle of four components: reflecting, thinking, acting, and experiencing [90, 92, 94]. Learning is a multilayered process that includes multisensory information processing. Processing occurs in two ways, reflective observation and active experimentation, whereas perception occurs through concrete experience and abstract conceptualization. The feedback during training in the present study provided active experimentation (discrimination and identification in minimal pairs) and built abstract conceptualization in the perceptual space through substantial experience [95, 96, 97].

At baseline, within the Malay perceptual space of the child with CI, no phonological feature categories had formed for the posterior (Figure 9, baseline) or emphatic (Figure 10, baseline) unfamiliar Arabic phonemes; the d’ scores were negative. However, four weeks of bottom-up training resulted in clear manner, place, and voicing category formation for both the posterior and emphatic unfamiliar Arabic phonemes (see Figures 9 and 10). The learning effects for the manner and voicing features were more prominent than for the place of articulation.

In speech signals, the envelope cues, that is, the slow periodic waves, convey the voicing and manner features [88]. The CI’s deficit in transmitting spectral cues could explain why CI users struggle with learning the place feature [30, 33, 34]; this insufficiency in spectral-cue processing is partly due to limitations in users’ electrode discrimination abilities [28, 29, 30, 31, 84, 98]. All of these factors affect the perception of the place of articulation among children with CI, and they might partly explain the decline of the place-feature category after the five-week no-training period, which probably requires more time to sharpen the perceptual peak. In other words, the overall decline in d’ scores at post-test II (Figures 9 and 10) could be due to decay of the place-feature learning effect. However, the learning effects for manner and voicing of articulation were sustained after the long five weeks off training, as indicated by the blue and green bars in Figures 9 and 10. In post-test II, identification of phonological features was better than discrimination. The discrimination task relies exclusively on short-term auditory memory [33, 81, 82, 83, 98, 99]: the two sounds must be discriminated within a short time (100–500 ms) [49, 53, 56, 61, 92, 99, 100, 101]. This discrepancy might be due to information-transmission difficulties through the cochlear implant device [30, 35, 98].
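The envelope/fine-structure distinction can be illustrated with a toy extraction step of the kind used in vocoder-style CI simulations. This is a simplified sketch under stated assumptions (full-wave rectification plus a moving-average low-pass with an assumed 50 Hz cutoff), not the processing of any actual speech processor: the smoothing keeps the slow amplitude contour that carries voicing and manner cues while discarding the rapid fine structure that carries spectral (place) detail.

```python
import math

def extract_envelope(signal, fs, cutoff_hz=50.0):
    """Crude envelope: full-wave rectification, then a moving average
    whose window (~1/cutoff seconds) smooths away the fine structure."""
    rectified = [abs(x) for x in signal]
    win = max(1, int(fs / cutoff_hz))
    half = win // 2
    n = len(rectified)
    env = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half)
        env.append(sum(rectified[lo:hi]) / (hi - lo))
    return env

# A 100 ms, 100 Hz "voiced" tone: its envelope comes out roughly flat
# (the 100 Hz oscillation itself is smoothed away).
fs = 16000
tone = [math.sin(2 * math.pi * 100 * i / fs) for i in range(1600)]
env = extract_envelope(tone, fs)
```

Away from the edges, the extracted envelope sits near the mean of the rectified sine (2/π ≈ 0.64) rather than oscillating, which is exactly the slow cue a CI transmits well.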

5. Conclusion

In this study, perceptual learning effects were evidenced using bottom-up training, which led to category formation in the perceptual space of a non-native child with CI. Breakdown analyses by phonological category revealed that the manner and voicing feature categories emerged after four weeks of training and were retained after five weeks off training. In contrast, place-feature formation, observed after four weeks of training in both the discrimination and identification tasks, declined after the five-week off-training period, suggesting that more extended, ongoing training might be needed for the perception of place features. Further analyses revealed that the perceptual learning effects for the posterior place of articulation were smaller than for the emphatic manner of articulation: the emphatic-manner learning effects were retained after five weeks off training, whereas the posterior-place learning effect noticeably declined. In conclusion, the perception of uncategorized-dispersed-assimilated Arabic consonants can be refined within the Malay perceptual space by bottom-up training in congenitally deaf, non-native children with CI.

References

  1. Geers AE, Niparko JK, Geers AE. Speech, language, and reading skills after early cochlear implantation. Journal of the American Medical Association. 2004;291(19):2378-2380
  2. Geers AE, Nicholas JG, Sedey AL. Language skills of children with early cochlear implantation. Ear and Hearing. 2003;24:46-58
  3. Geers AE, Nicholas J, Tobey E, Davidson L. Language emergence in early-implanted children. Journal of Speech, Language, and Hearing Research. 2016;59(2):155-170
  4. Geers A, Brenner C, Nicholas J, Uchanski R, Tye-Murray N, Tobey E. Rehabilitation factors contributing to implant benefit in children. The Annals of Otology, Rhinology, and Laryngology. 2002;111:127-130
  5. Goh B, Fadzilah N, Abdullah A, Fathi B, Umat C. Long-term outcomes of Universiti Kebangsaan Malaysia cochlear implant program among pediatric implantees. International Journal of Pediatric Otorhinolaryngology. 2018;105:27-32
  6. Anis FN, Umat C, Ahmad K, Hamid BA. Patterns of recognition of Arabic consonants by non-native children with cochlear implants and normal hearing. Cochlear Implants International. 2019;20(1):1-11
  7. Anis FN, Umat C, Ahmad K, Hamid BA. Arabic place feature confusions among severely and profoundly hearing impaired children with a cochlear implant. In: 4th International Conference on Social Sciences Research 2016 (ICSSR 2016). 2016. pp. 207-216
  8. Bunta FC, Elizabeth G-M, Procter A, Hernandez A. Initial stop voicing in bilingual children with cochlear implants and their typically developing peers with normal hearing. Journal of Speech, Language, and Hearing Research. 2016;59(August):686-698
  9. Delcenserie A, Genesee F, Trudeau N, Champoux F. The development of phonological memory and language: A multiple groups approach. Journal of Child Language. 2020;48(2):285-324
  10. Nassif N, Barezzani MG, Oscar L, De ZR. Delayed speech perception and production after cochlear implantation in bilingual children from non-native families. Journal of Otorhinolaryngology, Hearing and Balance Medicine. 2021;2(1):4
  11. Sosa AV, Bunta F. Speech production accuracy and variability in monolingual and bilingual children with cochlear implants: A comparison to their peers with normal hearing. Journal of Speech, Language, and Hearing Research. 2019;62(8):2601-2616
  12. Keilmann A, Friese B, Hoffmann V. Receptive and productive speech and language abilities in hearing-impaired children with German as a second language. International Journal of Pediatric Otorhinolaryngology. 2019;120:100-107
  13. Rødvik AK, Tvete O, von Torkildsen JK, OBØ W, Skaug I, Silvola JT. Consonant and vowel confusions in well-performing children and adolescents with cochlear implants, measured by a nonsense syllable repetition test. Frontiers in Psychology. 2019;10:1-17
  14. Thomas E, El-Kashlan H, Zwolan TA. Children with cochlear implants who live in monolingual and bilingual homes. Otology & Neurotology. 2008;29(2):230-234
  15. Waltzman SB, Robbins AMC, Green JE, Cohen NL. Second oral language capabilities in children with cochlear implants. Otology and Neurotology. 2003;24(5):757-763
  16. Mahon M, Vickers D, McCarthy K, Barker R, Merritt R, Szagun G, et al. Cochlear-implanted children from homes where English is an additional language: Findings from a recent audit in one London centre. Cochlear Implants International. 2011;12(2):105-113
  17. McKee RL. The construction of deaf children as marginal bilinguals in the mainstream. International Journal of Bilingual Education and Bilingualism. 2008;11(5):519-540
  18. Bunta F, Douglas M. The effects of dual-language support on the language skills of bilingual children with hearing loss who use listening devices relative to their monolingual peers. Language, Speech, and Hearing Services in Schools. 2013;44(3):281-290
  19. Bunta F, Douglas M, Dickson H, Cantu A, Wickesberg J, Gifford RH. Dual language versus English only support for bilingual children with hearing loss who use cochlear implants and hearing aids. International Journal of Language & Communication Disorders. 2016;51(4):460-472
  20. Sabri M, Fabiano-Smith L. Phonological development in a bilingual Arabic–English-speaking child with bilateral cochlear implants: A longitudinal case study. American Journal of Speech-Language Pathology. 2018;27(4):1506-1522
  21. Forli F, Giuntini G, Ciabotti A, Bruschini L, Löfkvist U, Berrettini S. How does a bilingual environment affect the results in children with cochlear implants compared to monolingual-matched children? An Italian follow-up study. International Journal of Pediatric Otorhinolaryngology. 2018;105:56-62
  22. Deriaz M, Pelizzone M, Fornos AP. Simultaneous development of 2 oral languages by child cochlear implant recipients. Otology & Neurotology. 2014;35(9):1541-1544
  23. Abdul-Kadir NA, Sudirman R. Difficulties of standard Arabic phonemes spoken by non-Arab primary school children based on formant frequencies. Journal of Computer Science. 2011;7(7):1003-1010
  24. Azraai H. Acoustic analysis of Al-Quran phonemes by children with cochlear implant. In: Poster. Malaysia: Universiti Kebangsaan Malaysia; 2011
  25. Teschendorf M, Arweiler-Harbeck D, Bagus H. Speech development after cochlear implantation in children with bilingual parents. Cochlear Implants International. 2010;11(Suppl 1):386-389
  26. Teschendorf M, Janeschik S, Bagus H, Lang S, Arweiler-Harbeck D. Speech development after cochlear implantation in children from bilingual homes. Cochlear Implants International. 2011;32(2):229-235
  27. Clark G. In: Beyer RT, editor. Cochlear Implants: Fundamentals and Application. New York, NY: Springer; 2003. 243 p
  28. Shannon RV, Fu Q-J, Galvin J. The number of spectral channels required for speech recognition depends on the difficulty of the listening situation. Acta Oto-Laryngologica Supplementum. 2004;552:50-54
  29. Shannon RV, Fu Q-J, Galvin J, Friesen L. Speech perception with cochlear implants. In: Cochlear Implants: Auditory Prostheses and Electric Hearing. 2004. pp. 334-376
  30. Verschuur C. Acoustic Model of Consonant Recognition in Cochlear Implant Users [thesis]. England: University of Southampton; 2007
  31. Zhou N, Xu L, Lee C-Y. The effects of frequency-place shift on consonant confusion in cochlear implant simulations. Journal of the Acoustical Society of America. 2010;128(1):401-409
  32. Bouton S, Colé P, Serniclaes W. The influence of lexical knowledge on phoneme discrimination in deaf children with cochlear implants. Speech Communication. 2012;54(2):189-198
  33. Moreno-Torres I, Moruno-López E. Segmental and suprasegmental errors in Spanish learning cochlear implant users: Neurolinguistic interpretation. Journal of Neurolinguistics. 2014;31:1-16
  34. Bouton S, Serniclaes W, Bertoncini J, Cole P. Perception of speech features by French-speaking children with cochlear implants. Journal of Speech, Language, and Hearing Research. 2012;55(1):139-153
  35. Nittrouer S, Caldwell-Tarr A, Moberly AC, Lowenstein JH. Perceptual weighting strategies of children with cochlear implants and normal hearing. Pediatric Cochlear Implantation. 2014;52:111-133
  36. Best CT, Strange W. Effects of phonological and phonetic factors on cross language perception of approximants. Journal of Phonetics. 1992;110:89-108
  37. Best CT, Goldstein LM, Nam H, Tyler MD, et al. Articulating what infants attune to in native speech. Ecological Psychology. 2016;28(4):216-261
  38. Evans BG, Alshangiti W. The perception and production of British English vowels and consonants by Arabic learners of English. Journal of Phonetics. 2018;68:15-31
  39. Faris MM, Best CT, Tyler MD. An examination of the different ways that non-native phones may be perceptually assimilated as uncategorized. Journal of the Acoustical Society of America. 2016;139(January):1-5
  40. Faris MM, Best CT, Tyler MD. Discrimination of uncategorised non-native vowel contrasts is modulated by perceived overlap with native phonological categories. Journal of Phonetics. 2018;70(July):1-19
  41. Dommelen WA, Hazan V. Perception of English consonants in noise by native and Norwegian listeners. Speech Communication. 2010;52(11–12):968-979
  42. Brown CA. The role of the L1 grammar in the L2 acquisition of segmental structure. Second Language Research. 1998;14(2):136-193
  43. Best CT, Tyler MD. Nonnative and second language speech perception: Commonalities and complementarities. In: Munro MJ, Bohn OS, editors. Language Experience in Second Language Speech Learning: The Role of Language Experience in Speech Perception and Production. Amsterdam: John Benjamins; 2007. pp. 13-34
  44. Best CT, Goldstein L, Tyler MD, Nam H. Articulating the perceptual assimilation model (PAM): Perceptual assimilation in relation to articulatory organs and their constriction gestures. The Journal of the Acoustical Society of America. 2009;125:2758
  45. Best CT, Hallé PA. Perception of initial obstruent voicing is influenced by gestural organization. Journal of Phonetics. 2010;38(1):109-126
  46. Georgiou GP, Perfilieva NV, Denisenko VN, Novospasskaya NV. Perceptual realization of Greek consonants by Russian monolingual speakers. Speech Communication. 2020;125(September):7-14
  47. Silbert NH. Perception of voicing and place of articulation in labial and alveolar English stop consonants. Laboratory Phonology. 2014;5(2):289-335
  48. Lo CY, McMahon CM, Looi V, Thompson WF. Melodic contour training and its effect on speech in noise, consonant discrimination, and prosody perception for cochlear implant recipients. Behavioural Neurology. 2015:352869. 10 p
  49. Gerrits E, Schouten MEH. Categorical perception depends on the discrimination task. Perception & Psychophysics. 2004;66(3):363-376
  50. Wang X, Humes LE. Classification and cue weighting of multidimensional stimuli with speech-like cues for young normal hearing and elderly hearing-impaired listeners. Ear and Hearing. 2008;29(5):725-745
  51. Pajak B, Levy R. The role of abstraction in non-native speech perception. Journal of Phonetics. 2014;46(1):147-160
  52. Harvey LO, Parker SM. Detection theory: Sensory and decision processes. In: Psychology of Perception. Boulder: Colorado University; 2014
  53. Macmillan NA, Creelman CD. Detection Theory: A User's Guide. Second ed. New York, NY: Psychology Press; 2005
  54. Georgeson M. Postgraduate research methods course. In: Sensitivity and Bias - an Introduction to Signal Detection Theory. UK: University of Birmingham; 2005
  55. Stanislaw H, Todorov N. Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers. 1999;31(1):137-149
  56. Ashby F, Soto F. Multidimensional signal detection theory. In: Townsend Z, Wang AE, editors. Oxford Handbook of Computational and Mathematical Psychology. UK: Oxford University Press; 2012
  57. Handley Z, Moore D. Training novel phonemic contrasts: A comparison of identification and oddity discrimination training. Proceedings of SLaTE. 2009;2:3-6
  58. Huyse A, Berthommier F, Leybaert J. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children. Ear and Hearing. 2012;34(1):110-121
  59. Jaekel BN, Newman RS, Goupell MJ. Speech rate normalization and phonemic boundary perception in cochlear-implant users. Journal of Speech, Language, and Hearing Research. 2017;60(5):1398-1416
  60. Lesniak A, Myers L, Dodd B. The English phonological awareness skills of 5;0-6;0-year-old Polish–English, Portuguese–English bilingual speakers and English monolingual children. Speech, Language and Hearing. 2014;17(1):37-48
  61. Moore DR, Rosenberg JF, Coleman JS. Discrimination training of phonemic contrasts enhances phonological processing in mainstream school children. Brain and Language. 2005;94(1):72-85
  62. Holt LL, Lotto AJ. Speech perception as categorization. Attention, Perception, & Psychophysics. 2010;72(5):1218-1227
  63. Heeren WFL. Perceptual development of phoneme contrasts in adults and children [Internet]. 2006. Available from: http://igitur-archive.library.uu.nl/dissertations/2006-0712-200026/UUindex.html
  64. Serniclaes W, Sprenger-Charolles L. Categorical perception of speech sounds and dyslexia. Current Psychology Letters. 2003;10(1):1-9
  65. Reed MA. Speech perception and the discrimination of brief auditory cues in reading disabled children. Journal of Experimental Child Psychology. 1989;48:270-292
  66. Pisoni DB, Kronenberger WG, Chandramouli SH, Conway CM. Learning and memory processes following cochlear implantation: The missing piece of the puzzle. Frontiers in Psychology. 2016;7:1-19
  67. Pisoni DD, Geers AE. Working memory in deaf children with cochlear implants: Correlations between digit span and measures of spoken language processing. The Annals of Otology, Rhinology & Laryngology. Supplement. 2000;185(December):89-92
  68. APA. What is Learned in Perceptual Learning? [Internet]. American Psychological Association. 2013. Available from: https://www.apa.org/pubs/highlights/peeps/issue-04
  69. Best CT, McRoberts GW, Goodell E. Discrimination of non-native consonant contrasts varying in perceptual assimilation to the listener’s native phonological system. The Journal of the Acoustical Society of America. 2001;109(2):775-794
  70. Flege JE. Phonetic approximation in second language acquisition. Language Learning. 1980;30(1):117-134
  71. Flege JE, Munro MJ, MacKay IRA. Effects of age of second-language learning on the production of English consonants. Speech Communication. 1995;16(1):1-26
  72. Polka L. Cross-language speech perception in adults: Phonemic, phonetic, and acoustic contributions. Journal of the Acoustical Society of America. 1991;89(6):2961-2977
  73. Werker JFF, Lalonde CEE. Cross-language speech perception: Initial capabilities and developmental change. Developmental Psychology. 1988;24(5):672-683
  74. Ritchie WC. On the explanation of phonic interference. Language Learning. 1968;18(3–4):183-197
  75. Werker JF, Tees RC. Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior & Development. 2002;25(1):121-133
  76. Hancin-Bhatt BJ. Phonological Transfer in Second Language Perception and Production [thesis]. USA: University of Illinois; 1994
  77. Mazzoni D, Dannenberg R. Audacity Credit. Audacity team; 2000
  78. Bradlow AR, Akahane-Yamada R. Training Japanese listeners to identify English /r/ and /l/: Long term retention of learning in perception and production. Perception & Psychophysics. 1999;61(5):977-985
  79. Bradlow AR, Pisoni DB. Some effects of perceptual learning on speech production. The Journal of the Acoustical Society of America. 1997;101(4):2299-2310
  80. Pisoni DB, Remez RE. The Handbook of Speech Perception. Oxford, UK: Blackwell Publishing; 2006
  81. Pisoni DB. Auditory short-term memory and vowel perception. Memory and Cognition. 1975;6(8):1-25
  82. Pisoni DB. Auditory and phonetic memory codes in the discrimination of consonants and vowels. Perception & Psychophysics. 1973;13(1):1-23
  83. Pisoni DB, Kronenberger WG, Harris MS, Moberly AC. Three challenges for future research on cochlear implants. The World Journal of Otorhinolaryngology – Head & Neck Surgery. 2017;12(10):1-15
  84. Li F. Perceptual Cues of Consonant Sound and Impact of Sensorineural Hearing Loss on Speech Perception. USA: University of Illinois; 2009
  85. Casserly ED, Pisoni DB. Speech perception and production. Wiley Interdisciplinary Reviews: Cognitive Science. 2010;1(5):629
  86. Abdi H. Signal detection theory. In: International Encyclopedia of Education. 2010
  87. Stoehr A, Benders T, van Hell JG, Fikkert P. Bilingual preschoolers' speech is associated with non-native maternal language input. Language Learning and Development. 2019;15(1):75-100
  88. Driscoll VD, Oleson J, Jiang D, Gfeller K. Effects of training on recognition of musical instruments presented through cochlear implant simulations. Journal of the American Academy of Audiology. 2009;20(1):71-82
  89. Melguy YV. Exploring the bilingual phonological space: Early bilinguals' discrimination of coronal stop contrasts. Language and Speech. 2017;61(2):173-198
  90. Kang DY, Martin SN. Improving learning opportunities for special education needs (SEN) students by engaging pre-service science teachers in an informal experiential learning course. Asia Pacific Journal of Education. 2018;38(3):319-347
  91. Kolb DA, Boyatzis RE, Mainemelis C. Experiential learning theory: Previous research and new directions. In: Perspectives on Thinking, Learning and Cognitive Styles. 2000;216:227-247
  92. Wright BA, Zhang Y. A review of the generalization of auditory learning. Philosophical Transactions of the Royal Society B. 2009;364(1515):301-311
  93. Kolb D. Experiential Learning: Experience as the Source of Learning and Development. Prentice-Hall; 1984
  94. Wright J. Experiential Learning: An Overview. Institute for Teaching and Learning Innovations. Brisbane: University of Queensland; 2015. p. 8
  95. Miller V. Learn, Try, Repeat: Experiential Learning in Adult Second Language Acquisition of Spanish in Higher Education [thesis]. USA: University of Nebraska – Lincoln; 2021
  96. Reimer CK. The Effect of Retrieval Practice on Vocabulary Learning for Children Who Are Deaf or Hard of Hearing. USA: Washington University; 2019
  97. Baker MA. The Effect of Kolb's Experiential Learning Model on Successful Secondary Student Intelligence and Student Motivation [thesis]. Oklahoma: Oklahoma State University; 2012
  98. Shannon RV, Cruz RJ, Galvin JJ 3rd. Effect of stimulation rate on cochlear implant users' phoneme, word and sentence recognition in quiet and in noise. Audiology and Neurotology. 2011;16:113-123
  99. Watson DR, Titterington J, Henry A, Toner JG. Auditory sensory memory and working memory processes in children with normal hearing and cochlear implants. Audiology and Neurotology. 2007;12(2):65-76
  100. Zhang YX, Moore DR, Guiraud J, Molloy K, Yan TT, Amitay S. Auditory discrimination learning: Role of working memory. PLoS One. 2016;11(1):1-18
  101. Kasisopa B, Antonios LEK, Jongman A, Sereno JA, Burnham D. Training children to perceive non-native lexical tones: Tone language background, bilingualism, and auditory-visual information. Frontiers in Psychology. 2018;9(9):1508

Notes

  • In phonology, a minimal pair of sounds or phonemes differs in only one phonological feature; for example, /t/ and /d/ have the same manner and place of articulation but differ only in voicing.
