Research Article - (2015) Volume 4, Issue 5

Auditory Processing of Speech and Non-Speech Stimuli in Children who Stutter: Electrophysiological Evidences

Isabela Crivellaro Gonçalves*, Claudia Regina Furquim De Andrade and Carla Gentile Matas
Faculty of Medicine, Department of Physiotherapy, Communication Science & Disorders, Occupational Therapy, University of Sao Paulo, São Paulo, Brazil
*Corresponding Author: Isabela Crivellaro Gonçalves, Faculty of Medicine, Department of Physiotherapy, Communication Science & Disorders, Occupational Therapy, University of Sao Paulo, São Paulo-05360-000, Brazil, Tel: 551130919411

Abstract

Objectives: Current scientific evidence supports the hypothesis that people who stutter have anomalous connections in auditory regions of the left hemisphere. Thus, it is reasonable to suppose that abnormal results in auditory evoked potentials may be related to this type of disorder. In the present study, Auditory Brainstem Responses (ABR) using stimuli of different complexities were recorded in order to investigate possible neural synchrony deficits in children who stutter (CWS).

Methods: Ten CWS aged between seven and 11 years and their non-stuttering peers (CWNS) underwent electrophysiological (speech- and click-evoked ABR) assessment.

Results: CWS showed greater variability in latency values, as well as a statistical trend towards significance for the difference between right and left ears in the I-III interpeak interval of the click-evoked ABR. In the speech-evoked ABR, the latency of wave C and the amplitude of the VA complex were significantly higher in CWS.

Conclusions: The results suggest that CWS present differences in neural processes related to the processing of acoustic information, when compared to typically developing children, especially when more complex stimuli, such as speech, are considered.

Keywords: Stuttering; Evoked potentials; Auditory brain stem; Auditory perception; Hearing tests; Children

Abbreviations

ABR: Auditory Brainstem Response; CWS: Children Who Stutter; CWNS: Children Who do Not Stutter; AEP: Auditory Evoked Potentials; SSI: Stuttering Severity Instrument; IHS: Intelligent Hearing Systems; IES: International Electrode System; FFR: Frequency Following Response; ANOVA: Analysis of Variance

Introduction

Stuttering is a multidimensional speech fluency disorder and, despite decades of intensive research, its cause and biological underpinnings are not completely understood. Furthermore, the population with fluency disorders is heterogeneous, suggesting that stuttering results from multiple interacting physiological processes reflecting more than one underlying cause. Current scientific evidence supports the hypothesis that people who stutter have anomalous connections in auditory regions of the left hemisphere, related to the sensory feedback of speech sounds. Thus, it is reasonable to suppose that deficits in neural synchrony might be correlated with this type of disorder [1,2].

Auditory evoked potentials (AEP) are used to assess auditory processing in children with language and speech impairments and learning disabilities [3,4]. However, studies specifically addressing the use of AEP in individuals who stutter are scarce, and most are limited to adults who stutter [5-15]. The most widely used AEP in clinical practice is the auditory brainstem response (ABR). ABRs can be recorded using different stimuli [16,17] and, although the click stimulus is the most frequently used, additional methods involving speech stimuli have been developed [4,18,19] (Figure 1). Speech-evoked ABRs can be used to examine the neural basis of auditory function because of their remarkably faithful representation of the stimulus’ acoustics [16,20-25]. Furthermore, given the complex interaction between sensory and cognitive functions that likely occurs in impaired auditory processing, auditory brainstem measures may be particularly useful in revealing the biological correlates of communication [26] (Figure 2). Recording ABRs to stimuli of different complexities is a physiological approach to studying, in humans, the neural activity that underlies the encoding of speech and non-speech sounds, and such recordings have not been performed in children who stutter. Our hypothesis was that children who stutter differ from their typically developing peers in how their brains encode verbal and non-verbal acoustic signals at the brainstem level.


Figure 1: Representative trace – Click-evoked ABR.


Figure 2: Representative trace – Speech-evoked ABR.

Thus, the purpose of the present study was to investigate whether neurophysiological ABRs to non-speech (click) and repeated speech stimuli (the syllable /da/) differ between children who do and who do not stutter. Differences between these groups in click-evoked ABR results would signal a disruption in the encoding of sound at more peripheral levels of the auditory pathway, whereas differences in speech-evoked ABR results would indicate abnormal processing of spectral and temporal information at higher levels of the auditory pathway.

Materials and Methods

Participants

The subjects were 20 native Brazilian-Portuguese-speaking children (14 boys and six girls), aged from seven to 11 years, divided into two groups: G1 – 10 children who stuttered (CWS – mean age 10.1 years); G2 – 10 children who did not stutter (CWNS – mean age 10.3 years). Children from G1 (CWS) had been diagnosed prior to inclusion in this study by specialized speech-language pathologists; the diagnostic process was performed externally to, and independently from, the current study. The groups were matched for age and gender (seven boys and three girls in each group), and all children had normal bilateral hearing (pure-tone thresholds ≤20 dB HL at octave frequencies from 250 to 8000 Hz) and normal middle ear function. They were selected according to the following criteria: no complaints of speech, language, reading, writing, or learning disorders; no history of neurological or psychiatric conditions or drug dependence; no history of otological disease, seizures, or head trauma; and no current use of any medication. This research was approved by the Institution’s Ethics Committee (CAPPesq HC FMUSP 1321/09), and informed consent was obtained from the participants’ legal guardians.

To assign participants to the groups, the following inclusion criteria were used:

(a) G1 – a score of at least 18 points on the Stuttering Severity Instrument (SSI-3) [27], corresponding to a diagnosis of at least “mild” stuttering, and a fluency profile outside the age reference values [28]. Seven of the CWS presented a mild, one a moderate, and one a severe stuttering level on the SSI-3.

(b) G2 – a maximum of 9 points (indicative of normal fluency) on the SSI-3 [27] and a fluency profile within the age reference values [28].

Speech samples and disfluency analyses

A speech sample of 200 fluent syllables of conversational speech, based on an everyday-life situation with an unfamiliar listener, was videotaped and analyzed for each participant. Samples were collected and analyzed according to the Fluency Profile Protocol [29], yielding measures of typical speech disfluencies, atypical speech disfluencies, words per minute, syllables per minute, and percentage of stuttered syllables. In addition, the Stuttering Severity Instrument (SSI) [27] was used to estimate the stuttering severity level.
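The rate and disfluency measures listed above are simple ratios derived from the counted sample. The sketch below illustrates that arithmetic only; the counts, the duration value, and the function name are hypothetical, and the Fluency Profile Protocol [29] remains the reference for how disfluencies are identified and counted.

```python
# Illustrative sketch (not part of the Fluency Profile Protocol itself):
# derives speech-rate and disfluency measures from counts taken on one sample.

def fluency_measures(total_syllables, total_words,
                     stuttered_syllables, duration_min):
    """Return speech-rate measures and percentage of stuttered syllables."""
    return {
        "syllables_per_minute": total_syllables / duration_min,
        "words_per_minute": total_words / duration_min,
        "percent_stuttered_syllables": 100.0 * stuttered_syllables / total_syllables,
    }

# Hypothetical example: a 200-syllable sample, 120 words, 6 stuttered
# syllables, produced in 1.5 minutes.
print(fluency_measures(200, 120, 6, 1.5))
```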

Auditory Brainstem Response (ABR) test protocol

Children were tested in a sound-treated room. The electrophysiological evaluation consisted of recording click- and speech-evoked ABRs using silver electrodes and a PC-based delivery system (IHS) that controlled the timing and intensity of stimulus delivery to the ears through insert earphones (Etymotic Research ER-3). Initially, the skin of each individual was cleansed with abrasive paste, and the electrodes were attached with electrolytic paste and adhesive tape and positioned according to the International Electrode System (IES) 10-20. Electrode impedance was kept at 5 kOhms or less. ABRs were recorded using the click stimulus (rarefaction polarity, 100 μs duration), presented separately to each ear at 80 dBnHL at a rate of 19.1 clicks/sec. Responses were filtered online from 100 to 3,000 Hz and recorded over a 12 msec post-stimulus period. Two thousand and forty-eight repetitions were collected with an amplification of 100,000, and trials with artifacts exceeding ±25 μV were rejected from the averaged response. Peaks were selected and their absolute latencies (waves I, III and V) and interpeak latencies (I-III, III-V and I-V) were calculated. The speech-evoked ABRs (alternating polarity, 40,000 μs duration) were elicited by the formant transition portion of the speech syllable /da/, delivered monaurally to the right ear at 80 dBnHL at a rate of 11.1/sec. This syllable was chosen because stop consonants carry considerable phonetic information, thus providing robust and reliable traces. Responses were recorded over a 64 msec post-stimulus period and filtered online from 100 to 3,000 Hz. Three thousand repetitions (in three separate tracings) were collected with an amplification of 100,000, and trials with artifacts exceeding ±20 μV were rejected from the averaged response. Each child’s final response thus comprised 3,000 stimulus repetitions, averaged in three separate blocks of 1,000 sweeps each and then summed to create a predominantly neural response representing brainstem activity. The response to the onset of the consonant-vowel syllable includes a positive peak (wave V) followed immediately by a negative trough (wave A). Following the onset response, peaks C and F are present in the Frequency Following Response (FFR) [16]. Although other peaks are discernible in this region, peaks C and F have been shown to be the most reliable waveform peaks in typically developing children [3].
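To make the averaging scheme concrete, the sketch below reproduces the offline logic implied by the protocol above (artifact rejection followed by block averaging and summation). The acquisition system performs these steps online; the function name, array shapes, and simulated data are assumptions included only for illustration.

```python
import numpy as np

def average_abr(sweeps_uv, reject_uv=20.0, block_size=1000):
    """Reject artifact-contaminated sweeps, average in blocks, sum the blocks.

    sweeps_uv: array of shape (n_sweeps, n_samples), amplitudes in microvolts.
    Sweeps whose absolute amplitude exceeds reject_uv anywhere are discarded;
    the remainder are averaged in blocks (e.g., 3 x 1,000 for the speech-evoked
    ABR) and the block averages are summed into a single response.
    """
    kept = sweeps_uv[np.max(np.abs(sweeps_uv), axis=1) <= reject_uv]
    blocks = [kept[i:i + block_size] for i in range(0, len(kept), block_size)]
    return np.sum([b.mean(axis=0) for b in blocks if len(b)], axis=0)

# Hypothetical example: 3,000 simulated sweeps over a 64 ms recording window.
rng = np.random.default_rng(0)
sweeps = rng.normal(0.0, 2.0, size=(3000, 640))
response = average_abr(sweeps, reject_uv=20.0, block_size=1000)
```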

Two experienced observers manually selected the peaks of waves I, III, and V for the click-evoked ABRs, and of waves V, A, C, and F for the speech-evoked ABRs. Measures of timing (latency) were used to assess the peaks. The VA complex was further investigated by measuring its interpeak interval and amplitude. The mean and standard deviation (SD) were calculated for both the click-evoked ABR (peak and interpeak latencies) and the speech-evoked ABR (peak latencies; latency and amplitude of the VA complex). The response measures were compared between G1 and G2.
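As a minimal sketch of the two derived VA-complex measures, assuming the VA interpeak interval is the latency difference between the trough A and the peak V and the VA amplitude is taken peak-to-trough (these conventions, the function name, and the example values are assumptions, not details stated in the article):

```python
def va_complex_measures(v_latency_ms, a_latency_ms, v_amp_uv, a_amp_uv):
    """VA interpeak interval (ms) and peak-to-trough amplitude (uV),
    computed from manually picked wave V (peak) and wave A (trough)."""
    return {
        "va_interval_ms": a_latency_ms - v_latency_ms,
        "va_amplitude_uv": v_amp_uv - a_amp_uv,
    }

# Hypothetical values picked from a single speech-evoked ABR trace.
print(va_complex_measures(v_latency_ms=6.6, a_latency_ms=8.3,
                          v_amp_uv=0.35, a_amp_uv=-0.45))
```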

Statistical analysis

Repeated-measures ANOVA was used for the statistical analysis of the latency measurements of the click-evoked ABR. Student’s t-test was used for the statistical analysis of the latency and amplitude measurements of the speech-evoked ABR. The software used for this analysis (SPSS version 18) provides an alternative test statistic (Satterthwaite) when the test for equality of variances indicates that the variances of the two groups differ; it yields a statistic that asymptotically (that is, as the sample sizes become large) approaches a t distribution, allowing an approximate t test to be calculated when the population variances are not equal. Differences between G1 and G2 in click and speech sound encoding were considered significant when p ≤ 0.05.
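The analyses reported here were run in SPSS; purely as an illustration of the same decision logic (check equality of variances first, then fall back to the Welch/Satterthwaite correction when they differ), a SciPy sketch with hypothetical data is shown below.

```python
import numpy as np
from scipy import stats

# Hypothetical values for one speech-evoked ABR measure in each group.
g1 = np.array([20.1, 19.0, 22.5, 18.2, 25.0, 19.4, 17.9, 21.0, 20.3, 19.8])  # CWS
g2 = np.array([17.3, 16.8, 18.0, 15.1, 17.5, 18.9, 16.2, 17.0, 17.9, 16.7])  # CWNS

# Test equality of variances; if it is rejected, use Welch's t-test
# (the Satterthwaite approximation) instead of the pooled-variance test.
_, p_var = stats.levene(g1, g2)
t_stat, p_mean = stats.ttest_ind(g1, g2, equal_var=(p_var > 0.05))
print(f"variance test p = {p_var:.3f}, t = {t_stat:.2f}, mean test p = {p_mean:.4f}")
```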

Results

Click-evoked ABR

There were no significant differences between the groups regarding the latency values of waves I, III and V or the I-III, III-V and I-V interpeaks (Tables 1-3).

Peak  Ear  Group  N   Mean  SD    Minimum  Median  Maximum
I     RE   G2     10  1.56  0.07  1.45     1.56    1.65
I     RE   G1     10  1.52  0.14  1.23     1.52    1.75
I     LE   G2     10  1.58  0.13  1.40     1.57    1.75
I     LE   G1     10  1.54  0.12  1.30     1.55    1.73
III   RE   G2     10  3.80  0.10  3.60     3.81    3.95
III   RE   G1     10  3.83  0.08  3.73     3.84    3.95
III   LE   G2     10  3.80  0.11  3.52     3.85    3.88
III   LE   G1     10  3.76  0.06  3.70     3.76    3.85
V     RE   G2     10  5.59  0.11  5.35     5.62    5.68
V     RE   G1     10  5.55  0.19  5.22     5.53    5.80
V     LE   G2     10  5.60  0.10  5.50     5.60    5.80
V     LE   G1     10  5.56  0.09  5.50     5.54    5.70
Note: RE: Right Ear; LE: Left Ear; SD: Standard Deviation; N: Number of Individuals.

Table 1: Latency values (ms) for discrete peak responses collected in G1 and G2 – Click-evoked ABR.

Interpeak  Ear  Group  N   Mean  SD    Minimum  Median  Maximum
I-III      RE   G2     10  2.23  0.08  2.03     2.24    2.32
I-III      RE   G1     10  2.31  0.16  2.02     2.34    2.52
I-III      LE   G2     10  2.23  0.12  2.05     2.18    2.43
I-III      LE   G1     10  2.23  0.13  2.05     2.23    2.53
III-V      RE   G2     10  1.82  0.11  1.57     1.84    1.95
III-V      RE   G1     10  1.73  0.21  1.45     1.74    2.07
III-V      LE   G2     10  1.82  0.10  1.65     1.84    1.97
III-V      LE   G1     10  1.80  0.10  1.65     1.78    1.98
I-V        RE   G2     10  4.05  0.12  3.80     4.08    4.20
I-V        RE   G1     10  4.04  0.17  3.82     4.03    4.37
I-V        LE   G2     10  4.04  0.08  3.93     4.01    4.20
I-V        LE   G1     10  4.02  0.16  3.77     4.05    4.28
Note: RE: Right Ear; LE: Left Ear; SD: Standard Deviation; N: Number of Individuals.

Table 2: Interpeak latency values (ms) for responses collected in G1 and G2 – Click-evoked ABR.

Peak/Interpeak G1 versus G2 LE versus RE Group versus Ear
I 0.42 0.28 0.96
III 0.85 0.14 0.14
V 0.40 0.75 0.94
I-III 0.44 0.06 0.08
III-V 0.31 0.24 0.31
I-V 0.82 0.73 0.95
Note: RE: Right Ear; LE: Left Ear

Table 3: P-values (repeated-measures ANOVA) – Click-evoked ABR.

Speech-evoked ABR

The latency and amplitude values of the speech-evoked ABRs for both groups are displayed in Table 4. Statistical analyses indicated that the variability of the wave C latency, as well as of both VA complex measures (latency and amplitude), was higher in G1 than in G2. There was a significant difference between groups for the latency of wave C and for the amplitude of the VA complex (Tables 4 and 5); for both measures, the mean values in G1 were greater than those in G2.

Peak       Group  N   Mean   SD    Minimum  Median  Maximum
V (msec)   G2     10  6.93   0.62  6.0      6.9     8.0
V (msec)   G1     10  6.54   0.58  5.0      6.6     7.0
A (msec)   G2     10  8.58   0.74  7.5      8.4     9.9
A (msec)   G1     10  8.33   1.19  6.1      8.0     9.9
C (msec)   G2     10  17.14  1.24  15.1     17.3    18.9
C (msec)   G1     10  20.32  2.33  17.9     19.1    25.0
F (msec)   G2     10  40.78  1.43  39.3     40.5    43.6
F (msec)   G1     10  40.10  0.79  39.0     39.9    41.5
VA (msec)  G2     10  1.65   0.46  1.1      1.6     2.5
VA (msec)  G1     10  1.79   0.83  0.6      1.8     2.8
VA (µV)    G2     10  0.43   0.19  0.2      0.4     0.8
VA (µV)    G1     10  1.31   0.58  0.5      1.2     2.6
Note: SD: Standard Deviation; N: Number of Individuals.

Table 4: Latency and amplitude values for discrete peak responses collected in G1 and G2 – Speech-evoked ABR.

Peak            Equality of variances (p)  Equality of means (p)
V               0.97                       0.17
A               0.13                       0.58
C               0.05*                      0.002*
F               0.12                       0.21
VA (latency)    0.02*                      0.65
VA (amplitude)  0.05*                      0.001*
Note: *Statistically significant p-value (p ≤ 0.05).

Table 5: P-values (Student’s t-test) – Speech-evoked ABR.

Discussion and Conclusion

Several studies have used different audiological procedures to investigate auditory processing in stutterers. However, the assessment batteries generally rely on behavioral measures, which can be affected by subject factors and co-occurring disorders [30]. The encoding of temporal and spectral information is fundamental for accurate sound perception, and the identification of abnormalities in this encoding is considered extremely important in speech, language, and learning disorders. Such findings improve the understanding of the neurophysiological mechanisms related to the encoding of acoustic information, support the identification of potential biomarkers, and clarify their relationship with language and cognition [3,4,18,31-34]. The ABR to different stimuli informs the biological mechanisms that subserve auditory processing. Therefore, considering the importance of deficits in auditory information processing to the etiology and diagnosis of stuttering, ABRs were measured to assess the integrity of neurophysiological responses to both click and speech stimuli in children who stutter. Our results showed that the processing of speech stimuli differs between typically developing children and children who stutter.

The click-evoked ABR is an electrophysiological test commonly used in clinical and scientific settings. Since the ABR was first described in the early 1970s, a great deal of research has been carried out using this measure. Concerning click-evoked ABRs in stutterers, controversial results have been reported in the literature. Khedr et al. [35] recorded visual and auditory evoked potentials in stutterers and their non-stuttering peers aged between six and 25 years. The authors found significantly longer latencies (waves I, III and V, and I-III and I-V interpeaks) in the group of stutterers, suggesting that stuttering may be associated with peripheral and central auditory abnormalities. Similarly, Blood and Blood [5] reported abnormal I-V interpeak latency values in adults who stutter. In contrast, other authors have reported normal click-evoked ABR results in children [7] and adults who stutter [13], findings corroborated by the present study (Table 3). According to Khedr et al. [35], the discrepancies among studies may be partially explained by methodological differences, such as the subjects' age and the onset and duration of their fluency disorder.

Currently, special attention has been given to the study of the auditory brainstem's response to complex sounds. According to Kraus [36], this auditory evoked potential provides a wealth of information unobtainable from click- or tone-evoked ABRs, and its increasing use results from the existence of a transparent mapping that connects the evoking stimulus and the response. It also provides information about the efferent auditory system, and the data can be easily and reliably obtained in individuals. Although the speech-evoked ABR has already been recorded in different clinical populations, such as children with dyslexia, specific language impairment, and autism spectrum disorder [18,31,37,38], there are no scientific reports characterizing such responses in children and adults who stutter. The analysis of the speech-evoked ABR results showed significant differences between groups for the VA complex amplitude and the latency values of wave C.
Furthermore, the test for equality of variances indicated significantly greater variability of results in the group of CWS for the latency of wave C and for the latency and amplitude of the VA complex (Table 5). In the current study, the presence of differences between CWS and CWNS for the speech-evoked ABR, together with the absence of differences between those groups for the click-evoked ABR, reinforces the hypothesis that specific neuronal populations appear to be involved in the processing of speech sounds [39,40]. According to Song et al. [41], differences in the encoding of these stimuli may occur because of differences in their acoustic structures. Within this context, the authors suggest that abnormalities in the neural encoding of acoustic stimuli can result from a broader ‘problem’ of the central auditory system that is not detected by procedures such as audiometry or the click-evoked ABR. Significantly higher VA amplitude values were observed in the group of children who stutter. Russo et al. [16] found that this measure showed poor stability when recorded in different sessions, indicating that it might not be the most appropriate parameter for characterizing the encoding of acoustic stimuli. Additionally, according to Wible et al. [42], such results could reflect neural synchrony differences between groups.

Our results also indicated differences between groups in the latency values of a component of the FFR (wave C) and no differences in the latency values of the onset response components (waves V and A), corroborating the hypothesis proposed by Kraus and Nicol [40] that the onset response and the FFR represent distinct blocks that are encoded separately. King et al. [3] and Johnson et al. [43] also found FFR delays in children with learning problems. These authors reported that their findings corroborate the model proposed by Johnson et al. [33] and Kraus and Nicol [40], which states that waves A, C and O are generated by neural mechanisms reflecting the transient features associated with the filter characteristics of the speech signal, while waves D, E and F are generated by neural mechanisms related to information from the sound source, such as the fundamental frequency. Additionally, Marler and Champlin [44] hypothesized that factors such as synchrony abnormalities, activation of alternative pathways, increased inhibitory mechanisms, or a combination of these factors could explain differences in speech-evoked ABR results. The differences between the results of CWS and CWNS could also reflect differences in certain characteristics of temporal processing [45]. According to Wible et al. [42], the acoustic structure of speech is characterized by sudden changes in spectral pattern; therefore, differences in the processing, perception, and discrimination of complex sounds might interfere with certain speech and language skills. Specific correlations between the ABR and stuttering, or between the ABR and stuttering severity levels, were not found in the present study; the statistical power of the data was probably limited by the small sample size. Thus, further studies of the speech-evoked ABR in stutterers with different severity levels, using larger samples, are needed to improve data reliability.

Several studies have highlighted intrinsic relations between abnormal processing of acoustic information in the brainstem and in the cortex. Wible et al. [42] reported that abnormalities at lower levels of the auditory pathway can limit the effectiveness of certain acoustic information processing at the cortical level. Furthermore, some studies [32,46] reinforce the idea that brainstem timing deficits may affect the cortical processing of acoustic information. In this study, we found brainstem timing deficits in the CWS group. Thus, considering the influence of brainstem responses on the cortical processing of acoustic information, it would be of great value to investigate the presence of deficits in cortical processing of acoustic information using long-latency responses.

Furthermore, given that some studies indicate neuroanatomical differences in the integrity of cerebral white matter between children with persistent stuttering and those who have recovered naturally from stuttering [2], recording AEPs in these two groups might allow the identification of possible differences related to the encoding of auditory information. Additionally, considering the differences between children and adults who stutter in the asymmetry pattern of the cerebral hemispheres [1,2,47,48], we emphasize the importance of characterizing these responses in different age groups, since some degree of compensation for connectivity failures in the left hemisphere may occur in adults who stutter. Finally, future research using different AEPs is needed to support the study of speech processing at different levels of the central auditory nervous system in stutterers, which will provide relevant and objective information about possible “subclinical” abnormalities related to speech perception and processing in these individuals. It will also allow a better assessment of the benefits and limitations of using stimuli of different complexities in the electrophysiological evaluation of people who stutter.

References

Citation: Gonçalves IC, De Andrade CRF, Matas CG (2015) Auditory Processing of Speech and Non-Speech Stimuli in Children who Stutter: Electrophysiological Evidences. Brain Disord Ther 4:199.

Copyright: © 2015 Gonçalves IC, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.