Browsing by Author "Mao, Darren"
Now showing 1 - 6 of 6
- Item: Auditory Brainstem Representation of the Voice Pitch Contours in the Resolved and Unresolved Components of Mandarin Tones (Frontiers in Neuroscience, 2018-12). Peng, Fei; McKay, Colette; Mao, Darren; Hou, Wensheng; Innes-Brown, Hamish
  Accurate perception of voice pitch plays a vital role in speech understanding, especially for tonal languages such as Mandarin. Lexical tones are primarily distinguished by the fundamental frequency (F0) contour of the acoustic waveform. It has been shown that the auditory system can extract the F0 from both resolved and unresolved harmonics, and that tone identification is better for resolved than for unresolved harmonics. To evaluate the neural response to the resolved and unresolved components of Mandarin tones in quiet and in speech-shaped noise, we recorded the frequency-following response. Four types of stimuli were used: speech containing only resolved harmonics or only unresolved harmonics, each presented in quiet and in speech-shaped noise. Frequency-following responses (FFRs) were recorded to alternating-polarity stimuli and were added or subtracted to enhance the neural response to the envelope (FFR_ENV) or the temporal fine structure (FFR_TFS), respectively. The strength of the F0 representation in the FFR_ENV was evaluated by the peak autocorrelation value in the temporal domain and the peak phase-locking value (PLV) at F0 in the spectral domain. Both measures showed that the FFR_ENV F0 strength in quiet was significantly stronger than in noise for speech containing unresolved harmonics, but not for speech containing resolved harmonics. The representation of the temporal fine structure in the FFR_TFS was assessed by the PLV at the harmonic nearest to F1 (the 4th harmonic of F0); this PLV was significantly larger for resolved than for unresolved harmonics.
  Spearman's correlation showed that the FFR_ENV F0 strength for unresolved harmonics correlated with tone identification performance in noise (0 dB SNR). These results indicate that the FFR_ENV F0 strength for speech with resolved harmonics was not affected by noise, whereas the response to speech with unresolved harmonics was significantly smaller in noise than in quiet. Our results suggest that coding of resolved harmonics is more important than envelope coding for tone identification in noise.
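The envelope and fine-structure FFRs described above are obtained by adding and subtracting responses to opposite-polarity stimuli, and the PLV quantifies phase consistency at a target frequency across trials. A minimal sketch of these two computations (function names and the single-FFT-bin PLV estimator are illustrative, not the study's actual pipeline):

```python
import numpy as np

def envelope_and_tfs(resp_pos, resp_neg):
    """Combine FFRs recorded to opposite-polarity stimuli.

    Adding the two responses cancels the stimulus fine structure and
    enhances the envelope-following component (FFR_ENV); subtracting
    them enhances the fine-structure component (FFR_TFS).
    """
    ffr_env = (resp_pos + resp_neg) / 2.0
    ffr_tfs = (resp_pos - resp_neg) / 2.0
    return ffr_env, ffr_tfs

def phase_locking_value(epochs, fs, freq):
    """Phase-locking value at `freq` (Hz) across single-trial epochs.

    epochs: array of shape (n_trials, n_samples); fs: sampling rate.
    PLV is the magnitude of the mean unit phase vector across trials:
    1 means perfectly consistent phase at that frequency, 0 random phase.
    """
    n = epochs.shape[1]
    spectra = np.fft.rfft(epochs, axis=1)
    bin_idx = int(round(freq * n / fs))   # FFT bin nearest the target frequency
    phases = np.angle(spectra[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))
```

A phase-locked signal buried in noise yields a PLV near 1, while trials of pure noise yield a PLV near zero that shrinks with the number of trials.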
- Item: Cortical auditory evoked potential time-frequency growth functions for fully objective hearing threshold estimation (Elsevier, Inc., 2018-12). Mao, Darren; Innes-Brown, Hamish; Petoe, Matthew; Wong, Yan; McKay, Colette
  Cortical auditory evoked potential (CAEP) thresholds have been shown to correlate well with behaviourally determined hearing thresholds. Growth functions of CAEPs show promise as an alternative to single-level detection for objective hearing threshold estimation; however, the accuracy and clinical relevance of this method are not well examined. In this study, we used temporal and spectral CAEP features to generate feature growth functions. Spectral features may be more robust than traditional peak-picking methods where CAEP morphology is variable, such as in children or hearing device users. Behavioural hearing thresholds were obtained and CAEPs were recorded in response to a 1 kHz pure tone from twenty adults with no hearing loss. Four features were extracted from the CAEPs: peak-to-peak amplitude, root-mean-square, peak spectral power and peak phase-locking value (PLV). Functions relating each feature to stimulus level were used to calculate objective hearing threshold estimates. We assessed the performance of each feature by calculating the difference between the objective estimate and the behaviourally determined threshold. The peak PLV feature performed best, with a mean threshold error of 2.7 dB and a standard deviation of 5.9 dB across subjects. We also examined the relation between recording time, data quality and threshold estimate error, and found that, on average, 12.7 minutes of recording were needed for 95% confidence that a single threshold estimate was within 20 dB of the behavioural threshold using the peak-to-peak amplitude feature, while 14 minutes were needed for the peak PLV feature.
  These results show that the PLV of CAEPs can be used to obtain a clinically relevant hearing threshold estimate. Its potential stability under differing morphology may be an advantage when testing infants or cochlear implant users.
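Growth-function fitting relates a CAEP feature to stimulus level and reads the threshold off the fitted function. A minimal sketch assuming a straight-line growth model and a known feature noise floor (both are assumptions for illustration; the study's actual growth model and fitting procedure may differ):

```python
import numpy as np

def estimate_threshold(levels_db, feature, noise_floor):
    """Hypothetical growth-function threshold estimate.

    Fits a straight line to a CAEP feature (e.g. peak PLV) as a
    function of stimulus level, then solves for the level at which
    the fitted line crosses the feature's noise floor. That crossing
    is taken as the objective threshold estimate.
    """
    slope, intercept = np.polyfit(levels_db, feature, 1)
    return (noise_floor - intercept) / slope
```

For example, a feature growing linearly above some level crosses a fixed noise floor at a single point, and that point is returned as the estimated threshold in dB.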
- Item: Fully objective hearing threshold estimation in cochlear implant users using phase-locking value growth functions (Elsevier B.V., 2019-03). Mao, Darren; Innes-Brown, Hamish; Petoe, Matthew; Wong, Yan; McKay, Colette
  Cochlear implant users require fitting of electrical threshold and comfort levels for optimal access to sound. In this study, we used single-channel cortical auditory evoked potentials (CAEPs) obtained from 20 participants using a Nucleus device. A fully objective method to estimate threshold levels was developed, using growth function fitting and the peak phase-locking value feature. Results demonstrated that growth function fitting is a viable method for estimating threshold levels in cochlear implant users, with a strong correlation (r = 0.979, p < 0.001) with behavioural thresholds. Additionally, we compared threshold estimates using CAEPs acquired from a standard montage (Cz to mastoid) against a montage of recording channels near the cochlear implant, simulating recording from the device itself. The correlation between estimated and behavioural thresholds remained strong (r = 0.966, p < 0.001); however, the recording time needed to be increased to achieve similar estimate accuracy. Finally, a method for estimating comfort levels was investigated, which showed that the comfort level estimates were mildly correlated with behavioural comfort levels (r = 0.50, p = 0.024).
- Item: Language networks of normal-hearing infants exhibit topological differences between resting and steady states: An fNIRS functional connectivity study (Human Brain Mapping, 2024-09). Paranawithana, Ishara; Mao, Darren; McKay, Colette M; Wong, Yan T
  Task-related studies have consistently reported that listening to speech sounds activates the temporal and prefrontal regions of the brain. However, it is not well understood how the functional organization of auditory and language networks during speech processing differs from that of the resting state. Knowledge of language network organization in typically developing infants could serve as an important biomarker for understanding the network-level disruptions expected in infants with hearing impairment. We hypothesized that topological differences of language networks can be characterized using functional connectivity measures in two experimental conditions: (1) complete silence (resting state) and (2) repetitive continuous speech sounds (steady state). Thirty normal-hearing infants (14 males and 16 females, age: 7.8 ± 4.8 months) were recruited for this study. Brain activity was recorded from bilateral temporal and prefrontal regions associated with speech and language processing in both conditions. Topological differences of the functional language networks were characterized using graph theoretical analysis, with normalized global efficiency and clustering coefficient as measures of functional integration and segregation, respectively. We found that, overall, the language networks of infants demonstrate an economic small-world organization in both resting and steady states. Moreover, language networks exhibited significantly higher functional integration and significantly lower functional segregation in the resting state compared to the steady state.
  A secondary analysis investigating developmental effects in infants aged 6 months or below versus above 6 months revealed that these topological differences in functional integration and segregation across resting and steady states can be reliably detected after the first 6 months of life. The higher functional integration observed in the resting state suggests that the language networks of infants support more efficient parallel information processing across distributed language regions in the absence of speech stimuli. Moreover, the higher functional segregation in the steady state indicates that speech information processing occurs within densely interconnected specialized regions of the language network.
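The integration and segregation measures above are the global efficiency (mean inverse shortest-path length) and the average clustering coefficient; in the study these are normalized against surrogate networks. A minimal sketch of the raw, unnormalized metrics on a binary adjacency matrix (the normalization step and the thresholding of FC matrices into binary graphs are omitted here):

```python
import numpy as np
from collections import deque

def clustering_coefficient(A):
    """Mean local clustering coefficient of an undirected binary graph.

    For each node, the fraction of possible edges among its neighbours
    that actually exist; averaged over all nodes (segregation measure).
    """
    n = len(A)
    cc = []
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2  # edges among neighbours
        cc.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(cc))

def global_efficiency(A):
    """Mean inverse shortest-path length over all node pairs (integration).

    Uses breadth-first search from each node; unreachable pairs
    contribute zero efficiency.
    """
    n = len(A)
    total = 0.0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))
```

A fully connected graph scores 1 on both measures, while a sparse ring has zero clustering and reduced efficiency; the "economic small-world" profile reported above combines high clustering with near-random efficiency.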
- Item: Resting-State Functional Connectivity Predicts Cochlear-Implant Speech Outcomes (Ear & Hearing, 2024-07-16). Esmaelpoor, Jamal; Peng, Tommy; Jelfs, Beth; Mao, Darren; Shader, Maureen J; McKay, Colette M
  Cochlear implants (CIs) have revolutionized hearing restoration for individuals with severe or profound hearing loss. However, substantial and unexplained variability persists in CI outcomes, even when subject-specific factors such as age and duration of deafness are taken into account. In a pioneering study, we use resting-state functional near-infrared spectroscopy to predict speech-understanding outcomes before and after CI implantation. Our hypothesis is that resting-state functional connectivity (FC) reflects brain plasticity following hearing loss and implantation; we specifically target the average clustering coefficient of resting FC networks to capture variation among CI users.
- Item: Two Independent Response Mechanisms to Auditory Stimuli Measured with Functional Near-Infrared Spectroscopy in Sleeping Infants (Trends in Hearing, 2024-07-25). Lee, Onn Wah; Mao, Darren; Wunderlich, Julia; Balasubramanian, Gautam; Haneman, Mica; Korneev, Mikhail; McKay, Colette M
  This study investigated the morphology of the functional near-infrared spectroscopy (fNIRS) response to speech sounds measured from 16 sleeping infants, and how it changes with repeated stimulus presentation. We observed a positive peak followed by a wide negative trough, the latter being most evident in early epochs. We argue that the overall response morphology captures the effects of two simultaneous but independent response mechanisms, both activated at stimulus onset: the obligatory response of the auditory system to a sound stimulus, and a neural suppression effect induced by the arousal system. Because the two effects behave differently over repeated epochs, it is possible to separate them mathematically and to use fNIRS to study factors that affect the development and activation of the arousal system in infants. The results also imply that standard fNIRS analysis techniques need to be adjusted to account for the possibility that multiple brain systems are activated simultaneously, and that the response to a stimulus is not necessarily stationary.