Browsing by Author "Innes-Brown, Hamish"
- The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant (Frontiers, 2013-10)
  Marozeau, Jeremy; Innes-Brown, Hamish; Blamey, Peter
  Our ability to listen selectively to single sound sources in complex auditory environments is termed “auditory stream segregation.” This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes while four physical properties of the distractor notes were varied. In experiment 2, listeners rated the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances, and regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope to achieve stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
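The analysis pipeline described here (multidimensional scaling of dissimilarity ratings, followed by regression against the physical parameters) can be sketched in a few lines of Python. This is a minimal illustration only; the rating matrix, number of melody patterns, and level values below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical 6x6 matrix of averaged pairwise dissimilarity ratings
# between melody patterns (symmetric, zero diagonal).
dissim = np.array([
    [0.0, 2.1, 3.4, 4.0, 2.8, 3.1],
    [2.1, 0.0, 2.5, 3.6, 2.2, 2.9],
    [3.4, 2.5, 0.0, 1.9, 3.0, 2.4],
    [4.0, 3.6, 1.9, 0.0, 3.3, 2.7],
    [2.8, 2.2, 3.0, 3.3, 0.0, 1.8],
    [3.1, 2.9, 2.4, 2.7, 1.8, 0.0],
])

# Non-metric MDS on the precomputed dissimilarities recovers a
# low-dimensional perceptual space.
mds = MDS(n_components=2, dissimilarity="precomputed",
          metric=False, random_state=0)
coords = mds.fit_transform(dissim)

# Regress a physical parameter (hypothetical level differences in dB)
# onto the perceptual coordinates to relate a cue to a dimension.
level_db = np.array([0.0, 2.0, 4.0, 6.0, 1.0, 3.0])
design = np.column_stack([coords, np.ones(len(level_db))])
beta, *_ = np.linalg.lstsq(design, level_db, rcond=None)
print("perceptual coordinates:\n", coords)
print("regression weights (dim1, dim2, intercept):", beta)
```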
- Assessing hearing by measuring heartbeat: The effect of sound level (PLoS One, 2019-03)
  Shoushtarian, Mehrnaz; Weder, Stefan; Innes-Brown, Hamish; McKay, Colette
  Functional near-infrared spectroscopy (fNIRS) is a non-invasive brain imaging technique that measures changes in oxygenated and de-oxygenated hemoglobin concentration and can provide a measure of brain activity. In addition to neural activity, fNIRS signals contain components that can be used to extract physiological information such as cardiac measures. Previous studies have shown changes in cardiac activity in response to different sounds. This study investigated whether cardiac responses collected using fNIRS differ with the loudness of sounds. fNIRS data were collected from 28 normal-hearing participants. Cardiac response measures evoked by broadband, amplitude-modulated sounds were extracted for four sound intensities ranging from near-threshold to comfortably loud levels (15, 40, 65 and 90 dB sound pressure level (SPL)). Following onset of the noise stimulus, heart rate initially decreased for sounds of 15 and 40 dB SPL, reaching a significantly lower rate at 15 dB SPL. For sounds at 65 and 90 dB SPL, increases in heart rate were seen. To quantify the timing of significant changes, inter-beat intervals were assessed. For sounds at 40 dB SPL, an immediate significant change in the first two inter-beat intervals following sound onset was found. At other levels, the most significant change appeared later (beats 3 to 5 following sound onset). In conclusion, changes in heart rate were associated with the level of sound, with a clear difference in response to near-threshold sounds compared with comfortably loud sounds. These findings may be used alone or in conjunction with other measures such as fNIRS brain activity for evaluation of hearing ability.
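Cardiac measures like these ride on top of the slow hemodynamic fNIRS signal and can be recovered by band-pass filtering around the cardiac frequency band and detecting pulse peaks. Below is a rough sketch of that extraction under assumed parameters; the 10 Hz sampling rate, filter band, and synthetic signal are illustrative, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 10.0  # Hz; a typical fNIRS sampling rate (assumed here)

# Synthetic stand-in for one fNIRS channel: slow hemodynamics plus a
# ~1.2 Hz cardiac oscillation and measurement noise.
t = np.arange(0, 60, 1 / fs)
signal = (0.5 * np.sin(2 * np.pi * 0.05 * t)
          + 0.1 * np.sin(2 * np.pi * 1.2 * t)
          + 0.02 * np.random.default_rng(0).standard_normal(t.size))

# Band-pass around the adult cardiac band (~0.8-2.5 Hz) to isolate
# the pulse component.
b, a = butter(3, [0.8 / (fs / 2), 2.5 / (fs / 2)], btype="band")
cardiac = filtfilt(b, a, signal)

# Each detected peak is one heartbeat; successive peak spacings are
# the inter-beat intervals analysed in the study.
peaks, _ = find_peaks(cardiac, distance=int(0.4 * fs))
ibi = np.diff(peaks) / fs        # inter-beat intervals (s)
heart_rate = 60.0 / ibi          # instantaneous heart rate (bpm)
print("mean IBI: %.3f s, mean HR: %.1f bpm" % (ibi.mean(), heart_rate.mean()))
```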
- Audio-visual integration in cochlear implant listeners and the effect of age difference (Acoustical Society of America, 2019-12)
  Zhou, Xin; Innes-Brown, Hamish; McKay, Colette
  This study aimed to investigate differences in audio-visual (AV) integration between cochlear implant (CI) listeners and normal-hearing (NH) adults. A secondary aim was to investigate the effect of age differences by examining AV integration in groups of older and younger NH adults. Seventeen CI listeners, 13 similarly aged NH adults, and 16 younger NH adults were recruited. Two speech identification experiments were conducted to evaluate AV integration of speech cues. In the first experiment, reaction times in audio-alone (A-alone), visual-alone (V-alone), and AV conditions were measured during a speeded task in which participants were asked to identify a target sound /aSa/ among 11 alternatives. A race model was applied to evaluate AV integration. In the second experiment, identification accuracies were measured using a closed set of consonants and an open set of consonant-nucleus-consonant words. The authors quantified AV integration using a combination of a probability model and a cue integration model (which model participants' AV accuracy by assuming no integration or optimal integration, respectively). The results showed that experienced CI listeners had no better AV integration than similarly aged NH adults. Further, there was no significant difference in AV integration between the younger and older NH adults.
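Race models of this kind are commonly tested with Miller's race-model inequality: the AV reaction-time distribution is compared against the sum of the two unisensory distributions, and any excess indicates integration beyond statistical facilitation. Whether this exact test was used here is an assumption; the sketch below uses simulated reaction times with entirely hypothetical values.

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative RT distribution evaluated on a time grid."""
    rts = np.sort(rts)
    return np.searchsorted(rts, grid, side="right") / rts.size

# Hypothetical reaction times (ms) for audio-alone, visual-alone, and AV.
rng = np.random.default_rng(1)
rt_a = rng.normal(520, 60, 200)
rt_v = rng.normal(560, 70, 200)
rt_av = rng.normal(460, 55, 200)

grid = np.linspace(300, 800, 101)
# Race-model bound: P(AV <= t) cannot exceed P(A <= t) + P(V <= t)
# if the two modalities race independently.
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
violation = ecdf(rt_av, grid) - bound

# Positive values mean the AV distribution is faster than independent
# unisensory races could produce, i.e. evidence for integration.
print("max race-model violation: %.3f" % violation.max())
```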
- Audiovisual integration in noise by children and adults (Elsevier, 2010)
  Barutchu, Ayla; Danaher, Jaclyn; Crewther, Sheila; Innes-Brown, Hamish; Shivdasani, Mohit; Paolini, Antonio
  The aim of this study was to investigate the development of multisensory facilitation in primary-school-age children under conditions of auditory noise. Motor reaction times and accuracy were recorded from 8-year-olds, 10-year-olds and adults during an auditory, a visual, and an audiovisual detection task. Auditory signal-to-noise ratios (SNRs) of 30, 22, 12 and 9 dB were compared across the different age groups. Multisensory facilitation was greater in adults than in children, though performance for all age groups was affected by the presence of background noise. It is posited that changes in multisensory facilitation with increased auditory noise may be due to changes in attention bias.
- Auditory Brainstem Representation of the Voice Pitch Contours in the Resolved and Unresolved Components of Mandarin Tones (Frontiers in Neuroscience, 2018-12)
  Peng, Fei; McKay, Colette; Mao, Darren; Hou, Wensheng; Innes-Brown, Hamish
  Accurate perception of voice pitch plays a vital role in speech understanding, especially for tonal languages such as Mandarin. Lexical tones are primarily distinguished by the fundamental frequency (F0) contour of the acoustic waveform. It has been shown that the auditory system can extract the F0 from both resolved and unresolved harmonics, and that tone identification performance is better with resolved than with unresolved harmonics. To evaluate the neural response to the resolved and unresolved components of Mandarin tones in quiet and in speech-shaped noise, we recorded the frequency-following response (FFR). Four types of stimuli were used: speech containing either only resolved or only unresolved harmonics, presented both in quiet and in speech-shaped noise. FFRs were recorded to alternating-polarity stimuli and were added or subtracted to enhance the neural response to the envelope (FFR_ENV) or the temporal fine structure (FFR_TFS), respectively. The neural representation of F0 strength reflected by the FFR_ENV was evaluated by the peak autocorrelation value in the temporal domain and the peak phase-locking value (PLV) at F0 in the spectral domain. Both evaluation methods showed that the FFR_ENV F0 strength in quiet was significantly stronger than in noise for speech including unresolved harmonics, but not for speech including resolved harmonics. The neural representation of the temporal fine structure reflected by the FFR_TFS was assessed by the PLV at the harmonic nearest F1 (the 4th harmonic of F0); this PLV was significantly larger for resolved than for unresolved harmonics. Spearman's correlation showed that the FFR_ENV F0 strength for unresolved harmonics was correlated with tone identification performance in noise (0 dB SNR). These results showed that the FFR_ENV F0 strength for speech sounds with resolved harmonics was not affected by noise, whereas the response to speech sounds with unresolved harmonics was significantly smaller in noise than in quiet. Our results suggest that coding of resolved harmonics was more important than coding of the envelope for tone identification performance in noise.
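The envelope/fine-structure split and the two F0-strength metrics can be expressed compactly: responses to the two stimulus polarities are averaged (enhancing the envelope response) or subtracted (enhancing fine structure), and phase locking is measured as the across-trial consistency of phase at the frequency of interest. A sketch on synthetic epochs; the sampling rate, F0, trial counts, and signal content are all assumed for illustration.

```python
import numpy as np

fs = 16000.0                 # EEG sampling rate (assumed)
f0 = 100.0                   # example F0 of the tone stimulus
n_trials, n_samp = 100, 1600

# Synthetic epochs for the two stimulus polarities (stand-ins for FFRs).
rng = np.random.default_rng(2)
t = np.arange(n_samp) / fs
pos = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal((n_trials, n_samp))
neg = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal((n_trials, n_samp))

# Adding polarities enhances the envelope response (FFR_ENV);
# subtracting enhances the temporal fine structure (FFR_TFS).
ffr_env = (pos + neg) / 2.0
ffr_tfs = (pos - neg) / 2.0

def plv_at(freq, epochs, fs):
    """Phase-locking value across trials at one frequency bin."""
    spec = np.fft.rfft(epochs, axis=1)
    k = int(round(freq * epochs.shape[1] / fs))
    phasors = spec[:, k] / np.abs(spec[:, k])   # unit phasors per trial
    return np.abs(phasors.mean())               # 1 = perfect phase locking

print("PLV at F0 (envelope response): %.3f" % plv_at(f0, ffr_env, fs))

# F0 strength in the time domain: peak of the normalized autocorrelation
# of the averaged FFR_ENV within a plausible pitch-lag range.
avg = ffr_env.mean(axis=0)
ac = np.correlate(avg, avg, mode="full")[n_samp - 1:]
ac /= ac[0]
lag = slice(int(fs / 300), int(fs / 60))        # 60-300 Hz pitch range
print("autocorrelation F0 strength: %.3f" % ac[lag].max())
```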
- Auditory Stream Segregation and Selective Attention for Cochlear Implant Listeners: Evidence From Behavioral Measures and Event-Related Potentials (Frontiers in Neuroscience, 2018-08)
  Paredes-Gallardo, Andreu; Innes-Brown, Hamish; Madsen, Sara; Dau, Torsten; Marozeau, Jeremy
  The role of the spatial separation between the stimulating electrodes (electrode separation) in sequential stream segregation was explored in cochlear implant (CI) listeners using a deviant detection task. Twelve CI listeners were instructed to attend to a series of target sounds in the presence of interleaved distractor sounds. A deviant was randomly introduced in the target stream either at the beginning, middle or end of each trial. The listeners were asked to detect sequences that contained a deviant and to report its location within the trial. The perceptual segregation of the streams should, therefore, improve deviant detection performance. The electrode range for the distractor sounds was varied, resulting in different amounts of overlap between the target and the distractor streams. For the largest electrode separation condition, event-related potentials (ERPs) were recorded under active and passive listening conditions. The listeners were asked to perform the behavioral task for the active listening condition and encouraged to watch a muted movie for the passive listening condition. Deviant detection performance improved with increasing electrode separation between the streams, suggesting that larger electrode differences facilitate the segregation of the streams. Deviant detection performance was best for deviants occurring late in the sequence, indicating that a segregated percept builds up over time. The analysis of the ERP waveforms revealed that auditory selective attention modulates the ERP responses in CI listeners. Specifically, the responses to the target stream were, overall, larger in the active relative to the passive listening condition. Conversely, the ERP responses to the distractor stream were not affected by selective attention. However, no significant correlation was observed between the behavioral performance and the amount of attentional modulation. Overall, the findings from the present study suggest that CI listeners can use electrode separation to perceptually group sequential sounds. Moreover, selective attention can be deployed on the basis of the resulting auditory objects, as reflected by the attentional modulation of the ERPs at the group level.
- Cortical auditory evoked potential time-frequency growth functions for fully objective hearing threshold estimation (Elsevier, Inc., 2018-12)
  Mao, Darren; Innes-Brown, Hamish; Petoe, Matthew; Wong, Yan; McKay, Colette
  Cortical auditory evoked potential (CAEP) thresholds have been shown to correlate well with behaviourally determined hearing thresholds. Growth functions of CAEPs show promise as an alternative to single-level detection for objective hearing threshold estimation; however, the accuracy and clinical relevance of this method are not well examined. In this study, we used temporal and spectral CAEP features to generate feature growth functions. Spectral features may be more robust than traditional peak-picking methods where CAEP morphology is variable, such as in children or hearing device users. Behavioural hearing thresholds were obtained and CAEPs were recorded in response to a 1 kHz pure tone from twenty adults with no hearing loss. Four features, peak-to-peak amplitude, root-mean-square, peak spectral power and peak phase-locking value (PLV), were extracted from the CAEPs. Functions relating each feature with stimulus level were used to calculate objective hearing threshold estimates. We assessed the performance of each feature by calculating the difference between the objective estimate and the behaviourally determined threshold. The peak PLV feature performed best, with a mean threshold error of 2.7 dB and a standard deviation of 5.9 dB across subjects. We also examined the relation between recording time, data quality and threshold estimate errors, and found that, on average, 12.7 minutes of recording was needed for 95% confidence that a single threshold estimate was within 20 dB of the behavioural threshold using the peak-to-peak amplitude feature, while 14 minutes was needed for the peak PLV feature. These results show that the PLV of CAEPs can be used to find a clinically relevant hearing threshold estimate. Its potential stability under differing morphology may be an advantage in testing infants or cochlear implant users.
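One common way to turn a feature growth function into a threshold estimate (not necessarily the paper's exact procedure) is to fit the feature against stimulus level and solve for the level at which the fitted function meets the pre-stimulus noise floor. A sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical CAEP feature values (e.g., peak-to-peak amplitude, uV)
# at four stimulus levels (dB SPL); the noise floor would come from
# pre-stimulus baseline activity.
levels = np.array([20.0, 40.0, 60.0, 80.0])
feature = np.array([1.1, 2.3, 3.8, 5.2])
noise_floor = 0.9

# Fit a straight-line growth function and solve for the level at which
# the fitted feature drops to the noise floor: the threshold estimate.
slope, intercept = np.polyfit(levels, feature, 1)
threshold = (noise_floor - intercept) / slope
print("objective threshold estimate: %.1f dB SPL" % threshold)
```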
- Cortical auditory evoked potentials as an objective measure of behavioral thresholds in cochlear implant users (Elsevier B.V, 2015-09)
  Visram, Anisa; Innes-Brown, Hamish; El-deredy, Wael; McKay, Colette
  The aim of this study was to assess the suitability of using cortical auditory evoked potentials (CAEPs) as an objective tool for predicting behavioral hearing thresholds in cochlear implant (CI) users. Nine experienced adult CI users of Cochlear™ devices participated. Behavioral thresholds were measured in CI users across apical, mid and basal electrodes. CAEPs were measured for the same stimuli (50 ms pulse trains of 900-pps rate) at a range of input levels across the individual's psychophysical dynamic range (DR). Amplitude growth functions using global field power (GFP) were plotted, and from these the CAEP thresholds were extrapolated and compared to the behavioral thresholds. Increased amplitude and decreased latency of the N1–P2 response was seen with increasing input level. A strong correlation was found between CAEP and behavioral thresholds (r = 0.93), implying that the cortical response may be more useful as an objective programming tool for cochlear implants than the auditory nerve response.
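Global field power, used here to build the amplitude growth functions, is simply the standard deviation across all electrodes at each time point, giving a reference-free measure of overall response strength. A sketch on a stand-in array (the montage size and data are illustrative):

```python
import numpy as np

# Hypothetical averaged CAEP: 32 electrodes x 600 time samples.
rng = np.random.default_rng(3)
erp = rng.standard_normal((32, 600))

# Global field power: the spatial standard deviation across electrodes
# at each time point, i.e. reference-free response strength over time.
gfp = erp.std(axis=0)
print("peak GFP: %.2f uV at sample %d" % (gfp.max(), gfp.argmax()))
```

The peak GFP in the N1–P2 window at each input level would then feed a growth-function extrapolation like the one sketched after the previous item.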
- Cortical fNIRS Responses Can Be Better Explained by Loudness Percept than Sound Intensity (Wolters Kluwer Health, Inc, 2020-01)
  Weder, Stefan; Shoushtarian, Mehrnaz; Olivares, Virginia; Zhou, Xin; Innes-Brown, Hamish; McKay, Colette
  OBJECTIVES: Functional near-infrared spectroscopy (fNIRS) is a brain imaging technique particularly suitable for hearing studies. However, the nature of fNIRS responses to auditory stimuli presented at different stimulus intensities is not well understood. In this study, we investigated whether fNIRS response amplitude was better predicted by stimulus properties (intensity) or by individually perceived attributes (loudness).
  DESIGN: Twenty-two young adults were included in this experimental study. Four different stimulus intensities of a broadband noise were used as stimuli. First, loudness estimates for each stimulus intensity were measured for each participant. Then, the 4 stimulation intensities were presented in counterbalanced order while recording hemoglobin saturation changes from cortical auditory brain areas. The fNIRS response was analyzed in a general linear model design, using 3 different regressors: a non-modulated, an intensity-modulated, and a loudness-modulated regressor.
  RESULTS: Higher intensity stimuli resulted in higher amplitude fNIRS responses. The relationship between stimulus intensity and fNIRS response amplitude was better explained using a regressor based on individual loudness estimates than using a regressor modulated by stimulus intensity alone.
  CONCLUSIONS: Brain activation in response to different stimulus intensities is more reliant upon individual loudness sensation than physical stimulus properties. Therefore, in measurements using different auditory stimulus intensities or subjective hearing parameters, loudness estimates should be examined when interpreting results.
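The regressor comparison described in the design can be sketched as three single-regressor GLMs whose fits are compared. Everything below is assumed for illustration: the canonical double-gamma HRF shape, the block timing, the sampling rate, and the hypothetical loudness values.

```python
import numpy as np
from scipy.stats import gamma

fs = 10.0                          # fNIRS sampling rate, Hz (assumed)
n = int(300 * fs)                  # 5 minutes of recording

# Canonical double-gamma HRF (an assumed shape, standard in GLM analyses).
t_hrf = np.arange(0, 30, 1 / fs)
hrf = gamma.pdf(t_hrf, 6) - gamma.pdf(t_hrf, 16) / 6.0
hrf /= hrf.max()

# Hypothetical block design: 5 s noise bursts every 30 s at four levels.
onsets = np.arange(0, 300, 30)
intensity = np.tile([40.0, 60.0, 80.0, 90.0], 3)[:len(onsets)]
loudness = np.tile([10.0, 30.0, 70.0, 95.0], 3)[:len(onsets)]  # hypothetical per-listener loudness estimates

def regressor(weights):
    """Boxcar with per-block height, convolved with the HRF."""
    box = np.zeros(n)
    for on, w in zip(onsets, weights):
        box[int(on * fs):int((on + 5) * fs)] = w
    return np.convolve(box, hrf)[:n]

def r_squared(reg, y):
    """Fit one regressor plus a constant by OLS; return R^2."""
    X = np.column_stack([reg, np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Synthetic HbO channel that, by construction, follows loudness.
y = regressor(loudness / loudness.max()) + \
    0.3 * np.random.default_rng(4).standard_normal(n)

for name, w in [("unmodulated", np.ones(len(onsets))),
                ("intensity", intensity / intensity.max()),
                ("loudness", loudness / loudness.max())]:
    print("%-12s R^2 = %.3f" % (name, r_squared(regressor(w), y)))
```

Comparing the fit of the three regressors against the same measured channel mirrors the paper's question: which modulation best explains the response amplitude.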
- Cortical Speech Processing in Postlingually Deaf Adult Cochlear Implant Users, as Revealed by Functional Near-Infrared Spectroscopy (SAGE, 2018-07)
  Zhou, Xin; Seghouane, Abd-Krim; Shah, Adnan; Innes-Brown, Hamish; Cross, Will; Litovsky, Ruth; McKay, Colette
  An experiment was conducted to investigate the feasibility of using functional near-infrared spectroscopy (fNIRS) to image cortical activity in the language areas of cochlear implant (CI) users and to explore the association between the activity and their speech understanding ability. Using fNIRS, 15 experienced CI users and 14 normal-hearing participants were imaged while presented with either visual speech or auditory speech. Brain activation was measured from the prefrontal, temporal, and parietal lobe in both hemispheres, including the language-associated regions. In response to visual speech, the activation levels of CI users in an a priori region of interest (ROI), the left superior temporal gyrus or sulcus, were negatively correlated with auditory speech understanding. This result suggests that increased cross-modal activity in the auditory cortex is predictive of poor auditory speech understanding. In another two ROIs, in which CI users showed significantly different mean activation levels in response to auditory speech compared with normal-hearing listeners, activation levels were significantly negatively correlated with CI users' auditory speech understanding. These ROIs were located in the right anterior temporal lobe (including a portion of the prefrontal lobe) and the left middle superior temporal lobe. In conclusion, fNIRS successfully revealed activation patterns in CI users associated with their auditory speech understanding.
- Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users (SAGE, 2015-09)
  Vannson, Nicolas; Innes-Brown, Hamish; Marozeau, Jeremy
  Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part, helping listeners to better segregate the parts and thus providing greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic mode (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) took part in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference for each piece in different listening modes. Results indicated that dichotic presentation produced small but significant improvements in ratings of perceived clarity and preference relative to the monophonic mode. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants' preference ratings or their judgments of intended emotion.
- The effect of timbre and loudness on melody segregation (The Regents of the University of California, 2013-02)
  Marozeau, Jeremy; Innes-Brown, Hamish; Blamey, Peter
  The aim of this study was to examine the effects of three acoustic parameters on the difficulty of segregating a simple 4-note melody from a background of interleaved distractor notes. Melody segregation difficulty ratings were recorded while three acoustic parameters of the distractor notes were varied separately: intensity, temporal envelope, and spectral envelope. Statistical analyses revealed a significant effect of music training on difficulty rating judgments. For participants with music training, loudness was the most efficient perceptual cue, and no difference was found between the dimensions of timbre influenced by temporal and spectral envelope. For the group of listeners with less music training, both loudness and spectral envelope were the most efficient cues. We speculate that the difference between musicians and nonmusicians may be due to differences in processing the stimuli: musicians may process harmonic sound sequences using brain networks specialized for music, whereas nonmusicians may use speech networks.
- The effect of visual cues on difficulty ratings for segregation of musical streams in listeners with impaired hearing (PLOS, 2011-12-15)
  Innes-Brown, Hamish; Marozeau, Jeremy; Blamey, Peter
  Background: Enjoyment of music is an important part of life that may be degraded for people with hearing impairments, especially those using cochlear implants. The ability to follow separate lines of melody is an important factor in music appreciation. This ability relies on effective auditory streaming, which is much reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues could reduce the subjective difficulty of segregating a melody from interleaved background notes in normally hearing listeners, those using hearing aids, and those using cochlear implants.
  Methodology/Principal Findings: Normally hearing listeners (N = 20), hearing aid users (N = 10), and cochlear implant users (N = 11) were asked to rate the difficulty of segregating a repeating four-note melody from random interleaved distracter notes. The pitch of the background notes was gradually increased or decreased throughout blocks, providing a range of difficulty from easy (with a large pitch separation between melody and distracter) to impossible (with the melody and distracter completely overlapping). Visual cues were provided on half the blocks, and difficulty ratings for blocks with and without visual cues were compared between groups. Visual cues reduced the subjective difficulty of extracting the melody from the distracter notes for normally hearing listeners and cochlear implant users, but not hearing aid users.
  Conclusion/Significance: Simple visual cues may improve the ability of cochlear implant users to segregate lines of music, thus potentially increasing their enjoyment of music. More research is needed to determine what type of acoustic cues to encode visually in order to optimise the benefits they may provide.
- Effects of Stimulus Duration on Event-Related Potentials Recorded From Cochlear-Implant Users (Wolters Kluwer Health, 2017-05)
  Presacco, Alessandro; Innes-Brown, Hamish; Goupell, Matthew; Anderson, Samira
  OBJECTIVES: Several studies have investigated the feasibility of using electrophysiology as an objective tool to efficiently map cochlear implants. A pervasive problem when measuring event-related potentials is the need to remove the direct-current (DC) artifact produced by the cochlear implant. Here, we describe how DC artifact removal can corrupt the response waveform and how the appropriate choice of stimulus duration may minimize this corruption.
  DESIGN: Event-related potentials were recorded to a synthesized vowel /a/ with a 170- or 400-ms duration.
  RESULTS: The P2 response, which occurs between 150 and 250 ms, was corrupted by the DC artifact removal algorithm for a 170-ms stimulus duration but was relatively uncorrupted for a 400-ms stimulus duration.
  CONCLUSIONS: To avoid response waveform corruption from DC artifact removal, one should choose a stimulus duration such that the offset of the stimulus does not temporally coincide with the specific peak of interest. While our data have been analyzed with only one specific algorithm, we argue that the length of the stimulus may be a critical factor for any DC artifact removal algorithm.
- Evidence for enhanced multisensory facilitation with stimulus relevance: An electrophysiological investigation (PLOS, 2013-01-23)
  Barutchu, Ayla; Freestone, Dean; Innes-Brown, Hamish; Crewther, David; Crewther, Sheila
  Debate currently exists about the interplay between multisensory processes and bottom-up and top-down influences. However, few studies have looked at neural responses to newly paired audiovisual stimuli that differ in their prescribed relevance. For such newly associated audiovisual stimuli, optimal facilitation of motor actions was observed only when both components of the audiovisual stimuli were targets. Relevant auditory stimuli were found to significantly increase the amplitudes of the event-related potentials at the occipital pole during the first 100 ms post-stimulus onset, though this early integration was not predictive of multisensory facilitation. Activity related to multisensory behavioral facilitation was observed approximately 166 ms post-stimulus, at left central and occipital sites. Furthermore, optimal multisensory facilitation was found to be associated with a latency shift of induced oscillations in the beta range (14–30 Hz) at right hemisphere parietal scalp regions. These findings demonstrate the importance of stimulus relevance to multisensory processing by providing the first evidence that the neural processes underlying multisensory integration are modulated by the relevance of the stimuli being combined. We also provide evidence that such facilitation may be mediated by changes in neural synchronization in occipital and centro-parietal neural populations at early and late stages of neural processing that coincided with stimulus selection, and the preparation and initiation of motor action.
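Induced (non-phase-locked) oscillatory power of the kind whose latency shift is reported here is typically estimated by removing the average evoked response from each trial and convolving the residual with a complex Morlet wavelet. A sketch under assumed parameters; the sampling rate, epoch counts, 7-cycle 20 Hz wavelet, and stand-in data are all illustrative.

```python
import numpy as np

fs = 500.0                         # EEG sampling rate, Hz (assumed)
n_trials, n_samp = 80, 500         # 1 s epochs
rng = np.random.default_rng(5)
epochs = rng.standard_normal((n_trials, n_samp))  # stand-in EEG epochs

def morlet_power(epochs, freq, fs, n_cycles=7):
    """Trial-wise power at one frequency via complex Morlet convolution."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    analytic = np.array([np.convolve(ep, wavelet, mode="same") for ep in epochs])
    return np.abs(analytic) ** 2

# Induced activity: subtract the phase-locked (evoked) average first,
# so only non-phase-locked oscillations remain.
induced = epochs - epochs.mean(axis=0)
beta_power = morlet_power(induced, freq=20.0, fs=fs).mean(axis=0)

# Latency of the beta-power peak: the quantity whose between-condition
# shift was linked to optimal multisensory facilitation.
print("beta power peak latency: %.0f ms" % (1000 * beta_power.argmax() / fs))
```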
- Fully objective hearing threshold estimation in cochlear implant users using phase-locking value growth functions (Elsevier B.V., 2019-03)
  Mao, Darren; Innes-Brown, Hamish; Petoe, Matthew; Wong, Yan; McKay, Colette
  Cochlear implant users require fitting of electrical threshold and comfort levels for optimal access to sound. In this study, we used single-channel cortical auditory evoked potentials (CAEPs) obtained from 20 participants using a Nucleus device. A fully objective method to estimate threshold levels was developed, using growth function fitting and the peak phase-locking value feature. Results demonstrated that growth function fitting is a viable method for estimating threshold levels in cochlear implant users, with a strong correlation (r=0.979, p<0.001) with behavioural thresholds. Additionally, we compared the threshold estimates using CAEPs acquired from a standard montage (Cz to mastoid) against a montage of recording channels near the cochlear implant, simulating recording from the device itself. The correlation between estimated and behavioural thresholds remained strong (r=0.966, p<0.001); however, the recording time needed to be increased to produce a similar estimate accuracy. Finally, a method for estimating comfort levels was investigated, and showed that the comfort level estimates were mildly correlated with behavioural comfort levels (r=0.50, p=0.024).
- Hearing Aid Use in Older Adults With Postlingual Sensorineural Hearing Loss: Protocol for a Prospective Cohort Study (JMIR Publications, 2018-10)
  Hughes, Matthew; Nkyekyer, Joanna; Innes-Brown, Hamish; Rossell, Susan; Sly, David; Bhar, Sunil; Pipingas, Andrew; Hennessy, Alison; Meyer, Denny
  BACKGROUND: Older adults with postlingual sensorineural hearing loss (SNHL) exhibit a poor prognosis that includes not only impaired auditory function but also rapid cognitive decline, especially in speech-related cognition, in addition to psychosocial dysfunction and an increased risk of dementia. Consistent with this prognosis, individuals with SNHL exhibit global atrophic brain alteration as well as altered neural function and regional brain organization within the cortical substrates that underlie auditory and speech processing. Recent evidence suggests that the use of hearing aids might ameliorate this prognosis.
  OBJECTIVE: The objective was to study the effects of a hearing aid use intervention on neurocognitive and psychosocial functioning in individuals with SNHL aged ≥55 years.
  METHODS: All aspects of this study will be conducted at Swinburne University of Technology (Hawthorn, Victoria, Australia). We will recruit 2 groups (n=30 per group) of individuals with mild to moderate SNHL from both the community and audiology health clinics (Alison Hennessy Audiology, Chelsea Hearing Pty Ltd). These groups will include individuals who have either worn a hearing aid for at least 12 months or never worn a hearing aid. All participants will be asked to complete tests of hearing, psychosocial, and cognitive function at two time points, baseline (t=0) and follow-up (t=6 months), and to attend a magnetic resonance imaging (MRI) session. The MRI session will include both structural and functional MRI (sMRI and fMRI) scans, the latter involving the performance of a novel speech processing task.
  RESULTS: This research is funded by the Barbara Dicker Brain Sciences Foundation Grants, the Australian Research Council, Alison Hennessy Audiology, and Chelsea Hearing Pty Ltd under the Industry Transformation Training Centre Scheme (ARC Project #IC140100023). We obtained ethics approval on November 18, 2017 (Swinburne University Human Research Ethics Committee protocol number SHR Project 2017/266). Recruitment began in December 2017 and will be completed by December 2020.
  CONCLUSIONS: This is the first study to assess the effect of hearing aid use on neural, cognitive, and psychosocial factors in individuals with SNHL who have never used hearing aids. Furthermore, this study is expected to clarify the relationships among altered brain structure and function, psychosocial factors, and cognition in response to hearing aid use.
  TRIAL REGISTRATION: Australian New Zealand Clinical Trials Registry: ACTRN12617001616369; https://anzctr.org.au/Trial/Registration/TrialReview.aspx?ACTRN=12617001616369 (Accessed by WebCite at http://www.webcitation.org/70yatZ9ze).
  INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR1-10.2196/9916.
- ‘Like Pots and Pans Falling Down the Stairs’. Experience of Music Composed for Listeners with Cochlear Implants in a Live Concert Setting (Routledge, 2014-06-12)
  Schubert, Emery; Marozeau, Jeremy; Stevens, Catherine; Innes-Brown, Hamish
  This study investigated whether music specially written for people with cochlear implants (CIs) could be used to better pinpoint how the music listening experience of a CI user differs from that of a normal-hearing (NH) listener. After the specially arranged live concert, focus groups were formed from audience volunteers (two groups each of CIs, NHs, and people with a range of hearing assistance devices). The theme of musical features (MF) was reported most frequently for both NHs and CIs. Valence analysis identified no significant difference between CIs and NHs in positive comments about MF for the specially commissioned works. Spatialization, although reported infrequently, was considered important by some CI, NH, and bimodal listeners (who use a cochlear implant and a hearing aid). Rhythm was enjoyed by both NH and CI groups, and percussion instruments were liked more than other musical instruments, more so by CIs. Bilateral and bimodal CIs expressed interest in optimizing their hearing assistance settings, but on several occasions the optimization ended with turning the contralateral hearing aid off. The study identifies the possible critical role of familiarity in music enjoyment.
- Music for the cochlear implant: Audience response to six commissioned compositions (Thieme Medical Publishers, 2012-11)
  Au, Agnes; Marozeau, Jeremy; Innes-Brown, Hamish; Schubert, Emery; Stevens, Catherine
  Although cochlear implant (CI) users enjoy good speech understanding, music perception is still difficult or unpleasant for many. This study aimed to assess cognitive, engagement, and auditory responses to new music composed specifically for CI users. From 407 concertgoers who completed a questionnaire, responses from groups of normally hearing listeners (n = 44) and CI users (n = 44), matched in age and musical ability, were compared to determine whether specially commissioned works would elicit similar responses from both groups. No significant group differences were found on measures of interest, enjoyment, and musicality, whereas ratings of understanding and instrument localization and recognition were significantly lower for CI users. Overall, ratings of the music were typically higher for percussion pieces. The concert successfully elicited similar responses from both groups in terms of interest, enjoyment, and musicality, although technical aspects, such as understanding, localization, and instrument identification, continue to be problematic for CI users.
- The relationship between multisensory integration and IQ in children (American Psychological Association, 2010-12-13)
  Barutchu, Ayla; Crewther, Sheila; Fifer, Joanne; Shivdasani, Mohit; Innes-Brown, Hamish; Toohey, Sarah; Danaher, Jaclyn; Paolini, Antonio
  It is well accepted that multisensory integration has a facilitative effect on perceptual and motor processes, evolutionarily enhancing the chance of survival of many species, including humans. Yet there is a limited understanding of the relationship between multisensory processes, environmental noise, and children's cognitive abilities. Thus, this study investigated the relationship between multisensory integration, auditory background noise, and the general intellectual abilities of school-age children (N = 88, M age = 9 years, 7 months) using a simple audiovisual detection paradigm. We provide evidence that children with enhanced multisensory integration in quiet and noisy conditions are likely to score above average on the full-scale Wechsler Intelligence Scale for Children (WISC-IV). Conversely, ~45% of tested children, with relatively low verbal and non-verbal intellectual abilities, showed reduced multisensory integration in either quiet or noise. Interestingly, ~20% of children showed improved multisensory integration abilities in the presence of auditory background noise. The findings of the present study suggest that stable and consistent multisensory integration in quiet and noisy environments is associated with the development of optimal general intellectual abilities. Further theoretical implications are discussed.