Browsing by Author "McKay, Colette"
Now showing 1 - 20 of 34
- Item: Applications of Phenomenological Loudness Models to Cochlear Implants (Frontiers, 2021-01). McKay, Colette.
  Cochlear implants electrically stimulate surviving auditory neurons in the cochlea to provide severely or profoundly deaf people with access to hearing. Signal processing strategies derive frequency-specific information from the acoustic signal and code amplitude changes in frequency bands onto amplitude changes of current pulses emitted by the tonotopically arranged intracochlear electrodes. This article first describes how parameters of the electrical stimulation influence the loudness evoked, and then summarizes two different phenomenological models developed by McKay and colleagues that have been used to explain psychophysical effects of stimulus parameters on loudness, detection, and modulation detection. The Temporal Model is applied to single-electrode stimuli and integrates cochlear neural excitation using a central temporal integration window analogous to that used in models of normal hearing. Perceptual decisions are made using decision criteria applied to the output of the integrator. By fitting the model parameters to a variety of psychophysical data, inferences can be made about how electrical stimulus parameters influence neural excitation in the cochlea. The Detailed Model is applied to multi-electrode stimuli and includes effects of electrode interaction at a cochlear level and a transform between integrated excitation and specific loudness. The Practical Method of loudness estimation is a simplification of the Detailed Model and can be used to estimate the relative loudness of any multi-electrode pulsatile stimuli without the need to model excitation at the cochlear level. Clinical applications of these models to novel sound processing strategies are described.
- Item: Assessing hearing by measuring heartbeat: The effect of sound level (PLoS One, 2019-03). Shoushtarian, Mehrnaz; Weder, Stefan; Innes-Brown, Hamish; McKay, Colette.
  Functional near-infrared spectroscopy (fNIRS) is a non-invasive brain imaging technique that measures changes in oxygenated and de-oxygenated hemoglobin concentration and can provide a measure of brain activity. In addition to neural activity, fNIRS signals contain components that can be used to extract physiological information such as cardiac measures. Previous studies have shown changes in cardiac activity in response to different sounds. This study investigated whether cardiac responses collected using fNIRS differ with the loudness of sounds. fNIRS data were collected from 28 normal-hearing participants. Cardiac responses evoked by broadband, amplitude-modulated sounds were extracted for four sound intensities ranging from near-threshold to comfortably loud levels (15, 40, 65 and 90 dB Sound Pressure Level (SPL)). Following onset of the noise stimulus, heart rate initially decreased for sounds of 15 and 40 dB SPL, reaching a significantly lower rate at 15 dB SPL. For sounds at 65 and 90 dB SPL, increases in heart rate were seen. To quantify the timing of significant changes, inter-beat intervals were assessed. For sounds at 40 dB SPL, an immediate significant change in the first two inter-beat intervals following sound onset was found. At other levels, the most significant change appeared later (beats 3 to 5 following sound onset). In conclusion, changes in heart rate were associated with sound level, with a clear difference in response to near-threshold sounds compared to comfortably loud sounds. These findings may be used alone or in conjunction with other measures, such as fNIRS brain activity, for evaluation of hearing ability.
- Item: Audio-visual integration in cochlear implant listeners and the effect of age difference (Acoustical Society of America, 2019-12). Zhou, Xin; Innes-Brown, Hamish; McKay, Colette.
  This study aimed to investigate differences in audio-visual (AV) integration between cochlear implant (CI) listeners and normal-hearing (NH) adults. A secondary aim was to investigate the effect of age differences by examining AV integration in groups of older and younger NH adults. Seventeen CI listeners, 13 similarly aged NH adults, and 16 younger NH adults were recruited. Two speech identification experiments were conducted to evaluate AV integration of speech cues. In the first experiment, reaction times in audio-alone (A-alone), visual-alone (V-alone), and AV conditions were measured during a speeded task in which participants were asked to identify a target sound /aSa/ among 11 alternatives. A race model was applied to evaluate AV integration. In the second experiment, identification accuracies were measured using a closed set of consonants and an open set of consonant-nucleus-consonant words. The authors quantified AV integration using a combination of a probability model and a cue integration model (which model participants' AV accuracy by assuming no or optimal integration, respectively). The results showed that experienced CI listeners had no better AV integration than similarly aged NH adults. Further, there was no significant difference in AV integration between the younger and older NH adults.
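The race-model analysis mentioned above tests whether bimodal reaction times are faster than any probability summation of the two unimodal distributions could produce. The following is an illustrative sketch of such a test (a standard Miller race-model inequality check, not the authors' analysis code; the function name and quantile grid are assumptions):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av,
                         quantiles=np.linspace(0.05, 0.95, 10)):
    """Test the race-model inequality F_AV(t) <= F_A(t) + F_V(t).

    rt_a, rt_v, rt_av: arrays of reaction times (s) from audio-alone,
    visual-alone, and audio-visual conditions. Returns the maximum
    violation; positive values indicate integration beyond a race.
    """
    ts = np.quantile(rt_av, quantiles)

    def ecdf(samples, t):
        # Empirical CDF of the samples evaluated at each test time point.
        return np.mean(samples[:, None] <= t[None, :], axis=0)

    bound = np.clip(ecdf(rt_a, ts) + ecdf(rt_v, ts), 0.0, 1.0)
    return np.max(ecdf(rt_av, ts) - bound)
```

An AV distribution produced by simply taking the faster of two independent unimodal responses never violates the bound; genuinely integrated (coactivated) responses can.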
- Item: Auditory Brainstem Representation of the Voice Pitch Contours in the Resolved and Unresolved Components of Mandarin Tones (Frontiers in Neuroscience, 2018-12). Peng, Fei; McKay, Colette; Mao, Darren; Hou, Wensheng; Innes-Brown, Hamish.
  Accurate perception of voice pitch plays a vital role in speech understanding, especially for tonal languages such as Mandarin. Lexical tones are primarily distinguished by the fundamental frequency (F0) contour of the acoustic waveform. It has been shown that the auditory system can extract F0 from both resolved and unresolved harmonics, and that tone identification is better for resolved than for unresolved harmonics. To evaluate the neural response to the resolved and unresolved components of Mandarin tones in quiet and in speech-shaped noise, we recorded frequency-following responses (FFRs). Four types of stimuli were used: speech containing either only resolved harmonics or only unresolved harmonics, presented both in quiet and in speech-shaped noise. FFRs were recorded to alternating-polarity stimuli and were added or subtracted to enhance the neural response to the envelope (FFR_ENV) or temporal fine structure (FFR_TFS), respectively. The neural representation of F0 strength reflected by the FFR_ENV was evaluated by the peak autocorrelation value in the temporal domain and the peak phase-locking value (PLV) at F0 in the spectral domain. Both evaluation methods showed that the FFR_ENV F0 strength in quiet was significantly stronger than in noise for speech including unresolved harmonics, but not for speech including resolved harmonics. The neural representation of the temporal fine structure reflected by the FFR_TFS was assessed by the PLV at the harmonic nearest F1 (the 4th harmonic of F0); this PLV was significantly larger for resolved than for unresolved harmonics. Spearman's correlation showed that the FFR_ENV F0 strength for unresolved harmonics was correlated with tone identification performance in noise (0 dB SNR). These results show that the FFR_ENV F0 strength for speech sounds with resolved harmonics was not affected by noise, whereas the response to speech sounds with unresolved harmonics was significantly smaller in noise than in quiet. Our results suggest that coding of resolved harmonics is more important than envelope coding for tone identification in noise.
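The peak phase-locking value used above is a standard across-trials consistency measure: at each frequency, the phase of every trial's spectrum is reduced to a unit vector, the vectors are averaged over trials, and the length of the mean vector is the PLV. A minimal sketch (illustrative only; the function name and the ±5 Hz search band around the target frequency are assumptions):

```python
import numpy as np

def peak_plv(trials, fs, target_hz, search_hz=5.0):
    """Peak phase-locking value across trials near target_hz.

    trials: (n_trials, n_samples) array of FFR epochs.
    fs: sampling rate in Hz.
    Returns the maximum PLV within +/- search_hz of target_hz.
    """
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(trials, axis=1)
    # Unit phase vectors per trial and frequency; PLV is the length of
    # their across-trial mean (1 = perfectly phase-locked, ~0 = random).
    unit = spectra / (np.abs(spectra) + 1e-12)
    plv = np.abs(np.mean(unit, axis=0))
    band = (freqs >= target_hz - search_hz) & (freqs <= target_hz + search_hz)
    return plv[band].max()
```

Phase-locked trials yield a PLV near 1 at the target frequency, while trials with random phase average toward zero.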
- Item: Clinical validation of a precision electromagnetic tremor measurement system in participants receiving deep brain stimulation for essential tremor (IOP Publishing, 2016-08). Perera, Thushara; Yohanandan, Shivanthan; Thevathasan, Wesley; Jones, Mary; Peppard, Richard; Evans, Andrew; Tan, Joy; McKay, Colette; McDermott, Hugh.
  Tremor is commonly characterized through subjective clinical rating scales. Accelerometer-based techniques for objective tremor measurement have been developed in the past, yet these measures are usually presented as an unintuitive dimensionless index without measurement units. Here we have developed a tool (TREMBAL) to provide quantifiable and objective measures of tremor severity using electromagnetic motion tracking. We aimed to compare TREMBAL's objective measures with clinical tremor ratings and to determine the test-retest reliability of our technique. Eight participants with essential tremor (ET) receiving deep brain stimulation (DBS) therapy gave consent. Tremor was recorded simultaneously using TREMBAL and video during DBS adjustment. After each adjustment, participants performed a hands-outstretched task (for postural tremor) and a finger-nose task (for kinetic tremor). Video recordings were de-identified, randomized, and shown to a panel of movement disorder specialists to obtain their ratings. Regression analysis and Pearson's correlations were used to determine agreement between datasets. Subsets of the trial were repeated to assess test-retest reliability. Tremor amplitude and velocity measures were in close agreement with mean clinical ratings (r > 0.90) for both postural and kinetic tremors. Test-retest reliability for both translational and rotational components of tremor showed intra-class correlations >0.80. TREMBAL assessments showed that tremor gradually improved with increasing DBS therapy; this was also supported by clinical observation. TREMBAL measurement is a sensitive, objective and reliable assessment of tremor severity. This tool may have application in clinical trials and in aiding automated optimization of deep brain stimulation.
- Item: Comment on: Short pulse width widens the therapeutic window of subthalamic neurostimulation (John Wiley and Sons, 2015-09-11). McDermott, Hugh; McKay, Colette.
- Item: Comparing fNIRS signal qualities between approaches with and without short channels (Plos One, 2020-12). Zhou, Xin; Sobczak, Gabriel; McKay, Colette; Litovsky, Ruth.
  Functional near-infrared spectroscopy (fNIRS) is a non-invasive technique used to measure changes in oxygenated (HbO) and deoxygenated (HbR) hemoglobin related to neuronal activity. Because fNIRS uses a back-reflection measurement, its signals are contaminated by systemic responses in the extracerebral tissue (superficial layer) of the head. Using short channels that are sensitive only to responses in the extracerebral tissue, but not in the deeper layers where the target neuronal activity occurs, has been the 'gold standard' for reducing systemic responses in fNIRS data from adults. When short channels are not available or feasible to implement, an alternative, the anti-correlation (Anti-Corr) method, has been adopted. To date, no study has directly assessed the outcomes of the two approaches. In this study, we compared the Anti-Corr method with the 'gold standard' in reducing systemic responses to improve fNIRS neural signal quality. We used eight short channels (8 mm) in a group of adults and conducted a principal component analysis (PCA) to extract the two components that contributed most to the responses in the 8 short channels, which were assumed to contain the global components in the extracerebral tissue. We then used a general linear model (GLM), with and without event-related regressors, to regress the 2 principal components out of the regular fNIRS channels (30 mm), i.e., two GLM-PCA methods. Our results showed that the two GLM-PCA methods performed similarly, that both GLM-PCA methods and the Anti-Corr method improved fNIRS signal quality, and that the two GLM-PCA methods performed better than the Anti-Corr method.
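The short-channel correction described above can be sketched as PCA over the short-channel signals followed by per-channel least-squares regression. This is a simplified illustration of that idea (without the event-related regressors the study also evaluated; the function name is an assumption, and the two-component choice follows the abstract):

```python
import numpy as np

def regress_out_short_channel_pcs(long_ch, short_ch, n_components=2):
    """Remove systemic components estimated from short fNIRS channels.

    long_ch:  (n_samples, n_long) signals from regular (30 mm) channels.
    short_ch: (n_samples, n_short) signals from short (8 mm) channels.
    Returns long-channel residuals after regressing out the first
    n_components principal components of the short channels.
    """
    s = short_ch - short_ch.mean(axis=0)
    # PCA via SVD: columns of u scaled by singular values are PC scores.
    u, sv, _ = np.linalg.svd(s, full_matrices=False)
    pcs = u[:, :n_components] * sv[:n_components]
    # Design matrix: PCs plus an intercept; ordinary least squares
    # fitted independently to every long channel.
    X = np.column_stack([pcs, np.ones(len(pcs))])
    beta, *_ = np.linalg.lstsq(X, long_ch, rcond=None)
    return long_ch - X @ beta
```

Because the residual is orthogonal to the fitted regressors, any global signal captured by the short-channel PCs is largely removed from the long channels, while activity uncorrelated with them is preserved.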
- Item: Connectivity in Language Areas of the Brain in Cochlear Implant Users as Revealed by fNIRS (Springer International Publishing, 2015-04). McKay, Colette; Shah, Adnan; Seghouane, Abd-Krim; Zhou, Xin; Cross, William; Litovsky, Ruth.
  Many studies, using a variety of imaging techniques, have shown that deafness induces functional plasticity in the brain of adults with late-onset deafness, and in children changes the way the auditory brain develops. Cross-modal plasticity refers to evidence that stimuli of one modality (e.g. vision) activate neural regions devoted to a different modality (e.g. hearing) that are not normally activated by those stimuli. Other studies have shown that multimodal brain networks (such as those involved in language comprehension, and the default mode network) are altered by deafness, as evidenced by changes in patterns of activation or connectivity within the networks. In this paper, we summarise what is already known about brain plasticity due to deafness and propose that functional near-infrared spectroscopy (fNIRS) is an imaging method with the potential to provide prognostic and diagnostic information for cochlear implant users. Currently, patient history factors account for only 10% of the variation in post-implantation speech understanding, and very few post-implantation behavioural measures of hearing ability correlate with speech understanding. As a non-invasive, inexpensive and user-friendly imaging method, fNIRS provides an opportunity to study both pre- and post-implantation brain function. Here, we explain the principle of fNIRS measurements and illustrate its use in studying brain network connectivity and function with example data.
- Item: Contralateral dominance to speech in the adult auditory cortex immediately after cochlear implantation (iScience, 2022-07-08). Shader, Maureen; Luke, Robert; McKay, Colette.
  Sensory deprivation causes structural and functional changes in the human brain. Cochlear implantation delivers immediate reintroduction of auditory sensory information. Previous reports have indicated that over a year is required for the brain to reestablish canonical cortical processing patterns after the reintroduction of auditory stimulation. We utilized functional near-infrared spectroscopy (fNIRS) to investigate brain activity to natural speech stimuli directly after cochlear implantation. We presented 12 cochlear implant recipients, who each had a minimum of 12 months of auditory deprivation, with unilateral auditory- and visual-speech stimuli. Regardless of the side of implantation, canonical responses were elicited primarily on the contralateral side of stimulation as early as 1 h after device activation. These data indicate that auditory pathway connections are sustained during periods of sensory deprivation in adults, and that typical cortical lateralization is observed immediately following the reintroduction of auditory sensory input.
- Item: Cortical auditory evoked potential time-frequency growth functions for fully objective hearing threshold estimation (Elsevier, Inc., 2018-12). Mao, Darren; Innes-Brown, Hamish; Petoe, Matthew; Wong, Yan; McKay, Colette.
  Cortical auditory evoked potential (CAEP) thresholds have been shown to correlate well with behaviourally determined hearing thresholds. Growth functions of CAEPs show promise as an alternative to single-level detection for objective hearing threshold estimation; however, the accuracy and clinical relevance of this method have not been well examined. In this study, we used temporal and spectral CAEP features to generate feature growth functions. Spectral features may be more robust than traditional peak-picking methods where CAEP morphology is variable, such as in children or hearing device users. Behavioural hearing thresholds were obtained and CAEPs were recorded in response to a 1 kHz pure tone from twenty adults with no hearing loss. Four features, peak-to-peak amplitude, root-mean-square, peak spectral power and peak phase-locking value (PLV), were extracted from the CAEPs. Functions relating each feature to stimulus level were used to calculate objective hearing threshold estimates. We assessed the performance of each feature by calculating the difference between the objective estimate and the behaviourally determined threshold. Comparing the accuracy of the estimates across features, the peak PLV feature performed best, with a mean threshold error of 2.7 dB and a standard deviation of 5.9 dB across subjects relative to behavioural threshold. We also examined the relation between recording time, data quality and threshold estimate errors, and found that, on average, 12.7 minutes of recording was needed for 95% confidence that a single threshold estimate was within 20 dB of the behavioural threshold using the peak-to-peak amplitude feature, while 14 minutes was needed for the peak PLV feature. These results show that the PLV of CAEPs can be used to find a clinically relevant hearing threshold estimate. Its potential stability under differing morphology may be an advantage in testing infants or cochlear implant users.
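The growth-function idea above replaces single-level detection with a fit of feature amplitude against stimulus level, from which a threshold is extrapolated. A deliberately simplified linear sketch of that step (the study's actual fitting procedure and criterion may differ; the noise-floor crossing used here is an assumption):

```python
import numpy as np

def threshold_from_growth(levels_db, feature, noise_floor):
    """Extrapolate an objective threshold from a feature growth function.

    Fit feature value vs. stimulus level (dB) with a straight line and
    solve for the level at which the fit crosses the measurement noise
    floor, i.e. the level below which the feature is indistinguishable
    from noise.
    """
    slope, intercept = np.polyfit(levels_db, feature, 1)
    return (noise_floor - intercept) / slope
```

For example, a feature growing linearly from suprathreshold levels can be extrapolated downward to estimate a threshold below the lowest tested level.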
- Item: Cortical auditory evoked potentials as an objective measure of behavioral thresholds in cochlear implant users (Elsevier B.V, 2015-09). Visram, Anisa; Innes-Brown, Hamish; El-deredy, Wael; McKay, Colette.
  The aim of this study was to assess the suitability of cortical auditory evoked potentials (CAEPs) as an objective tool for predicting behavioral hearing thresholds in cochlear implant (CI) users. Nine experienced adult CI users of Cochlear™ devices participated. Behavioral thresholds were measured across apical, mid and basal electrodes. CAEPs were measured for the same stimuli (50-ms pulse trains at a 900-pps rate) at a range of input levels across each individual's psychophysical dynamic range (DR). Amplitude growth functions using global field power (GFP) were plotted, and from these the CAEP thresholds were extrapolated and compared to the behavioral thresholds. Increased amplitude and decreased latency of the N1–P2 response were seen with increasing input level. A strong correlation was found between CAEP and behavioral thresholds (r = 0.93), implying that the cortical response may be more useful as an objective programming tool for cochlear implants than the auditory nerve response.
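Global field power, used above to build the amplitude growth functions, is simply the spatial standard deviation of the evoked potential across electrodes at each time point. A minimal sketch (illustrative; the function name is an assumption):

```python
import numpy as np

def global_field_power(eeg):
    """Spatial standard deviation across channels at each time point.

    eeg: (n_times, n_channels) array of evoked potentials.
    Returns an (n_times,) array; larger values indicate a stronger,
    more spatially differentiated evoked response.
    """
    centered = eeg - eeg.mean(axis=1, keepdims=True)
    return np.sqrt(np.mean(centered ** 2, axis=1))
```

A flat topography (all channels equal) gives zero GFP; strongly opposed potentials across the scalp give large GFP.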
- Item: Cortical fNIRS Responses Can Be Better Explained by Loudness Percept than Sound Intensity (Wolters Kluwer Health, Inc, 2020-01). Weder, Stefan; Shoushtarian, Mehrnaz; Olivares, Virginia; Zhou, Xin; Innes-Brown, Hamish; McKay, Colette.
  OBJECTIVES: Functional near-infrared spectroscopy (fNIRS) is a brain imaging technique particularly suitable for hearing studies. However, the nature of fNIRS responses to auditory stimuli presented at different stimulus intensities is not well understood. In this study, we investigated whether fNIRS response amplitude was better predicted by stimulus properties (intensity) or by individually perceived attributes (loudness). DESIGN: Twenty-two young adults were included in this experimental study. Four different stimulus intensities of a broadband noise were used as stimuli. First, loudness estimates for each stimulus intensity were measured for each participant. Then, the 4 stimulation intensities were presented in counterbalanced order while recording hemoglobin saturation changes from cortical auditory brain areas. The fNIRS response was analyzed in a general linear model design, using 3 different regressors: a non-modulated, an intensity-modulated, and a loudness-modulated regressor. RESULTS: Higher-intensity stimuli resulted in higher-amplitude fNIRS responses. The relationship between stimulus intensity and fNIRS response amplitude was better explained by a regressor based on individually measured loudness estimates than by a regressor modulated by stimulus intensity alone. CONCLUSIONS: Brain activation in response to different stimulus intensities depends more on individual loudness sensation than on physical stimulus properties. Therefore, in measurements using different auditory stimulus intensities or subjective hearing parameters, loudness estimates should be examined when interpreting results.
- Item: Cortical Processing Related to Intensity of a Modulated Noise Stimulus—a Functional Near-Infrared Study (SpringerLink, 2018-04). Weder, Stefan; Zhou, Xin; Shoushtarian, Mehrnaz; Innes-Brown, Hamish; McKay, Colette.
  Sound intensity is a key feature of auditory signals. A profound understanding of cortical processing of this feature is therefore highly desirable. This study investigates whether cortical functional near-infrared spectroscopy (fNIRS) signals reflect sound intensity changes, and where on the brain cortex maximal intensity-dependent activations are located. The fNIRS technique is particularly suitable for this kind of hearing study, as it runs silently. Twenty-three normal-hearing subjects were included and actively participated in a counterbalanced block-design task. Four intensity levels of a modulated noise stimulus with long-term spectrum and modulation characteristics similar to speech were applied, evenly spaced from 15 to 90 dB SPL. Signals from auditory processing cortical fields were derived from a montage of 16 optodes on each side of the head. Results showed that fNIRS responses originating from auditory processing areas are highly dependent on sound intensity level: higher stimulation levels led to higher concentration changes. Caudal and rostral channels showed different waveform morphologies, reflecting specific cortical signal processing of the stimulus. Channels overlying the supramarginal and caudal superior temporal gyrus evoked a phasic response, whereas channels over Broca's area showed a broad tonic pattern. This data set can serve as a foundation for future auditory fNIRS research to develop the technique as a hearing assessment tool in the normal-hearing and hearing-impaired populations.
- Item: Cortical Speech Processing in Postlingually Deaf Adult Cochlear Implant Users, as Revealed by Functional Near-Infrared Spectroscopy (SAGE, 2018-07). Zhou, Xin; Seghouane, Abd-Krim; Shah, Adnan; Innes-Brown, Hamish; Cross, Will; Litovsky, Ruth; McKay, Colette.
  An experiment was conducted to investigate the feasibility of using functional near-infrared spectroscopy (fNIRS) to image cortical activity in the language areas of cochlear implant (CI) users and to explore the association between the activity and their speech understanding ability. Using fNIRS, 15 experienced CI users and 14 normal-hearing participants were imaged while presented with either visual speech or auditory speech. Brain activation was measured from the prefrontal, temporal, and parietal lobe in both hemispheres, including the language-associated regions. In response to visual speech, the activation levels of CI users in an a priori region of interest (ROI)—the left superior temporal gyrus or sulcus—were negatively correlated with auditory speech understanding. This result suggests that increased cross-modal activity in the auditory cortex is predictive of poor auditory speech understanding. In another two ROIs, in which CI users showed significantly different mean activation levels in response to auditory speech compared with normal-hearing listeners, activation levels were significantly negatively correlated with CI users' auditory speech understanding. These ROIs were located in the right anterior temporal lobe (including a portion of the prefrontal lobe) and the left middle superior temporal lobe. In conclusion, fNIRS successfully revealed activation patterns in CI users associated with their auditory speech understanding.
- Item: Effect of input compression and input frequency response on music perception in cochlear implant users (Taylor & Francis, 2015-06-03). Halliwell, Emily; Jones, Linor; Fraser, Matthew; Lockley, Morag; Hill-Feltham, Penelope; McKay, Colette.
  Objective: A study was conducted to determine whether modifications to input compression and input frequency response characteristics can improve music-listening satisfaction in cochlear implant users. Design: Experiment 1 compared three pre-processed versions of music and speech stimuli in a laboratory setting: original, compressed, and flattened frequency response. Music excerpts comprised three genres (classical, country, and jazz), and a running speech excerpt was also compared. Experiment 2 implemented a flattened input frequency response in the speech processor program. In a take-home trial, participants compared unaltered and flattened frequency responses. Study sample: Ten and twelve adult Nucleus Freedom cochlear implant users participated in Experiments 1 and 2, respectively. Results: Experiment 1 revealed a significant preference for music stimuli with a flattened frequency response compared to both original and compressed stimuli, whereas there was a significant preference for the original (rising) frequency response for speech stimuli. Experiment 2 revealed no significant mean preference for the flattened frequency response, with 9 of 11 subjects preferring the rising frequency response. Conclusions: Input compression did not alter music enjoyment. Comparison of the two experiments indicated that individual frequency response preferences may depend on genre or familiarity, and particularly on whether the music contained lyrics.
- Item: The effect of presentation level and stimulation rate on speech perception and modulation detection for cochlear implant users (Acoustical Society of America, 2017-06). Brochier, Tim; McDermott, Hugh; McKay, Colette.
  In order to improve speech understanding for cochlear implant users, it is important to maximize the transmission of temporal information. The combined effects of stimulation rate and presentation level on temporal information transfer and speech understanding remain unclear. The present study systematically varied presentation level (60, 50, and 40 dBA) and stimulation rate [500 and 2400 pulses per second per electrode (pps)] in order to observe how the effect of rate on speech understanding changes at different presentation levels. Speech recognition in quiet and in noise, and acoustic amplitude modulation detection thresholds (AMDTs), were measured with acoustic stimuli presented to speech processors via direct audio input (DAI). With the 500-pps processor, results showed significantly better performance for consonant-vowel nucleus-consonant words in quiet, and a reduced effect of noise on sentence recognition. However, no rate or level effect was found for AMDTs, perhaps partly because of amplitude compression in the sound processor. AMDTs were strongly correlated with the effect of noise on sentence perception at low levels. These results indicate that AMDTs, at least when measured with the CP910 Freedom speech processor via DAI, explain between-subject variance in speech understanding, but do not explain within-subject variance across rates and levels.
- Item: Effect of Pulse Rate and Polarity on the Sensitivity of Auditory Brainstem and Cochlear Implant Users to Electrical Stimulation (Springer, 2015-07-03). Carlyon, Robert; Deeks, John; McKay, Colette.
  To further understand the response of the human brainstem to electrical stimulation, a series of experiments compared the effect of pulse rate and polarity on detection thresholds between auditory brainstem implant (ABI) and cochlear implant (CI) patients. Experiment 1 showed that for 400-ms pulse trains, ABI users' thresholds dropped by about 2 dB as pulse rate was increased from 71 to 500 pps, but only by an average of 0.6 dB as rate was increased further to 3500 pps. This latter decrease was much smaller than the 7.7-dB drop observed for CI users. A similar result was obtained for pulse trains with a 40-ms duration. Furthermore, experiment 2 showed that the threshold difference between 500- and 3500-pps pulse trains remained much smaller for ABI than for CI users, even for durations as short as 2 ms, indicating the effect of a fast-acting mechanism. Experiment 3 showed that ABI users' thresholds were lower for alternating-polarity than for fixed-polarity pulse trains, and that this difference was greater at 3500 pps than at 500 pps, consistent with the effect of pulse rate on ABI users' thresholds being influenced by charge interactions between successive biphasic pulses. Experiment 4 compared thresholds and loudness between trains of asymmetric pulses of opposite polarity, in monopolar mode, and showed that in both cases less current was needed when the anodic, rather than the cathodic, current was concentrated into a short time interval. This finding is similar to that previously observed for CI users and is consistent with ABI users being more sensitive to anodic than cathodic current. We argue that our results constrain potential explanations for the differences in the perception of electrical stimulation by CI and ABI users, and have potential implications for future ABI stimulation strategies.
- Item: Effect of Pulse Rate on Loudness Discrimination in Cochlear Implant Users (Association for Research in Otolaryngology, 2018-03). Azadpour, Mahan; McKay, Colette; Svirsky, Mario.
  Stimulation pulse rate affects current amplitude discrimination by cochlear implant (CI) users, as indicated by the evidence that the just noticeable difference (JND) in current amplitude delivered by a CI electrode becomes larger at higher pulse rates. However, it is not clearly understood whether pulse rate affects discrimination of speech intensities presented acoustically to CI processors, or what the size of this effect might be. Intensity discrimination depends on two factors: the growth of loudness with increasing sound intensity, and the loudness JND (the just noticeable loudness increment). This study evaluated the hypothesis that stimulation pulse rate affects the loudness JND. This was done by measuring current amplitude JNDs in an experiment design based on signal detection theory, according to which loudness discrimination is limited by internal noise (manifested as variability in the loudness percept across repetitions of the same physical stimulus). Current amplitude JNDs were measured for equally loud pulse trains of 500 and 3000 pps (pulses per second) by increasing the current amplitude of the target pulse train until it was perceived as just louder than a same-rate or different-rate reference pulse train. The JND measures were obtained at two presentation levels. At the louder level, the current amplitude JNDs were affected by the rate of the reference pulse train in a way that was consistent with greater noise, or variability, in loudness perception at the higher pulse rate. The results suggest that increasing pulse rate from 500 to 3000 pps can increase the loudness JND by 60% at the upper portion of the dynamic range. This is equivalent to a 38% reduction in the number of discriminable steps for acoustic and speech intensities.
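The closing figure follows from assuming that the number of discriminable steps scales inversely with the loudness JND: a 60% larger JND leaves 1/1.6 ≈ 62.5% of the steps, i.e. a 37.5% (~38%) reduction. A minimal check of that arithmetic (the fixed dynamic range used here is purely illustrative):

```python
def discriminable_steps(dynamic_range_db, jnd_db):
    """Approximate count of just-discriminable intensity steps,
    assuming a constant JND across the dynamic range."""
    return dynamic_range_db / jnd_db

# A 60% JND increase shrinks the step count by 1 - 1/1.6 = 37.5%,
# independent of the (illustrative) 40 dB range chosen here.
base = discriminable_steps(40.0, 1.0)
worse = discriminable_steps(40.0, 1.6)
reduction = 1 - worse / base
```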
- Item: Electrically evoked compound action potentials artefact rejection by independent component analysis: Procedure automation (Elsevier B.V, 2015-01-15). Akhoun, Idrick; McKay, Colette; El-deredy, Wael.
  BACKGROUND: Independent component analysis (ICA) successfully separated electrically evoked compound action potentials (ECAPs) from the stimulation artefact and noise (ECAP-ICA; Akhoun et al., 2013). NEW METHOD: This paper shows how to automate the ECAP-ICA artefact cancellation process. Raw ECAPs without artefact rejection were consecutively recorded for each stimulation condition from at least 8 intra-cochlear electrodes. First, amplifier-saturated recordings were discarded, and the data from different stimulus conditions (different current levels) were concatenated temporally. The key aspect of the automation procedure was the sequential deductive source categorisation after ICA was applied with a restriction to 4 sources. The stereotypical aspect of the 4 sources enabled their automatic classification, on theoretical and empirical grounds, as two artefact components, a noise component, and the sought ECAP. RESULTS: The automatic procedure was tested using 8 cochlear implant (CI) users and one to four stimulus electrodes. The artefact and noise sources were successively identified and discarded, leaving the ECAP as the remaining source. The automated ECAP-ICA procedure extracted the correct ECAPs, as compared with the standard clinical forward-masking paradigm, in 22 out of 26 cases. COMPARISON WITH EXISTING METHOD(S): ECAP-ICA does not require extracting the ECAP from a combination of distinct buffers, as is the case with standard methods. It is an alternative that avoids the possible biases of traditional artefact rejection approaches such as alternate-polarity or forward-masking paradigms. CONCLUSIONS: The ECAP-ICA procedure has clinical relevance, for example as the artefact-rejection sub-module of automated ECAP-threshold detection techniques, which are common features of CI clinical fitting software.
- Item: Electrode Selection and Speech Understanding in Patients With Auditory Brainstem Implants (Wolters Kluwer Health, Inc, 2015-07). McKay, Colette; Azadpour, Mahan; Jayewardene-Aston, Deanne; O'Driscoll, Martin; El-deredy, Wael.
  Objectives: The objective of this study was to evaluate whether speech understanding in auditory brainstem implant (ABI) users with a tumor pathology could be improved by selecting a subset of electrodes that were appropriately pitch ranked and mutually distinguishable. It was hypothesized that disordered pitch or spectral percepts and channel interactions may contribute significantly to the poor outcomes in most ABI users. Design: A single-subject design was used with five participants. Pitch-ranking information for all electrodes in the patients' clinic maps was obtained using a pitch-ranking task and previous pitch-ranking information from clinic sessions. A multidimensional scaling task was used to evaluate the stimulus space evoked by stimuli on the same set of electrodes. From this information, a subset of four to six electrodes was chosen and a new map was created using just this subset, which the subjects took home for 1 month's experience. Closed-set consonant and vowel perception and sentences in quiet were tested at three sessions: with the clinic map before the test map was given, after 1 month with the test map, and after an additional 2 weeks with the clinic map. Results: The results of the pitch-ranking and multidimensional scaling procedures confirmed that the ABI users did not have a well-ordered set of percepts related to electrode position, supporting the proposal that difficulty in processing spectral information may contribute to poor speech understanding. However, none of the subjects benefited from a map that reduced the stimulation electrode set to a smaller number of electrodes that were well ordered in place pitch. Conclusions: Although poor spectral processing may contribute to poor understanding in ABI users, it is not likely to be the sole contributor to poor outcomes.