Browsing by Author "McCarthy, Chris"
Now showing 1 - 5 of 5
- Determining the Contribution of Retinotopic Discrimination to Localization Performance With a Suprachoroidal Retinal Prosthesis (IOVS, 2017-06)
  Petoe, Matthew; McCarthy, Chris; Shivdasani, Mohit; Sinclair, Nicholas; Scott, Adele; Ayton, Lauren; Barnes, Nick; Bionic Vision Australia Consortium

  Purpose: With a retinal prosthesis connected to a head-mounted camera, subjects can perform low vision tasks using a combination of electrode discrimination and head-directed localization. The objective of the present study was to investigate the contribution of retinotopic electrode discrimination (perception corresponding to the arrangement of the implanted electrodes with respect to their position beneath the retina) to visual performance for three recipients of a 24-channel suprachoroidal retinal implant. Proficiency in retinotopic discrimination may allow good performance with smaller head movements, and identification of this ability would be useful for targeted rehabilitation.

  Methods: Three participants with retinitis pigmentosa performed localization and grating acuity assessments using a suprachoroidal retinal prosthesis. We compared retinotopic and nonretinotopic electrode mapping and hypothesized that participants with measurable acuity in a normal retinotopic condition would be negatively impacted by the nonretinotopic condition. We also expected that participants without measurable acuity would preferentially use head movement over retinotopic information.

  Results: Only one participant was able to complete the grating acuity task. In the localization task, this participant exhibited significantly greater head movements and significantly lower localization scores when using the nonretinotopic electrode mapping. There was no significant difference in localization performance or head movement for the remaining two subjects when comparing retinotopic to nonretinotopic electrode mapping.

  Conclusions: Successful discrimination of retinotopic information is possible with a suprachoroidal retinal prosthesis. Head movement behavior during a localization task can be modified using a nonretinotopic mapping. Behavioral comparisons using retinotopic and nonretinotopic electrode mapping may be able to highlight deficiencies in retinotopic discrimination, with a view to addressing these deficiencies in a rehabilitation environment. (ClinicalTrials.gov number, NCT01603576)
- Harmonization of Outcomes and Vision Endpoints in Vision Restoration Trials: Recommendations from the International HOVER Taskforce (ARVO, 2020-07)
  Ayton, Lauren; Rizzo, Joseph; Bailey, Ian; Colenbrander, August; Dagnelie, Gislin; Geruschat, Duane; Hessburg, Philip; McCarthy, Chris; Petoe, Matthew; Rubin, Gary; Troyk, Philip; HOVER International Task Force

  Translational research in vision prosthetics, gene therapy, optogenetics, stem cell and other forms of transplantation, and sensory substitution is creating new therapeutic options for patients with neural forms of blindness. The technical challenges faced by each of these disciplines differ considerably, but they all face the same challenge of how to assess vision in patients with ultra-low vision (ULV), who will be the earliest subjects to receive new therapies. Historically, there were few tests to assess vision in ULV patients. In the 1990s, the field of visual prosthetics expanded rapidly, and this activity led to a heightened need to develop better tests to quantify end points for clinical studies. Each group tended to develop novel tests, which made it difficult to compare outcomes across groups. The common lack of validation of the tests and the variable use of controls added to the challenge of interpreting the outcomes of these clinical studies. In 2014, at the biennial International “Eye and the Chip” meeting of experts in the field of visual prosthetics, a group of interested leaders agreed to work cooperatively to develop the International Harmonization of Outcomes and Vision Endpoints in Vision Restoration Trials (HOVER) Taskforce. Under this banner, more than 80 specialists across seven topic areas joined an effort to formulate guidelines for performing and reporting psychophysical tests in humans who participate in clinical trials for visual restoration.

  This document provides the complete version of the consensus opinions from the HOVER Taskforce, which, together with its rules of governance, will be posted on the website of the Henry Ford Department of Ophthalmology (www.artificialvision.org). Research groups or companies that choose to follow these guidelines are encouraged to include a specific statement to that effect in their communications to the public. The Executive Committee of the HOVER Taskforce will maintain a list of all human psychophysical research in the relevant fields of research on the same website to provide an overview of methods and outcomes of all clinical work being performed in an attempt to restore vision to the blind. This website will also specify which scientific publications contain the statement of certification. The website will be updated every 2 years and continue to exist as a living document of worldwide efforts to restore vision to the blind. The HOVER consensus document has been written by over 80 of the world's experts in vision restoration and low vision and provides recommendations on the measurement and reporting of patient outcomes in vision restoration trials.
- Improved visual performance in letter perception through edge orientation encoding in a retinal prosthesis simulation (IOP Publishing, 2014-10)
  Kiral-Kornek, Isabell; O'Sullivan-Green, Elma; Savage, Craig; McCarthy, Chris; Grayden, David; Burkitt, Anthony

  Objective. Stimulation strategies for retinal prostheses predominantly seek to directly encode image brightness values rather than edge orientations. Recent work suggests that the generation of oriented elliptical phosphenes may be possible by controlling interactions between neighboring electrodes. Based on this, we propose a novel stimulation strategy for prosthetic vision that extracts edge orientation information from the intensity image and encodes it as oriented elliptical phosphenes. We test the hypothesis that encoding edge orientation via oriented elliptical phosphenes leads to better alphabetic letter recognition than standard intensity-based encoding.

  Approach. We conducted a psychophysical study of simulated phosphene vision with 12 normally sighted volunteers. The two stimulation strategies were compared across variations of letter size, electrode drop-out, and spatial offsets of phosphenes.

  Main results. Mean letter recognition accuracy was significantly better with the proposed stimulation strategy (65%) than with direct grayscale encoding (47%). All examined parameters (stimulus size, phosphene dropout, and location shift) were found to influence performance, with significant two-way interactions between phosphene dropout and stimulus size, and between phosphene dropout and phosphene location shift. The analysis yields a model of perception performance.

  Significance. Displaying available directional information to an implant user may improve their visual performance. We present a model for designing a stimulation strategy under the constraints of existing retinal prostheses that retinal implant developers can exploit to strategically employ oriented phosphenes.
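The first step of the strategy above, extracting an edge orientation per phosphene from the intensity image, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it estimates a magnitude-weighted dominant edge angle in each cell of a phosphene grid from image gradients, which a stimulation strategy could then render as the major-axis angle of an elliptical phosphene. The grid size and the weighting scheme are assumptions.

```python
import numpy as np

def edge_orientation_map(image, grid=(8, 8)):
    """Estimate a dominant edge orientation (radians in [0, pi)) per
    phosphene cell of a `grid`-shaped electrode layout. Illustrative
    sketch only; cells with no edge energy are left at 0."""
    # Image gradients: np.gradient returns (d/drow, d/dcol).
    gy, gx = np.gradient(image.astype(float))
    # Edge orientation is perpendicular to the gradient direction.
    theta = (np.arctan2(gy, gx) + np.pi / 2) % np.pi
    magnitude = np.hypot(gx, gy)

    rows, cols = grid
    h, w = image.shape
    orientations = np.zeros(grid)
    for r in range(rows):
        for c in range(cols):
            rs, cs = slice(r*h//rows, (r+1)*h//rows), slice(c*w//cols, (c+1)*w//cols)
            cell_t, cell_m = theta[rs, cs], magnitude[rs, cs]
            if cell_m.sum() > 0:
                # Magnitude-weighted mean angle; the doubled-angle trick
                # handles the pi-periodicity of orientations.
                s = (cell_m * np.sin(2 * cell_t)).sum()
                co = (cell_m * np.cos(2 * cell_t)).sum()
                orientations[r, c] = (np.arctan2(s, co) / 2) % np.pi
    return orientations
```

For a sharp vertical edge, the cells straddling the edge come out near pi/2 (a vertically oriented ellipse), while flat cells stay at 0.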
- Sensory augmentation to aid training with retinal prostheses (IOP Publishing, 2020-07)
  Kvansakul, Jessica; Hamilton, Lachlan; Ayton, Lauren; McCarthy, Chris; Petoe, Matthew

  OBJECTIVE: Retinal prosthesis recipients require rehabilitative training to learn the non-intuitive nature of prosthetic 'phosphene vision'. This study investigated whether the addition of auditory cues, using The vOICe sensory substitution device (SSD), could improve functional performance with simulated phosphene vision.

  APPROACH: Forty normally sighted subjects completed two visual tasks under three conditions. The phosphene condition converted the image to simulated phosphenes displayed on a virtual reality headset. The SSD condition provided auditory information via stereo headphones, translating the image into sound: horizontal information was encoded as stereo timing differences between ears, vertical information as pitch, and pixel intensity as audio intensity. The third condition combined phosphenes and SSD. Tasks comprised light localisation from the Basic Assessment of Light and Motion (BaLM) and the Tumbling-E from the Freiburg Acuity and Contrast Test (FrACT). To examine learning effects, twenty of the forty subjects received SSD training prior to assessment.

  MAIN RESULTS: Combining phosphenes with auditory SSD provided better light localisation accuracy than either phosphenes or SSD alone, suggesting a compound benefit of integrating modalities. Although response times for SSD-only were significantly longer than for all other conditions, response times in the combined condition were as fast as phosphene-only, highlighting that audio-visual integration provided both response time and accuracy benefits. Prior SSD training improved localisation accuracy and speed in the SSD-only (as expected) and combined conditions compared to untrained SSD-only. Integration of the two modalities did not improve spatial resolution task performance, with resolution limited to that of the higher-resolution modality (SSD).

  SIGNIFICANCE: Combining the phosphene (visual) and SSD (auditory) modalities was effective even without SSD training and led to improvements in light localisation accuracy and response times. Spatial resolution performance was dominated by the auditory SSD. The results suggest there may be a benefit to including auditory cues when training vision prosthesis recipients.
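The image-to-sound mapping described in the abstract (columns scanned over time, rows as pitch, intensity as loudness) can be illustrated with a toy sketch. This is not The vOICe's actual algorithm: in particular, a simple left-to-right amplitude pan stands in here for the stereo timing differences the abstract describes, and the frequency range, scan duration, and sample rate are arbitrary assumptions.

```python
import numpy as np

def image_to_stereo(image, duration=1.0, sr=8000, f_lo=200.0, f_hi=2000.0):
    """Toy vOICe-style sonification of a 2-D intensity image.

    Columns are scanned left to right over `duration` seconds; each row
    is a sine tone whose pitch rises with height in the image; pixel
    intensity scales tone amplitude. Returns a (2, n) stereo signal.
    """
    h, w = image.shape
    n = int(duration * sr)
    t = np.arange(n) / sr
    # One frequency per image row: top rows map to high pitch.
    freqs = np.linspace(f_hi, f_lo, h)
    # Which image column is "under the scan line" at each sample.
    col_idx = np.minimum((t / duration * w).astype(int), w - 1)
    # Amplitude of each row's tone follows the scanned column's pixels.
    amps = image[:, col_idx]                              # (h, n)
    mono = (amps * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
    # Linear pan approximates the horizontal interaural cue.
    pan = col_idx / max(w - 1, 1)
    return np.stack([mono * (1 - pan), mono * pan])       # (left, right)
```

A single lit pixel in the top-left corner, for example, produces a brief high-pitched tone at the start of the scan, heard only in the left channel.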
- Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering (IOP Publishing, 2016-04)
  Barnes, Nick; Scott, Adele; Lieby, Paulette; Petoe, Matthew; McCarthy, Chris; Stacey, Ashley; Ayton, Lauren; Sinclair, Nicholas; Shivdasani, Mohit; Lovell, Nigel; McDermott, Hugh; Walker, Janine; BVA Consortium

  OBJECTIVE: One strategy to improve the effectiveness of prosthetic vision devices is to process incoming images to ensure that key information can be perceived by the user. This paper presents the first comprehensive results of vision function testing for a suprachoroidal retinal prosthetic device utilizing 20 stimulating electrodes. Further, we investigate whether image filtering can improve results on a light localization task for implanted participants compared to minimal vision processing. No controlled studies of implanted participants have yet investigated whether vision processing methods that are not task-specific can lead to improved results.

  APPROACH: Three participants with profound vision loss from retinitis pigmentosa were implanted with a suprachoroidal retinal prosthesis. All three completed multiple trials of a light localization test, and one participant completed multiple trials of acuity tests. The visual representations used were: Lanczos2 (a high-quality Nyquist bandlimited downsampling filter); minimal vision processing (MVP); wide view regional averaging filtering (WV); scrambled; and system off.

  MAIN RESULTS: Using Lanczos2, all three participants successfully completed a light localization task and obtained a significantly higher percentage of correct responses than using MVP ([Formula: see text]) or with the system off ([Formula: see text]). Further, in a preliminary result using Lanczos2, one participant successfully completed grating acuity and Landolt C tasks, and showed significantly better performance ([Formula: see text]) on the grating acuity task compared to WV, scrambled, and system off.

  SIGNIFICANCE: Participants successfully completed vision tasks using a 20-electrode suprachoroidal retinal prosthesis. Vision processing with a Nyquist bandlimited image filter showed an advantage for a light localization task. This result suggests that this and more advanced, targeted vision processing schemes may become important components of retinal prostheses to enhance performance. ClinicalTrials.gov Identifier: NCT01503576
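For reference, the Lanczos2 filter named above is the a = 2 member of the Lanczos family: a sinc windowed by a wider sinc, truncated to two lobes (|x| < 2). The 1-D sketch below is illustrative only and is not the trial's software; a real image pipeline would apply the filter separably in 2-D with explicit edge handling, and the kernel-stretching and normalization choices here are assumptions.

```python
import numpy as np

def lanczos2_kernel(x):
    """Lanczos kernel with a = 2: sinc(x) * sinc(x/2) for |x| < 2, else 0.
    (np.sinc is the normalized sinc, sin(pi*x)/(pi*x).)"""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 2, np.sinc(x) * np.sinc(x / 2), 0.0)

def lanczos2_downsample_1d(signal, factor):
    """Downsample a 1-D signal by an integer `factor` with a Lanczos2
    filter, stretching the kernel by `factor` so it acts as a low-pass
    (approximately Nyquist-bandlimiting) filter before decimation."""
    n_out = len(signal) // factor
    out = np.zeros(n_out)
    for i in range(n_out):
        center = i * factor + (factor - 1) / 2      # source-space center
        # Support of the stretched kernel: |x - center| < 2 * factor.
        lo = max(int(np.ceil(center - 2 * factor)), 0)
        hi = min(int(np.floor(center + 2 * factor)), len(signal) - 1)
        idx = np.arange(lo, hi + 1)
        w = lanczos2_kernel((idx - center) / factor)
        out[i] = (signal[idx] * w).sum() / w.sum()  # normalize weights
    return out
```

Normalizing by the weight sum keeps flat regions flat, so a constant signal downsamples to the same constant.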