Perceptual learning in a non-human primate model of artificial vision

Nathaniel J. Killian, Milena Vurro, Sarah B. Keith, Margee J. Kyada, John S. Pezaris

Research output: Contribution to journal › Article › peer-review


Abstract

Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation.
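The gaze-contingent simulated phosphene vision described in the abstract can be illustrated with a minimal sketch: a fixed retinotopic layout of phosphene centers is shifted by the current gaze position, and the stimulus image is sampled at those shifted locations to set each phosphene's brightness. The code below is an assumption-laden simplification rather than the authors' actual rendering pipeline; the function name, parameters (render_phosphenes, px_per_deg), and the point-sampling of brightness are hypothetical, and the study's phosphenes presumably had finite spatial extent and layouts derived from thalamic visuotopy.

```python
import numpy as np

def render_phosphenes(letter_image, gaze_xy, phosphene_xy_deg, px_per_deg):
    """Sample a stimulus image at gaze-shifted phosphene locations.

    letter_image     : 2D array, grayscale stimulus in screen pixels
    gaze_xy          : (x, y) current gaze position in screen pixels
    phosphene_xy_deg : (N, 2) phosphene centers in visual-field degrees,
                       fixed relative to the fovea (retinotopic layout)
    px_per_deg       : screen pixels per degree of visual angle
    Returns an (N,) array of phosphene brightnesses in [0, 1].
    """
    h, w = letter_image.shape
    # Gaze contingency: the phosphene pattern moves with the eye, so retinal
    # coordinates are converted to screen coordinates by adding gaze position.
    screen_xy = np.asarray(phosphene_xy_deg) * px_per_deg + np.asarray(gaze_xy)
    cols = np.clip(np.round(screen_xy[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(screen_xy[:, 1]).astype(int), 0, h - 1)
    # Each phosphene's brightness is the local stimulus intensity beneath it
    # (point sampling; a real simulation would integrate over a Gaussian blob).
    return letter_image[rows, cols] / 255.0


# Toy usage: a random phosphene layout viewed at one gaze position.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stimulus = rng.integers(0, 256, size=(600, 800)).astype(float)  # stand-in for a letter image
    layout = rng.uniform(-10, 10, size=(100, 2))  # 100 phosphenes within +/- 10 deg
    brightness = render_phosphenes(stimulus, gaze_xy=(400, 300),
                                   phosphene_xy_deg=layout, px_per_deg=30.0)
    print(brightness.shape)  # (100,)
```

In this framing, the mean phosphene density the abstract refers to would simply be the number of phosphene centers per unit area of visual field covered by the pattern.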

Original language: English (US)
Article number: 36329
Journal: Scientific Reports
Volume: 6
DOIs
State: Published - Nov 22 2016
Externally published: Yes

ASJC Scopus subject areas

  • General
