Cortical processing of vision

High-acuity vision

Almost nothing is known about the mechanisms of vision at the center of gaze, and yet that knowledge is fundamental to understanding how vision works. For example, our visual perception appears to operate at a resolution achievable only at the center of gaze (the "fovea"), with eye movements constantly redirecting the fovea to regions of interest. Knowledge of how the primary visual cortex (V1) processes this information therefore has obvious basic-science relevance, as well as potential clinical relevance. The proposed work overcomes myriad technical challenges to provide the first detailed look at how foveal V1 processes visual information, using recordings in the macaque monkey, the most common model of human trichromatic vision. The work involves both methodological and intellectual innovation; it will be the product of a team approach that unites scientists with the required computational and empirical expertise. Because almost all knowledge of the visual system is based outside the fovea, we expect to discover new mechanisms of visual processing dictated by the unique constraints of high-acuity color vision.

Experimental collaborator: Bevil Conway (NEI). Recently funded by NIH -- stay tuned!

Color vision

Color and form are often treated as separable features of an image: one can recognize shapes in achromatic photographs and conceptualize the color of an object abstracted from its shape. Yet color-specific processing is embedded throughout the visual pathway, beginning at its first stage, where three types of light sensors ("cones") with sensitivity to different parts of the visible spectrum convert light into electrical impulses. The color at a given point can in principle be determined by comparing the activations of the three cone types, but separate color channels are maintained until the primary visual cortex (V1), where they are finally combined in neurons that are concurrently sensitive to different spatial patterns. Indeed, while it was initially thought that color and form were processed through separate pathways within V1, recent experiments have shown that a surprising fraction of V1 neurons mix the two in diverse ways. Exactly how this mixing occurs, and for what purpose, are critical open questions in understanding human vision, and they have been difficult to answer because such mixing is too complicated to characterize with traditional approaches. This project will combine large-scale recordings of V1 neural activity during tailored "spatio-chromatic" visual stimulation with new computational approaches, offering an unprecedented high-resolution description of color processing within V1 while determining the underlying function of spatio-chromatic mixing in supporting natural color vision.
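The idea that color can be recovered by comparing cone activations can be made concrete with a minimal sketch. This is a simplified textbook illustration of cone-opponent channels, not the model used in this project; the channel definitions and weights below are conventional assumptions for illustration only.

```python
# Illustrative sketch (not this project's model): opponent "color channels"
# computed by comparing long- (L), medium- (M), and short-wavelength (S)
# cone activations. Channel definitions are simplified textbook conventions.

def cone_opponent(L: float, M: float, S: float) -> dict:
    """Map three cone activations (arbitrary units) onto classic
    opponent channels."""
    return {
        "luminance": L + M,          # achromatic channel (S contributes little)
        "red_green": L - M,          # L vs. M opponency
        "blue_yellow": S - (L + M),  # S vs. (L + M) opponency
    }

# A light driving L and M cones equally produces no red-green signal:
signals = cone_opponent(1.0, 1.0, 0.5)
print(signals["red_green"])  # 0.0
```

Note that none of these opponent channels carries spatial pattern information by itself; the spatio-chromatic mixing studied here concerns how V1 neurons combine such chromatic comparisons with spatial selectivity.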

Experimental collaborator: Bevil Conway (NEI)

Binocular integration and disparity selectivity

Although we have two distinct visual sensors, the eyes, visual perception usually consists of a single fused image. This requires the visual system to combine disparate "monocular" information from each retina into a "binocular" representation: a process thought to occur largely in the primary visual cortex (V1), which receives monocular inputs from the lateral geniculate nucleus (LGN) but sends largely binocular outputs. Combining information from the two eyes is not trivial, because objects at different depths have binocular disparity: their images fall at shifted positions in the two eyes. The visual system must therefore take disparity into account when combining information across the eyes, and indeed V1 contains a large number of disparity-tuned neurons. However, despite a number of conceptual models of how disparity processing *should* occur, no model has yet adequately reproduced the disparity tuning of V1 neurons measured with modern physiological approaches.
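The geometric origin of binocular disparity can be sketched in a few lines. This is a hedged illustration of the viewing geometry only, not a model of V1 processing; the function name, the midline-point simplification, and the sign convention are assumptions made for this example.

```python
import math

# Illustrative geometry sketch: angular disparity of a point on the midline
# at distance z while the eyes fixate a midline point at distance f, with
# interocular separation i (all distances in the same units, e.g. meters).

def angular_disparity(i: float, f: float, z: float) -> float:
    """Vergence angle to the fixation point minus vergence angle to the
    target (radians). Under this sign convention, points farther than
    fixation give positive (uncrossed) disparity, nearer points negative."""
    vergence_fix = 2.0 * math.atan(i / (2.0 * f))
    vergence_tgt = 2.0 * math.atan(i / (2.0 * z))
    return vergence_fix - vergence_tgt

# A point at the fixation distance has zero disparity:
print(angular_disparity(0.065, 1.0, 1.0))  # 0.0
# A farther point has nonzero disparity that the brain must account for:
print(angular_disparity(0.065, 1.0, 2.0) > 0.0)  # True
```

Disparity-tuned V1 neurons respond preferentially to particular values of this quantity; the open question addressed here is what computation on the two monocular inputs produces that tuning.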

Experimental collaborator: Bruce Cumming (NEI)