Cortical processing of vision

Color vision

Color and form are often treated as separable features of an image: one can recognize shapes in achromatic photographs and conceptualize the color of an object abstracted from its shape. Yet color-specific processing is embedded throughout the visual pathway, beginning at its first stage, where three different types of light sensors (“cones”) sensitive to different parts of the visible spectrum initially convert light into electrical impulses. The color of a given point can in principle be determined by comparing the activations of the three cone types, but separate color channels are maintained until the primary visual cortex (V1), where they are finally combined in neurons that are simultaneously selective for different spatial patterns. Indeed, while it was initially thought that color and form were processed through separate pathways within V1, recent experiments have highlighted that a surprising fraction of V1 neurons mixes the two in a diversity of ways. Exactly how this mixing occurs, and for what purpose, are critical open questions in understanding human vision, and they have been difficult to answer because such mixing is too complicated to characterize using traditional approaches. This project will combine large-scale recording of V1 neural activity during tailored “spatio-chromatic” visual stimulation with new computational approaches, offering an unprecedented high-resolution description of color processing within V1 while determining the underlying function of spatio-chromatic mixing in supporting natural color vision.
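
As a toy illustration of the claim that color can in principle be recovered by comparing cone activations, the sketch below projects a light spectrum onto three idealized cone sensitivities and forms standard opponent comparisons. This is a minimal Python sketch: the Gaussian sensitivity curves and all parameters are illustrative stand-ins, not measured cone fundamentals.

    import numpy as np

    wavelengths = np.arange(400, 701)  # visible spectrum, in nm

    def cone_sensitivity(peak_nm, width_nm=40.0):
        """Idealized Gaussian cone sensitivity curve (illustrative, not measured)."""
        return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

    # Approximate peak sensitivities of the S, M, and L cones
    S = cone_sensitivity(420.0)
    M = cone_sensitivity(530.0)
    L = cone_sensitivity(560.0)

    def cone_responses(spectrum):
        """Project a light spectrum onto the three cone classes."""
        return tuple(np.dot(c, spectrum) for c in (L, M, S))

    # Example: a light with most of its energy near 600 nm
    spectrum = np.exp(-0.5 * ((wavelengths - 600.0) / 20.0) ** 2)
    l, m, s = cone_responses(spectrum)

    # No single cone activation determines color; it is recovered by
    # *comparing* activations, e.g. through opponent channels:
    red_green = l - m              # L vs. M opponency
    blue_yellow = s - (l + m) / 2  # S vs. (L+M) opponency
    luminance = l + m              # achromatic channel
    print(red_green, blue_yellow, luminance)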

Experimental collaborator: Bevil Conway (NEI)

Binocular integration and disparity selectivity

Despite having two distinct visual sensors (the eyes), visual perception usually consists of a single fused image. This requires the visual system to combine disparate “monocular” information from each retina into a “binocular” representation: a process thought to occur largely in the primary visual cortex (V1), whose inputs from the lateral geniculate nucleus (LGN) are monocular and whose outputs are largely binocular. Combining information across the eyes is not trivial, because objects at different depths have binocular disparity: their images fall at shifted relative positions on the two retinas. The visual system must therefore take disparity into account when combining information from each eye, and indeed V1 contains a large number of disparity-tuned neurons. However, despite a number of conceptual models for how disparity processing *should* occur, no model has thus far been able to adequately reproduce the disparity tuning of V1 neurons measured with modern physiological approaches.
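
One influential conceptual model of this computation is the binocular energy model, in which a disparity-tuned neuron combines left- and right-eye receptive fields offset by its preferred disparity and squares the pooled output. Below is a minimal Python sketch of a position-shift variant using one-dimensional Gabor filters; the stimulus, filter parameters, and units are all illustrative.

    import numpy as np

    x = np.linspace(-2.0, 2.0, 401)  # retinal position, arbitrary units

    def gabor(center, sigma=0.5, freq=2.0, phase=0.0):
        """1D Gabor filter: the standard model of a V1 simple-cell receptive field."""
        return (np.exp(-0.5 * ((x - center) / sigma) ** 2)
                * np.cos(2.0 * np.pi * freq * (x - center) + phase))

    def bar(center, width=0.05):
        """A narrow bright bar at a given retinal position."""
        return np.exp(-0.5 * ((x - center) / width) ** 2)

    def energy_response(stimulus_disparity, preferred_disparity):
        """Binocular energy model (position-shift variant): the left- and
        right-eye receptive fields are offset by the preferred disparity;
        the two monocular filter outputs are summed, squared, and pooled
        over a quadrature pair of phases."""
        left_img = bar(0.0)                  # bar at position 0 in the left eye...
        right_img = bar(stimulus_disparity)  # ...appears shifted in the right eye
        response = 0.0
        for phase in (0.0, np.pi / 2.0):     # quadrature pair -> phase invariance
            left = np.dot(gabor(0.0, phase=phase), left_img)
            right = np.dot(gabor(preferred_disparity, phase=phase), right_img)
            response += (left + right) ** 2
        return response

    # The model cell responds most when the stimulus disparity matches the
    # positional offset between its two monocular receptive fields.
    disparities = np.linspace(-1.0, 1.0, 81)
    tuning = [energy_response(d, preferred_disparity=0.3) for d in disparities]
    print(disparities[np.argmax(tuning)])  # -> 0.3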

Experimental collaborator: Bruce Cumming (NEI)

Statistical models of visual neurons (general)

With modern neurophysiological methods able to record neural activity throughout the visual pathway in the context of arbitrarily complex visual stimulation, our understanding of visual system function is becoming limited by the available models of visual neurons that can be directly related to such data. Different forms of statistical models are now being used to probe the cellular and circuit mechanisms shaping neural activity, understand how neural selectivity to complex visual features is computed, and derive the ways in which neurons contribute to systems-level visual processing. However, models that are able to more accurately reproduce observed neural activity often defy simple interpretations. As a result, rather than being used solely to connect with existing theories of visual processing, statistical modeling will increasingly drive the evolution of more sophisticated theories.
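
As one concrete example of such a statistical model, the sketch below fits a linear-nonlinear-Poisson (LNP) cascade, a common baseline for relating stimuli to spike counts, to simulated data by gradient ascent on the Poisson log-likelihood. Everything here (the “cell,” its filter, the nonlinearity, and the fitting parameters) is hypothetical and for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- Simulated data from a hypothetical LNP "cell" (illustration only) ---
    n_trials, dim = 5000, 40
    stimuli = rng.standard_normal((n_trials, dim))        # white-noise stimuli
    true_filter = np.sin(np.linspace(0.0, 3.0 * np.pi, dim))
    true_filter /= np.linalg.norm(true_filter)            # ground-truth receptive field

    def lnp_rate(stim, k):
        """Linear-nonlinear cascade: filter the stimulus, then rectify."""
        return np.log1p(np.exp(stim @ k))                 # softplus output nonlinearity

    spikes = rng.poisson(lnp_rate(stimuli, true_filter))  # Poisson spike counts

    # --- Recover the filter by gradient ascent on the Poisson log-likelihood ---
    # L(k) = sum_i [ y_i * log(rate_i) - rate_i ], up to constants in y_i
    k = np.zeros(dim)
    for _ in range(500):
        rate = lnp_rate(stimuli, k)
        sigmoid = 1.0 / (1.0 + np.exp(-(stimuli @ k)))    # derivative of softplus
        grad = stimuli.T @ ((spikes / np.maximum(rate, 1e-12) - 1.0) * sigmoid)
        k += 1e-4 * grad

    print(np.corrcoef(k, true_filter)[0, 1])              # close to 1: filter recovered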