The goal of research in the NT lab is to understand the contributions of sensory neuron activity to perception and behavior. The underlying premise of our research is that such understanding requires interpreting large-scale recordings in sufficiently complex (“natural”) experimental contexts, ideally ones that engage goal-directed behavior. We therefore specialize in statistical modeling, which provides a means to directly test hypotheses about neural computations and physiological elements of the system using recorded neural activity. The scope of our work has expanded dramatically in the last few years, enabled by technological advances in both recording technology (large-scale physiological recordings) and machine learning tools. With our novel approaches now largely established, we are pursuing the experimental collaborations most likely to yield insights into fundamental aspects of sensory (and sensorimotor) brain function that have so far been largely inaccessible.
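To make “statistical modeling of recorded neural activity” concrete, here is a minimal sketch, assuming a standard linear-nonlinear Poisson (LNP) encoding model fit to simulated spike counts. It illustrates the general approach of testing a hypothesized computation (here, a temporal stimulus filter) against spiking data; the values and variable names are invented for the example and do not come from any lab project.

```python
# Minimal sketch: fit a linear-nonlinear Poisson (LNP) encoding model.
# The filter k is a hypothesis about which stimulus feature the neuron
# computes; recovering it from spikes is the simplest case of testing
# a computational hypothesis against recorded activity.
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: white-noise stimulus and a ground-truth temporal filter
T, D = 5000, 15                          # time bins, filter length (lags)
stim = rng.standard_normal(T)
lags = np.arange(D)
k_true = np.exp(-lags / 4.0) * np.sin(lags / 2.0)

# Design matrix of stimulus history: row t holds stim[t], stim[t-1], ...
X = np.zeros((T, D))
for lag in range(D):
    X[lag:, lag] = stim[: T - lag]

# Generate Poisson spike counts through an exponential nonlinearity
rate = np.exp(X @ k_true - 1.0)
spikes = rng.poisson(rate)

# Fit the filter by gradient ascent on the Poisson log-likelihood:
#   LL(k) = sum_t [ y_t * (x_t . k) - exp(x_t . k - 1) ]   (up to constants)
k_hat = np.zeros(D)
lr = 2e-4
for _ in range(3000):
    pred = np.exp(X @ k_hat - 1.0)
    k_hat += lr * (X.T @ (spikes - pred))    # dLL/dk

print("correlation between true and recovered filter:",
      round(float(np.corrcoef(k_true, k_hat)[0, 1]), 3))
```

In practice, models like this are fit to real recordings, extended with richer nonlinearities and additional covariates, and competing hypotheses are compared by their likelihoods on held-out data.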
Current projects

- Cortical processing of vision
  Projects with Bevil Conway (NEI) and Bruce Cumming (NEI)
- Visual processing at the center-of-gaze
  Projects with Jude Mitchell and Michele Rucci (U Rochester)
- Contextual modulation of sensory processing
  Projects with Hendrikje Nienborg (NEI)
- Active sensation in vision and whisking
  Projects with Scott Pluta (Purdue U), and with Farran Briggs and Ralf Haefner (U Rochester)
- Implications of deep learning for cortical processing, and vice versa
  Deep neural networks (DNNs) were inspired by how the human visual system was thought to work. Now that DNNs are successful across many application areas (including vision), what have we learned about the visual system from them? Conversely, how might we learn from brain-based sensory processing to understand and improve machine learning? (A toy sketch of this analogy appears after the project list.) [More coming soon.]
- Computation in pre-cortical vision and its underlying circuits (retina and LGN)
  Projects with Josh Singer (UMD) and Jonathan Demb (Yale).
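The analogy in the deep-learning project above can be made concrete with a toy convolutional stack whose stages loosely parallel the early visual hierarchy. This is a hypothetical sketch for illustration only; the layer sizes and stage labels are assumptions, not a model from any of these projects.

```python
# Toy sketch (assumption, not a published model): a convolutional stack
# whose stages loosely mirror retina/LGN -> V1 -> extrastriate cortex.
# Comparing such layers' learned filters with measured receptive fields
# is one concrete way DNNs and visual neuroscience inform each other.
import torch
import torch.nn as nn

visual_hierarchy = nn.Sequential(
    # "Retina/LGN-like": small receptive fields, local preprocessing
    nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
    # "V1-like": oriented filters over a larger spatial extent
    nn.Conv2d(8, 16, kernel_size=7, padding=3), nn.ReLU(),
    # "Extrastriate-like": pooling grows receptive field size and invariance
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
)

image = torch.randn(1, 1, 64, 64)     # a single grayscale "stimulus"
features = visual_hierarchy(image)
print(features.shape)                  # torch.Size([1, 32, 32, 32])
```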