We develop code for analyzing neurophysiology datasets in various contexts, with a focus on capturing the nonlinear, physiologically relevant computations performed in neural systems. Code is in both Matlab (historical) and Python.
The Nonlinear Input Model (NIM), published in McFarland et al. (PLoS CB, 2013). [More information]
A statistical model for describing nonlinear computation in sensory neurons. It takes the form of an LNLN (linear-nonlinear x2) cascade: the predicted firing rate is given as a sum over nonlinear inputs, passed through a spiking nonlinearity.
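The LNLN cascade can be sketched in a few lines of NumPy. This is a minimal illustration of the model's forward pass, not the published fitting code; the function names, the choice of rectified-linear upstream nonlinearities, and the softplus spiking nonlinearity are assumptions for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)          # example upstream subunit nonlinearity

def softplus(x):
    return np.log1p(np.exp(x))         # example spiking nonlinearity F

def nim_rate(stim, filters, signs):
    """Predicted firing rate r(t) = F( sum_i w_i * f_i( k_i . s(t) ) ).
    stim: (T, D) stimulus; filters: (n_subunits, D) linear filters k_i;
    signs: +1 (excitatory) or -1 (suppressive) weight on each subunit."""
    gen_signals = stim @ filters.T               # linear stage, (T, n_subunits)
    subunit_out = relu(gen_signals) * signs      # upstream nonlinearities f_i
    return softplus(subunit_out.sum(axis=1))     # sum, then spiking nonlinearity

# Toy usage with random stimulus and filters
rng = np.random.default_rng(0)
stim = rng.standard_normal((100, 20))
filters = rng.standard_normal((2, 20))
rates = nim_rate(stim, filters, np.array([1.0, -1.0]))
```

The two stacked nonlinear stages (subunit nonlinearities, then the spiking nonlinearity) are what make this an LNLN rather than a plain LN model.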
The Separable NIM (sNIM), published in Shi et al. (Sci Rep, 2019). [Github]
The sNIM is a version of the NIM (see above) that uses spatiotemporal filters comprised of space-time-separable elements. This allows detailed characterization of spatial and temporal sensitivity for neurons whose receptive fields are well approximated as separable, such as those in the retina.
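A space-time-separable filter is the outer product of a temporal kernel and a spatial kernel, so it needs only n_t + n_s parameters instead of n_t * n_s. A minimal sketch of the idea (kernel shapes and values are illustrative, not from the paper):

```python
import numpy as np

n_t, n_s = 12, 30                                     # time lags, spatial positions
k_t = np.exp(-np.arange(n_t) / 4.0)                   # example temporal kernel
k_s = np.exp(-0.5 * np.linspace(-3, 3, n_s) ** 2)     # example spatial kernel

# Full spatiotemporal filter: rank-1 outer product of the two kernels
K = np.outer(k_t, k_s)                                # shape (n_t, n_s)

# Filtering the stimulus can be done separably: collapse space first,
# then convolve the resulting 1-D signal with the temporal kernel.
stim = np.random.default_rng(1).standard_normal((200, n_s))   # (time, space)
spatial_proj = stim @ k_s
gen = np.convolve(spatial_proj, k_t)[: stim.shape[0]]  # causal generator signal
```

The rank-1 structure is the "separable" constraint: fitting k_t and k_s independently yields directly interpretable temporal and spatial sensitivity profiles.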
The Rectified Latent Variable Model (RLVM) published in Whiteway and Butts (J Neurophys, 2017). [Github]
The Rectified Latent Variable Model (RLVM) is a probabilistic model for describing the activity of a large population of neurons based on a much smaller set of inputs (i.e., latent variables). Key elements of the model that distinguish it from other approaches are the constraint that the latent variables be non-negative (like neural activity), and a lack of other constraints (i.e., they need not be uncorrelated, independent, Gaussian-distributed, etc.).
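The generative idea can be sketched as follows. This is a conceptual toy, not the published fitting code: population activity is reconstructed from a handful of rectified (hence non-negative) latent variables, with no orthogonality or Gaussianity imposed; the variable names and dimensions are made up for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(2)
n_neurons, n_latents, T = 50, 3, 500

# Non-negative latent variables: rectification makes them behave like
# neural activity, but they are otherwise unconstrained (can be correlated).
z = relu(rng.standard_normal((T, n_latents)))

# Population activity is a rectified linear readout of the latents.
W = rng.standard_normal((n_latents, n_neurons))   # coupling weights
rate = relu(z @ W + 0.1)                          # predicted (T, n_neurons) activity
```

In the actual model the latents and weights are fit to recorded population activity (e.g., with an autoencoder-style objective); the sketch only shows the constrained generative structure.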
Precise eye-tracking using V1 neural activity published in McFarland et al. (Nat Comm, 2014). [More info]
An algorithm that uses probabilistic models (see NIM) of the stimulus processing of visual cortical neurons to infer an animal's eye position from the spiking activity of a recorded neural population.
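The core inference step can be illustrated with a toy 1-D version: for each candidate eye position, shift the stimulus, compute each neuron's model-predicted rate, and score the observed spikes under a Poisson likelihood; the population log-likelihood peaks at the true eye position. The simple exponential LN rate model and all variable names here are assumptions for the sketch, standing in for the fitted NIMs used in the paper.

```python
import numpy as np

def poisson_loglik(spikes, rates):
    """Poisson log-likelihood of observed spike counts, up to a constant."""
    rates = np.clip(rates, 1e-6, None)
    return np.sum(spikes * np.log(rates) - rates)

def infer_eye_position(spikes, stim, filters, positions):
    """spikes: (n_neurons,) observed counts; stim: 1-D spatial stimulus;
    filters: (n_neurons, n_pixels) spatial filters; positions: candidate
    shifts (pixels). Returns the maximum-likelihood eye position."""
    lls = []
    for shift in positions:
        shifted = np.roll(stim, shift)            # stimulus as seen at this eye position
        rates = np.exp(filters @ shifted)         # toy LN prediction per neuron
        lls.append(poisson_loglik(spikes, rates))
    return positions[int(np.argmax(lls))]

# Toy usage: simulate spikes at a true shift of 3 pixels and recover it
rng = np.random.default_rng(3)
stim = rng.standard_normal(40)
filters = 0.5 * rng.standard_normal((20, 40))
spikes = np.exp(filters @ np.roll(stim, 3))       # noiseless expected counts
recovered = infer_eye_position(spikes, stim, filters, np.arange(-5, 6))
```

Because the likelihood is summed over the population, even weakly tuned neurons contribute, which is what makes population-level eye-position inference precise.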
GLM implementation of stimulus- and choice-driven activity in V2 and V3, supporting Quinn et al. (Nat Comm, 2021) [Github]
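The basic structure of such a GLM can be sketched generically: spike counts are modeled as Poisson with rate exp(X @ w), where the design matrix X stacks stimulus covariates with a binary choice regressor, and the weights are fit by gradient ascent on the Poisson log-likelihood. The design matrix and fitting details below are a simplified stand-in, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_stim = 400, 5

# Design matrix: stimulus covariates plus one binary choice regressor
X = np.column_stack([rng.standard_normal((T, n_stim)),
                     rng.integers(0, 2, T).astype(float)])
w_true = 0.3 * rng.standard_normal(n_stim + 1)
spikes = rng.poisson(np.exp(X @ w_true))           # simulated spike counts

# Fit by gradient ascent on the Poisson log-likelihood:
# d/dw [ sum_t (y_t * X_t.w - exp(X_t.w)) ] = X.T @ (y - exp(X @ w))
w = np.zeros(n_stim + 1)
for _ in range(2000):
    grad = X.T @ (spikes - np.exp(X @ w))
    w += 0.1 * grad / T
```

Fitting stimulus and choice terms jointly is what lets the model separate choice-related activity from residual stimulus drive.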
Neural deep network code currently in development
Check back soon...