Building Marr's bridges

"To understand the relationship between behavior and the brain one has to begin by defining the function, or the computational goal, of a complete behavior. Only then can a neuroscientist determine how the brain achieves that goal" — David Marr

Recent News

March 2017 Co-organizing the Neural Time Series Coding sprint at NYU Center for Data Science. [meeting]

February 2017 Two posters and a talk at Cosyne 2017.

What can we learn about natural vision from color tuning curves? (Poster).
Deep learning approaches towards generating neuronal morphology (Poster, with Roozbeh Farhoodi).
Distinct eye movement strategies differentially reshape visual space (Talk, with Daniel Wood).

December 2016 Poster at NIPS Workshop: Brains and Bits. Deep learning for ventral vision. [poster]

November 2016 Talk at the Undergraduate Neuroscience Seminar, Loyola University of Chicago. Computation: by neurons and for neuroscience.

September 2016 Talk at the Motor Control Group, Brain and Mind Institute, Western Ontario. On the computational complexity of movement sequence chunking.

August 2016 New paper in PLoS One on reward coding in the premotor and motor cortices. [paper] [F1000]

August 2016 Talk at PyData Chicago, the Chicago edition of a global data science conference. I will present Pyglmnet, a recently developed Python package that efficiently implements elastic-net regularized generalized linear models (GLMs). [schedule] [video] [slides] [tutorials]

August 2016 Deep Learning Summer School, 4th edition of an annual week-long lecture series on Deep Learning run by Yoshua Bengio and Aaron Courville. ~25% acceptance rate. [meeting]

I'll present synthetic neurophysiology approaches to characterize the functional properties of neurons in visual area V4. We use convnets to predict spiking activity recorded from monkeys freely viewing natural scenes and then show artificial stimuli to these model neurons. [poster]

July 2016 New paper in eLife on neural correlates of motor plans to uncertain targets. [paper] [commentary]

July 2016 New paper in Nature Communications on a theory of movement sequence chunking. [paper]

June 2016 Spykes, a new Python package that makes standard spike-train and tuning-curve analyses easy and good-looking. [github]

June 2016 New paper in Journal of Neurophysiology on disambiguating the role of frontal eye fields in spatial and feature attention during natural scene search using generalized linear modeling of spike trains. [paper]

May 2016 New paper in Journal of Neurophysiology on expected reward modulation of FEF activity during natural scene search. [paper]

April 2016 Pyglmnet, a new Python package for elastic-net regularized generalized linear models! [github] [documentation]

March 2016 New paper in Neuroimage on decoding natural scene category representations with MEG. [paper]

February 2016 Two workshop talks at Cosyne 2016: On the computational complexity of movement sequence chunking, and The representation of uncertainty in the motor system. [meeting]

Photo credit: Titipat Achakulvisut

Pavan Ramkumar

Departments of Neurobiology and Physical Medicine & Rehabilitation, Northwestern University

Rehabilitation Institute of Chicago, 345 East Superior Street, Chicago, IL 60611. Phone: (312) 608-7178

Over the past decade, unprecedented advances in experimental techniques for monitoring brain function, and in computational infrastructure and algorithms, have created tremendous opportunities to reverse engineer the brain basis of perception and behavior. To transition our efforts from a pre-Galilean age of individualized discoveries into an era of integrated theory development, we need to make two major computational advances. First, we need to develop models of perception and behavior that generate testable neurobiological predictions. Second, we need to analyze data from neuroscience experiments to test these predictions. I work at the intersection of these two computational endeavors.

A computational lens into brain function enables us to formalize perception and behavior as the result of neural computations. This branch of my research brings computational motor control to the brain basis of movement and computer vision models to the brain basis of vision. Computational tools in neuroscience enable us to make sense of large, heterogeneous and noisy datasets. This branch of my research brings machine-learning techniques and open source software development to neural data analysis. Specifically, I collaborate with theorists, data scientists, and experimentalists in both human neuroimaging and primate neurophysiology to study natural scene perception, visual search, motor planning, and movement sequence learning.

The rate of technical advancement in neuroscience will result in an avalanche of data; yet for the foreseeable future, our experiments will undersample both an animal's behavioral repertoire and the full variability of its brain state. This combination of data deluge and partial observability makes testing even the most neurobiologically grounded theories of brain function extremely challenging. Advances in deep learning can help address both problems. Modern deep neural networks have as many neurons as a larval zebrafish, and they can already match human behavior in object recognition and visually guided reaching movements. Importantly, unlike animal brains, deep neural networks with complex behaviors are fully observable and controllable: we can record their state throughout learning, modify their weights, drop out neurons, or rewrite their loss functions. Thus, we are confronted with a choice: measure and perturb real brains imprecisely, or measure and perturb deep network models of brain-like behaviors precisely. As a sandbox for sharpening our theories, experiments, and data analysis tools, my research program will integrate this approach alongside traditional computational neuroscience work to model and analyze a wide range of behaviors in visual perception and motor control.
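To make "fully observable and controllable" concrete, here is a minimal, hypothetical sketch (a toy PyTorch network, not code from any of the projects described here) that records a hidden layer's activity with a forward hook and then silences one unit by zeroing its incoming weights:

```python
# Hypothetical toy example: record hidden activity and "lesion" one unit
# of a small network. The architecture and numbers are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy two-layer network standing in for a model of a sensory area.
net = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 10))

activations = {}

def record(name):
    # Forward hook: store this layer's output every time the network runs.
    def hook(module, inputs, output):
        activations[name] = output.detach().clone()
    return hook

net[1].register_forward_hook(record("hidden"))

x = torch.randn(5, 100)   # a batch of "stimuli"
baseline = net(x)         # responses of the intact network

# Silence hidden unit 7 by zeroing its incoming weights and bias,
# then measure how much the network's output changes.
with torch.no_grad():
    net[0].weight[7, :] = 0.0
    net[0].bias[7] = 0.0
lesioned = net(x)

print("hidden activity shape:", activations["hidden"].shape)
print("mean output change after lesion:",
      (baseline - lesioned).abs().mean().item())
```

Every quantity in this toy model, from each unit's activity to the effect of the lesion, is exactly measurable, which is precisely what current experimental techniques cannot offer for real brains.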

Publications

In Preparation

[17] Ramkumar P, Turner RS, Körding KP. Optimization costs underlying movement sequence chunking in basal ganglia.

[16] Ramkumar P, Fernandes HL, Smith MA, Körding KP. Hue tuning during active vision in natural scenes.

In Review

[15] Glaser J, Perich M, Ramkumar P, Miller LE, Körding KP. Dorsal premotor cortex encodes ubiquitous probability distributions.

2016

[14] Ramkumar P, Cooler S, Dekleva BM, Miller LE, Körding KP. Premotor and motor cortices encode reward. PLoS One, 11(8): e0160851. [pdf] [F1000]

[13] Ramkumar P*, Lawlor PN*, Glaser JI, Wood DW, Segraves MA, Körding KP. Feature-based attention and spatial selection in frontal eye fields during natural scene search. Journal of Neurophysiology, Epub ahead of print. [paper]

[12] Glaser JI*, Wood DW*, Lawlor PN, Ramkumar P, Körding KP, Segraves MA. Frontal eye field represents expected reward of saccades during natural scene search. Journal of Neurophysiology, Epub ahead of print. [pdf]

[11] Dekleva BM, Ramkumar P, Wanda PA, Körding KP, Miller LE. Uncertainty leads to persistent effects on reach representations in dorsal premotor cortex. eLife, 5:e14316. [pdf] [commentary]

[10] Ramkumar P, Acuna DE, Berniker M, Grafton S, Turner RS, Körding KP. Chunking as the result of an efficiency–computation tradeoff. Nature Communications, 7:12176. [pdf]

[9] Ramkumar P, Hansen BC, Pannasch S, Loschky LC. Visual information representation and natural scene categorization are simultaneous across cortex: An MEG study. Neuroimage, 134:295–304. [pdf]

2015

[8] Ramkumar P, Fernandes HL, Körding KP, Segraves MA. 2015. Modeling peripheral visual acuity enables discovery of gaze strategies at multiple time scales during natural scene search. Journal of Vision, 15(3):19. [pdf]

2014

[7] Ramkumar P, Parkkonen L, Hyvärinen A. 2014. Group-level spatial independent component analysis of Fourier envelopes of resting-state MEG data. Neuroimage, 86:480–491. [pdf]

2013

[6] Ramkumar P, Jas M, Pannasch S, Parkkonen L, Hari R. 2013. Feature-specific information processing precedes concerted activation in human visual cortex. Journal of Neuroscience, 33: 7691–7699. [pdf]

[5] Hyvärinen A, Ramkumar P. 2013. Testing independent component patterns by inter-subject or inter-session consistency. Frontiers in Human Neuroscience, 7 (94). [pdf]

2012

[4] Ramkumar P, Parkkonen L, Hari R, Hyvärinen A. 2012. Characterization of neuromagnetic brain rhythms over time scales of minutes using spatial independent component analysis. Human Brain Mapping, 33: 1648–1662. [pdf]

2010

[3] Hyvärinen A, Ramkumar P, Parkkonen L, Hari R. 2010. Independent component analysis of short-time Fourier transforms for spontaneous EEG/MEG analysis. Neuroimage, 49: 257–271. [pdf]

[2] Ramkumar P, Parkkonen L, Hari R. 2010. Oscillatory Response Function: Towards a parametric model of rhythmic brain activity. Human Brain Mapping, 31: 820–834. [pdf]

[1] Malinen S, Vartiainen N, Hlushchuk Y, Koskinen M, Ramkumar P, Forss N, Kalso E, Hari R. 2010. Aberrant spatiotemporal resting-state brain activation in patients with chronic pain. Proceedings of the National Academy of Sciences USA, 107: 6493–6497. [pdf]

Projects

Natural scene psychophysics (synthetic neurophysiology)

Visual search in natural scenes (frontal eye fields)

Rapid scene categorization (whole-scalp magnetoencephalography)

Movement chunking (basal ganglia)

Sensorimotor uncertainty (premotor cortex)

Reward (premotor and motor cortices)

Color perception in natural scenes (area V4)

Software

Pyglmnet: A Python package for elastic-net regularized generalized linear models

Generalized linear models (GLMs) are powerful tools for multivariate regression. They allow us to model different types of target variables (real-valued, categorical, counts, ordinal, etc.) using multiple predictors or features. In the era of big data and high-performance computing, GLMs have come to be widely applied across the sciences, economics, business, and finance.

When exploring data with a large number of predictor variables, it is important to regularize. Regularization prevents overfitting by adding a penalty term to the negative log-likelihood, and can be used to encode prior knowledge about the parameters in a structured form.
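For concreteness, elastic-net regularized GLM solvers typically minimize an objective of the following form (the standard glmnet-style formulation; pyglmnet's exact parameterization may differ in details):

$$
\min_{\beta_0,\,\beta}\;\; -\frac{1}{N}\sum_{i=1}^{N}\log p\!\left(y_i \mid \beta_0 + x_i^\top \beta\right) \;+\; \lambda\left[\alpha\,\lVert\beta\rVert_1 + \frac{1-\alpha}{2}\,\lVert\beta\rVert_2^2\right]
$$

where $\lambda$ sets the overall strength of regularization and $\alpha \in [0, 1]$ trades off the sparsity-inducing L1 (lasso) penalty against the L2 (ridge) penalty.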

Despite the attractiveness of regularized GLMs, the available tools in the Python data science ecosystem are highly fragmented. More specifically, statsmodels provides a wide range of link functions but no regularization. scikit-learn provides elastic net regularization but only for linear models. lightning provides elastic net and group lasso regularization, but only for linear and logistic regression.

Pyglmnet is a response to this fragmentation.
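As a minimal sketch of how this looks in practice (argument names follow pyglmnet's documented API at the time of writing and may differ across versions; the simulated data are placeholders):

```python
# Fit an elastic-net regularized Poisson GLM with pyglmnet on simulated data.
import numpy as np
from pyglmnet import GLM

rng = np.random.RandomState(42)

# Simulated design matrix (n samples x p features) and Poisson counts.
n_samples, n_features = 1000, 10
X = rng.normal(size=(n_samples, n_features))
beta_true = rng.normal(scale=0.2, size=n_features)
y = rng.poisson(np.exp(X @ beta_true))

# distr selects the noise model and link function; alpha mixes the L1 and L2
# penalties; reg_lambda sets the overall regularization strength.
glm = GLM(distr='poisson', alpha=0.5, reg_lambda=0.1)
glm.fit(X, y)

y_hat = glm.predict(X)  # predicted mean spike counts
print(y_hat[:5])
```

Swapping distr for another supported noise model (e.g., 'binomial' or 'gaussian') changes the likelihood without changing the rest of the workflow.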

Spykes: A Python package for spike data analysis and visualization

Almost any electrophysiology study of awake behaving animals relies on a battery of standard analyses.

Raster plots and peri-stimulus time histograms (PSTHs) aligned to stimuli and behavior provide a snapshot visual description of neural activity. Similarly, tuning curves are the standard way to characterize how neurons encode stimuli or behavioral variables. With the increasing popularity of population recordings, maximum-likelihood decoders based on tuning models are becoming part of this standard.

Yet, virtually every lab relies on a set of in-house analysis scripts to go from raw data to summaries. We want to change this with Spykes, a collection of Python tools to make visualization and analysis of spiking neural data easy and reproducible.
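As an illustration of the first of these standard analyses, here is a generic PSTH computation in plain NumPy; this is a sketch of the underlying computation on simulated data, not Spykes' own API:

```python
# Compute a peri-stimulus time histogram (PSTH) from spike times aligned
# to event times. Plain NumPy sketch with simulated data.
import numpy as np

def psth(spike_times, event_times, window=(-0.5, 1.0), bin_size=0.05):
    """Trial-averaged firing rate (spikes/s) in bins around each event."""
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in event_times:
        aligned = spike_times - t                       # align spikes to event
        counts += np.histogram(aligned, bins=edges)[0]  # accumulate counts
    rate = counts / (len(event_times) * bin_size)       # convert to spikes/s
    centers = edges[:-1] + bin_size / 2
    return centers, rate

# Simulated example: a ~20 Hz neuron, 50 trials.
rng = np.random.RandomState(0)
spike_times = np.sort(rng.uniform(0, 100, size=2000))
event_times = np.linspace(1, 99, 50)
centers, rate = psth(spike_times, event_times)
print(rate.round(1))
```

Raster plots, tuning curves, and population decoders build on the same aligned spike counts, which is why a shared, tested implementation pays off quickly.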

Collaborators

Graph visualization of collaborators and projects

Below is a visualization of my publications and ongoing projects embedded in my network of 28 co-authors, laid out with a force-directed graph algorithm. Cool-colored nodes represent authors and warm-colored nodes represent projects. Links represent authors who have collaborated together on a project. Co-author nodes are sized according to the number of papers or projects I have in common with them. If you're viewing this on a desktop, hovering over a node shows the name of the author or the title of the publication or project it represents. A minimal sketch of how such a layout can be computed appears after the legend below.

Legend: author nodes are me, PI co-authors, and grad student or postdoc co-authors; project nodes are accepted/published articles, articles under review/revision, and ongoing projects.
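For readers curious how such a layout can be computed, here is a small hypothetical sketch using networkx and matplotlib (the interactive figure above uses a browser-based force-directed layout; the names below are placeholders, not the actual data):

```python
# Hypothetical co-authorship graph with a force-directed (spring) layout.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()

# Author nodes (cool colors) and project nodes (warm colors).
authors = ["Me", "Coauthor A", "Coauthor B"]
projects = ["Project 1", "Project 2"]
G.add_nodes_from(authors, kind="author")
G.add_nodes_from(projects, kind="project")

# An edge links an author to a project they worked on.
G.add_edges_from([
    ("Me", "Project 1"), ("Coauthor A", "Project 1"),
    ("Me", "Project 2"), ("Coauthor B", "Project 2"),
])

# Node size scales with degree: for authors, the number of projects they are on.
sizes = [300 * G.degree(n) for n in G.nodes]
colors = ["steelblue" if G.nodes[n]["kind"] == "author" else "darkorange"
          for n in G.nodes]

pos = nx.spring_layout(G, seed=0)  # force-directed layout
nx.draw(G, pos, node_size=sizes, node_color=colors, with_labels=True)
plt.show()
```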