Surya Ganguli
Assistant Professor of Applied Physics and, by courtesy, of Neurobiology
Academic Appointments

Assistant Professor, Applied Physics

Assistant Professor (By courtesy), Neurobiology

Member, Bio-X
Professional Education

Ph.D., UC Berkeley, Theoretical Physics (2004)

M.A., UC Berkeley, Mathematics (2004)

M.Eng., MIT, Electrical Engineering and Computer Science (1998)

B.S., MIT, Mathematics (1998)

B.S., MIT, Physics (1998)

B.S., MIT, Electrical Engineering and Computer Science (1998)
Current Research and Scholarly Interests
Theoretical / computational neuroscience
2015-16 Courses

Introduction to Biophysics: APPPHYS 205, BIO 126, BIO 226 (Win)
Theoretical Neuroscience: APPPHYS 293 (Spr)

Independent Studies (3)

Directed Reading in Neurosciences: NEPR 299 (Aut, Win, Spr)
Directed Studies in Applied Physics: APPPHYS 290 (Aut, Win, Spr, Sum)
Graduate Research: NEPR 399 (Aut, Win, Spr, Sum)

Prior Year Courses
2014-15 Courses

Introduction to Biophysics: APPPHYS 205, BIO 126, BIO 226 (Win)
Theoretical Neuroscience: APPPHYS 293 (Spr)
2013-14 Courses

Introduction to Biophysics: APPPHYS 205, BIO 126, BIO 226 (Win)
Theoretical Neuroscience: APPPHYS 293 (Spr)
2012-13 Courses

Introduction to Biophysics: APPPHYS 205, BIO 126, BIO 226 (Win)
Theoretical Neuroscience: APPPHYS 293 (Spr)
Stanford Advisees

Postdoctoral Faculty Sponsor
Friedemann Zenke 
Doctoral Dissertation Reader (AC)
Charles de Bourcy 
Doctoral Dissertation Co-Advisor (AC)
Lane McIntosh
All Publications

On simplicity and complexity in the brave new world of large-scale neuroscience
CURRENT OPINION IN NEUROBIOLOGY
2015; 32: 148-155
View details for DOI 10.1016/j.conb.2015.04.003
View details for Web of Science ID 000356198900020

Environmental Boundaries as an Error Correction Mechanism for Grid Cells
NEURON
2015; 86 (3): 827-839
Abstract
Medial entorhinal grid cells fire in periodic, hexagonally patterned locations and are proposed to support path-integration-based navigation. The recursive nature of path integration results in accumulating error and, without a corrective mechanism, a breakdown in the calculation of location. The observed long-term stability of grid patterns necessitates that the system either performs highly precise internal path integration or implements an external landmark-based error correction mechanism. To distinguish these possibilities, we examined grid cells in behaving rodents as they made long trajectories across an open arena. We found that error accumulates relative to time and distance traveled since the animal last encountered a boundary. This error reflects coherent drift in the grid pattern. Further, interactions with boundaries yield direction-dependent error correction, suggesting that border cells serve as a neural substrate for error correction. These observations, combined with simulations of an attractor network grid cell model, demonstrate that landmarks are crucial to grid stability.
View details for DOI 10.1016/j.neuron.2015.03.039
View details for Web of Science ID 000354069800021

Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences.
Frontiers in computational neuroscience
2015; 9: 92
Abstract
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with presynaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, postsynaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions.
View details for DOI 10.3389/fncom.2015.00092
View details for PubMedID 26257637
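The central analytic result described above can be illustrated with a minimal simulation, not taken from the paper: a Hebbian update with presynaptic competition (here implemented as renormalization of each input neuron's outgoing weights) drives the synaptic weights toward the conditional forward transition probabilities of the experienced sequence. The 3-state chain and all parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward transition probabilities of a 3-state Markov chain:
# P[j, i] = P(next state = j | current state = i); columns sum to 1.
P = np.array([[0.1, 0.7, 0.2],
              [0.6, 0.1, 0.3],
              [0.3, 0.2, 0.5]])

n = P.shape[0]
W = np.full((n, n), 1.0 / n)   # synaptic weights, uniform initial prior
eta = 0.005                    # small rate -> low-variance running estimate

state = 0
for _ in range(150_000):
    nxt = int(rng.choice(n, p=P[:, state]))
    # Hebbian correlation update: potentiate the synapse from the presynaptic
    # neuron coding the current state onto the postsynaptic neuron coding the
    # next state; presynaptic competition then renormalizes each input
    # neuron's outgoing weights.
    W[nxt, state] += eta
    W /= W.sum(axis=0, keepdims=True)
    state = nxt

print(np.round(W, 2))   # columns approximate the forward probabilities P
```

The learning rate here plays the role of the "balance of learning forces" in the abstract: it sets the effective prior and the variance of the learned weight distribution.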

Evidence for a causal inverse model in an avian corticobasal ganglia circuit
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2014; 111 (16): 6063-6068
Abstract
Learning by imitation is fundamental to both communication and social behavior and requires the conversion of complex, nonlinear sensory codes for perception into similarly complex motor codes for generating action. To understand the neural substrates underlying this conversion, we study sensorimotor transformations in songbird cortical output neurons of a basal ganglia pathway involved in song learning. Despite the complexity of sensory and motor codes, we find a simple, temporally specific, causal correspondence between them. Sensory neural responses to song playback mirror motor-related activity recorded during singing, with a temporal offset of roughly 40 ms, in agreement with short feedback loop delays estimated using electrical and auditory stimulation. Such matching of mirroring offsets and loop delays is consistent with a recent Hebbian theory of motor learning and suggests that cortico-basal ganglia pathways could support motor control via causal inverse models that can invert the rich correspondence between motor exploration and sensory feedback.
View details for DOI 10.1073/pnas.1317087111
View details for Web of Science ID 000334694000074
View details for PubMedID 24711417
Fast large scale optimization by unifying stochastic gradient and quasi-Newton methods. International Conference on Machine Learning (ICML) 2014
Exact solutions to the nonlinear dynamics of learning in deep neural networks. International Conference on Learning Representations (ICLR) 2014
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. Neural Information Processing Systems (NIPS) 2014

Investigating the role of firing-rate normalization and dimensionality reduction in brain-machine interface robustness.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
2013; 2013: 293-298
Abstract
The intraday robustness of brain-machine interfaces (BMIs) is important to their clinical viability. In particular, BMIs must be robust to intraday perturbations in neuron firing rates, which may arise from several factors including recording loss and external noise. Using a state-of-the-art decode algorithm, the Recalibrated Feedback Intention Trained Kalman filter (ReFIT-KF) [1], we introduce two novel modifications: (1) a normalization of the firing rates, and (2) a reduction of the dimensionality of the data via principal component analysis (PCA). We demonstrate in online studies that a ReFIT-KF equipped with normalization and PCA (NPC-ReFIT-KF) (1) achieves comparable performance to a standard ReFIT-KF when at least 60% of the neural variance is captured, and (2) is more robust to the undetected loss of channels. We present intuition as to how both modifications may increase the robustness of BMIs, and investigate the contribution of each modification to robustness. These advances, which lead to a decoder achieving state-of-the-art performance with improved robustness, are important for the clinical viability of BMI systems.
View details for DOI 10.1109/EMBC.2013.6609495
View details for PubMedID 24109682
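The two modifications described above can be sketched on synthetic data: per-channel firing-rate normalization followed by PCA, retaining just enough components to capture at least 60% of the neural variance. The data, dimensions, and variance threshold handling below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "neural" data: 40 channels driven by 3 latent factors plus noise,
# mimicking the low-dimensional structure a BMI decoder can exploit.
T, n_ch, n_latent = 2000, 40, 3
latents = rng.standard_normal((T, n_latent))
loading = rng.standard_normal((n_latent, n_ch))
rates = latents @ loading + 0.3 * rng.standard_normal((T, n_ch))

# (1) Normalize firing rates per channel (zero mean, unit variance).
z = (rates - rates.mean(axis=0)) / rates.std(axis=0)

# (2) PCA via SVD; keep just enough components to capture >= 60% of variance.
U, s, Vt = np.linalg.svd(z, full_matrices=False)
var_explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_explained, 0.60)) + 1

scores = z @ Vt[:k].T   # low-dimensional features fed to the Kalman decoder
print(k, float(var_explained[k - 1]))
```

Because the retained components pool many channels, the loss of any single channel perturbs the projected features far less than it perturbs the raw rates, which is the intuition behind the robustness gain.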

A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models
FRONTIERS IN NEURAL CIRCUITS
2013; 7
Abstract
Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure, or stereotypy, of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to the bird's own song (BOS) stimuli.
View details for DOI 10.3389/fncir.2013.00106
View details for Web of Science ID 000320922000001
View details for PubMedID 23801941
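The causal-inverse case above can be illustrated with a linear toy model: under white (random) motor exploration, a Hebbian update pairing each motor command with its delayed sensory consequence drives the sensory-to-motor weights toward E[m s^T] = A^T, which exactly inverts the forward map when A is orthogonal. Everything below (the linear map, dimensions, rates) is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# Forward sensorimotor map: motor activity m causes sensory feedback s = A @ m.
# An orthogonal A makes the correlation-based inverse exact (A^{-1} = A^T).
A = np.linalg.qr(rng.standard_normal((n, n)))[0]

W = np.zeros((n, n))   # sensory-to-motor weights to be learned
eta = 2e-4
for _ in range(50_000):
    m = rng.standard_normal(n)   # random (white) motor exploration
    s = A @ m                    # sensory consequence fed back after the loop delay
    # Eligibility-weighted Hebbian update: pair each sensory input with the
    # motor command that caused it; the decay term stabilizes the weights,
    # standing in for heterosynaptic competition.
    W += eta * (np.outer(m, s) - W)

# W ~ E[m s^T] = A^T, a causal inverse: it maps an arbitrary sensory target
# onto the motor command that reproduces it.
target = rng.standard_normal(n)
m_cmd = W @ target
print(np.abs(A @ m_cmd - target).max())   # reconstruction error (small)
```

With stereotyped (temporally correlated) exploration, E[m s^T] no longer reduces to A^T, which is the origin of the less useful predictive inverses discussed in the abstract.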

Statistical mechanics of complex neural systems and high dimensional data
JOURNAL OF STATISTICAL MECHANICS: THEORY AND EXPERIMENT
2013
View details for DOI 10.1088/1742-5468/2013/03/P03014
View details for Web of Science ID 000316056900014
A memory frontier for complex synapses. Neural Information Processing Systems (NIPS) 2013
Learning hierarchical category structure in deep neural networks. Proceedings of the Cognitive Science Society 2013: 1271-1276
Vocal learning with inverse models. In: Principles of Neural Coding, CRC Press, 2013

Spatial Information Outflow from the Hippocampal Circuit: Distributed Spatial Coding and Phase Precession in the Subiculum
JOURNAL OF NEUROSCIENCE
2012; 32 (34): 11539-11558
Abstract
Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer-scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
View details for DOI 10.1523/JNEUROSCI.5942-11.2012
View details for Web of Science ID 000308140500004
View details for PubMedID 22915100

Compressed Sensing, Sparsity, and Dimensionality in Neuronal Information Processing and Data Analysis
ANNUAL REVIEW OF NEUROSCIENCE, VOL 35
2012; 35: 485-508
Abstract
The curse of dimensionality poses severe challenges to both technical and conceptual progress in neuroscience. In particular, it plagues our ability to acquire, process, and model high-dimensional data sets. Moreover, neural systems must cope with the challenge of processing data in high dimensions to learn and operate successfully within a complex world. We review recent mathematical advances that provide ways to combat dimensionality in specific situations. These advances shed light on two dual questions in neuroscience. First, how can we as neuroscientists rapidly acquire high-dimensional data from the brain and subsequently extract meaningful models from limited amounts of these data? And second, how do brains themselves process information in their intrinsically high-dimensional patterns of neural activity as well as learn meaningful, generalizable models of the external world from limited experience?
View details for DOI 10.1146/annurev-neuro-062111-150410
View details for Web of Science ID 000307960400024
View details for PubMedID 22483042
Short-term memory in neuronal networks through dynamical compressed sensing. Neural Information Processing Systems (NIPS) 2010
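As a concrete instance of the mathematical machinery this review surveys, the sketch below recovers a sparse signal from far fewer random measurements than its ambient dimension, via l1-regularized least squares solved with iterative soft-thresholding (ISTA). The problem sizes, regularization strength, and solver choice are illustrative assumptions, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 100, 40, 5          # signal dimension, measurements (m << n), sparsity

# A k-sparse signal and its compressed measurements through a random
# Gaussian sensing matrix.
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0

# ISTA: gradient step on the quadratic term, then soft-thresholding,
# which solves min_x 0.5 * ||y - A x||^2 + lam * ||x||_1.
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
lam = 0.02
x = np.zeros(n)
for _ in range(2000):
    g = x + A.T @ (y - A @ x) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

print(np.abs(x - x0).max())   # recovery error (should be small)
```

The point of the exercise: with 40 incoherent random measurements, a 5-sparse, 100-dimensional signal is recoverable, which is the "combating dimensionality" phenomenon the review describes.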

Feedforward to the Past: The Relation between Neuronal Connectivity, Amplification, and Short-Term Memory
NEURON
2009; 61 (4): 499-501
Abstract
Two studies in this issue of Neuron challenge widely held assumptions about the role of positive feedback in recurrent neuronal networks. Goldman shows that such feedback is not necessary for memory maintenance in a neural integrator, and Murphy and Miller show that it is not necessary for amplification of orientation patterns in V1. Both suggest that seemingly recurrent networks can be feedforward in disguise.
View details for DOI 10.1016/j.neuron.2009.02.006
View details for Web of Science ID 000263816300004
View details for PubMedID 19249270

Memory traces in dynamical systems
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
2008; 105 (48): 18970-18975
Abstract
To perform nontrivial, real-time computations on a sensory input stream, biological systems must retain a short-term memory trace of their recent inputs. It has been proposed that generic high-dimensional dynamical systems could retain a memory trace for past inputs in their current state. This raises important questions about the fundamental limits of such memory traces and the properties required of dynamical systems to achieve these limits. We address these issues by applying Fisher information theory to dynamical systems driven by time-dependent signals corrupted by noise. We introduce the Fisher Memory Curve (FMC) as a measure of the signal-to-noise ratio (SNR) embedded in the dynamical state relative to the input SNR. The integrated FMC indicates the total memory capacity. We apply this theory to linear neuronal networks and show that the capacity of networks with normal connectivity matrices is exactly 1 and that of any network of N neurons is, at most, N. A nonnormal network achieving this bound is subject to stringent design constraints: it must have a hidden feedforward architecture that superlinearly amplifies its input for a time of order N, and the input connectivity must optimally match this architecture. The memory capacity of networks subject to saturating nonlinearities is further limited and cannot exceed √N. This limit can be realized by feedforward structures with divergent fan-out that distribute the signal across neurons, thereby avoiding saturation. We illustrate the generality of the theory by showing that memory in fluid systems can be sustained by transient nonnormal amplification due to convective instability or the onset of turbulence.
View details for DOI 10.1073/pnas.0804451105
View details for Web of Science ID 000261489100065
View details for PubMedID 19020074
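The two capacity results quoted in the abstract can be checked numerically from the definitions: the integrated Fisher Memory Curve of a network with normal connectivity equals 1 (for unit-norm input weights), while a hidden feedforward delay line of N neurons attains the harmonic sum 1 + 1/2 + ... + 1/N. Network size, spectral radius, and the truncation horizon below are illustrative choices.

```python
import numpy as np

def fisher_memory_capacity(W, v, horizon=500):
    """Integrated Fisher Memory Curve for x(t+1) = W x(t) + v s(t) + noise."""
    n = W.shape[0]
    C = np.zeros((n, n))          # noise covariance C = sum_m W^m (W^m)^T
    M = np.eye(n)
    for _ in range(horizon):
        C += M @ M.T
        M = W @ M
    Cinv = np.linalg.inv(C)
    total, u = 0.0, v.astype(float).copy()
    for _ in range(horizon):      # J(k) = (W^k v)^T C^{-1} (W^k v); sum over k
        total += u @ Cinv @ u
        u = W @ u
    return total

rng = np.random.default_rng(4)
N = 10

# Normal connectivity: a symmetric random matrix scaled to spectral radius 0.9.
S = rng.standard_normal((N, N))
S = (S + S.T) / 2
W_normal = 0.9 * S / np.max(np.abs(np.linalg.eigvalsh(S)))
v = rng.standard_normal(N)
v /= np.linalg.norm(v)

# Non-normal delay line: a hidden feedforward chain driven at its head.
W_chain = np.diag(np.ones(N - 1), k=-1)
e1 = np.zeros(N)
e1[0] = 1.0

cap_normal = fisher_memory_capacity(W_normal, v)   # exactly 1 for normal W
cap_chain = fisher_memory_capacity(W_chain, e1)    # harmonic sum 1 + 1/2 + ... + 1/N
print(cap_normal, cap_chain)
```

The gap between the two numbers is the paper's point: non-normal, effectively feedforward connectivity can hold substantially more memory than any normal network of the same size.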

Onedimensional dynamics of attention and decision making in LIP
NEURON
2008; 58 (1): 15-25
Abstract
Where we allocate our visual spatial attention depends upon a continual competition between internally generated goals and external distractions. Recently it was shown that single neurons in the macaque lateral intraparietal area (LIP) can predict the amount of time a distractor can shift the locus of spatial attention away from a goal. We propose that this remarkable dynamical correspondence between single neurons and attention can be explained by a network model in which generically high-dimensional firing-rate vectors rapidly decay to a single mode. We find direct experimental evidence for this model, not only in the original attentional task, but also in a very different task involving perceptual decision making. These results confirm a theoretical prediction that slowly varying activity patterns are proportional to spontaneous activity, pose constraints on models of persistent activity, and suggest a network mechanism for the emergence of robust behavioral timing from heterogeneous neuronal populations.
View details for DOI 10.1016/j.neuron.2008.01.038
View details for Web of Science ID 000254946200006
View details for PubMedID 18400159

Function constrains network architecture and dynamics: A case study on the yeast cell cycle Boolean network
PHYSICAL REVIEW E
2007; 75 (5)
Abstract
We develop a general method to explore how the function performed by a biological network can constrain both its structural and dynamical network properties. This approach is orthogonal to prior studies which examine the functional consequences of a given structural feature, for example a scale-free architecture. A key step is to construct an algorithm that allows us to efficiently sample from a maximum entropy distribution on the space of Boolean dynamical networks constrained to perform a specific function, or cascade of gene expression. Such a distribution can act as a "functional null model" to test the significance of any given network feature, and can aid in revealing underlying evolutionary selection pressures on various network properties. Although our methods are general, we illustrate them in an analysis of the yeast cell cycle cascade. This analysis uncovers strong constraints on the architecture of the cell cycle regulatory network as well as significant selection pressures on this network to maintain ordered and convergent dynamics, possibly at the expense of sacrificing robustness to structural perturbations.
View details for DOI 10.1103/PhysRevE.75.051907
View details for Web of Science ID 000246890100094
View details for PubMedID 17677098
E10 Orbifolds. Journal of High Energy Physics 2005; 06 (057)

Twisted six dimensional gauge theories on tori, matrix models, and integrable systems
JOURNAL OF HIGH ENERGY PHYSICS
2004
View details for Web of Science ID 000225279400057

Holographic protection of chronology in universes of the Gödel type
PHYSICAL REVIEW D
2003; 67 (10)
View details for DOI 10.1103/PhysRevD.67.106003
View details for Web of Science ID 000183377200098