Scott W Linderman
Assistant Professor of Statistics
Bio
Scott Linderman, PhD, is an Assistant Professor at Stanford University in the Statistics Department and the Wu Tsai Neurosciences Institute, as well as the Co-Director of the Stanford Center for Neural Data Science. His research focuses on machine learning, computational neuroscience, and the general question of how computational and statistical methods can help to decipher neural computation. His work combines novel methodological development in the areas of state space models, deep generative models, point processes, and approximate Bayesian inference with applied statistical analyses of large-scale neural and behavioral data. Previously, he was a postdoctoral fellow at Columbia University and a graduate student at Harvard University. His work has been recognized with a Savage Award from the International Society for Bayesian Analysis, an AISTATS Best Paper Award, an NSF CAREER Award, and fellowships from the McKnight, Sloan, and Simons Foundations.
Academic Appointments
- Assistant Professor, Statistics
- Member, Bio-X
- Member, Wu Tsai Neurosciences Institute
Honors & Awards
- NSF CAREER Award, National Science Foundation (2025)
- McKnight Scholar Award, McKnight Foundation (2023)
- Sloan Research Fellowship, Sloan Foundation (2022)
- Next Generation Leader, Allen Institute for Brain Science (2019-2022)
- Best Paper, International Conference on Artificial Intelligence and Statistics (2017)
- Postdoctoral Fellow, Simons Collaboration on the Global Brain (2016-2019)
- Leonard J. Savage Award for Outstanding Dissertation in Applied Methodology, International Society for Bayesian Analysis (2016)
- National Defense Science and Engineering Graduate Fellow, Department of Defense (2011-2014)
Professional Education
- PhD, Harvard University, Computer Science (2016)
- SM, Harvard University, Computer Science (2013)
- BS, Cornell University, Electrical and Computer Engineering (2008)
2025-26 Courses
- Applied Statistics III - STATS 305C (Spr)
Independent Studies (10)
- Advanced Reading and Research - CS 499 (Aut, Win, Spr, Sum)
- Advanced Reading and Research - CS 499P (Aut, Win, Spr, Sum)
- Directed Reading in Neurosciences - NEPR 299 (Aut, Win, Spr, Sum)
- Directed Studies in Applied Physics - APPPHYS 290 (Aut, Win, Spr, Sum)
- Directed Study - BIOE 391 (Aut, Win, Spr, Sum)
- Independent Study - STATS 299 (Aut, Win, Spr, Sum)
- Out-of-Department Advanced Research Laboratory in Bioengineering - BIOE 191X (Aut, Win, Spr, Sum)
- Research - STATS 399 (Aut, Win, Spr, Sum)
- Senior Project - CS 191 (Aut, Win)
- Writing Intensive Senior Research Project - CS 191W (Aut, Win)
Prior Year Courses
2024-25 Courses
- Applied Statistics II - STATS 305B (Win)
- Machine Learning Methods for Neural Data Analysis - CS 339N, NBIO 220, STATS 220, STATS 320 (Spr)
2023-24 Courses
- Applied Statistics II - STATS 305B (Win)
2022-23 Courses
- Applied Statistics III - STATS 305C (Spr)
- Machine Learning Methods for Neural Data Analysis - CS 339N, NBIO 220, STATS 220, STATS 320 (Win)
- Applied Statistics II
Stanford Advisees
- Doctoral Dissertation Reader (AC): Luis Chumpitaz Diaz, Lucas Encarnacion-Rivera, Youssef Faragalla, Elizabeth Jun, Rennie Kendrick, Lavonna Mark, Christopher Minasi
- Orals Chair: Simran Arora, Sabri Eyuboglu
- Postdoctoral Faculty Sponsor: Kelly Buchanan, Aditi Jha, David Zoltowski, Nicolas Zucchet
- Doctoral Dissertation Advisor (AC): Noah Cowan, Xavier Gonzalez, Amber Hu, Etaash Katiyar, Hannah Lee, Jakub Smékal, Ian Christopher Tanoh
- Master's Program Advisor: Francis Chua, Nicole Segaran
- Doctoral Dissertation Co-Advisor (AC): Henry Smith
- Doctoral (Program): Hyun Dong Lee, Alisa Levin
All Publications
- Spontaneous behavior is a succession of self-directed tasks.
Neuron
2026
Abstract
Animals achieve high-level goals by sequencing low-level actions. This transformation is best understood in structured tasks that impose a specific mapping between goals and actions. However, it remains unclear whether spontaneous behavior is similarly organized in the service of identifiable goals or how it might be supported by brain regions responsible for goal-oriented behavior, such as the prefrontal cortex (PFC). Here, we show that low-level actions in freely exploring mice are hierarchically organized into seconds-long behavioral states that correspond to task-like programs of behavior. These persistent states structure neural activity in the PFC, which preferentially encodes the identity of states relative to low-level behavioral features and shapes which states are expressed in a given context. These findings argue that spontaneous behavior is organized as a succession of self-directed tasks and identify principles of neural control that are common to structured tasks and spontaneous exploration.
View details for DOI 10.1016/j.neuron.2025.11.021
View details for PubMedID 41610841
- The neural computation of affective internal states in the hypothalamus: A dynamical systems perspective.
Neuron
2025; 113 (23): 3887-3907
Abstract
Internal affective states accompany evolutionarily ancient survival behaviors such as mating, aggression, and predator defense and may contribute to emotional feelings in humans. In this perspective, we introduce a dynamical system framework for thinking about such states. We synthesize evidence from recent studies suggesting that key state features, such as their intensity and duration, may be encoded by approximate line attractor manifolds in the hypothalamus. Evidence for these attractors arises from unsupervised data-driven dynamical system modeling of high-dimensional calcium imaging data from genetically identified cell populations in freely behaving mice. Dissection of the fit dynamical models and closed-loop modeling with experimental perturbations raise new questions regarding circuit- and cellular-level mechanisms of attractor implementation. These findings challenge prevailing views of hypothalamic behavioral control and afford a new avenue to study the emergence of slow state-encoding neural dynamics across scales, from single neurons to recurrent networks and neuromodulatory signaling.
View details for DOI 10.1016/j.neuron.2025.11.003
View details for PubMedID 41344293
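The approximate line attractor at the center of this perspective has a compact mathematical signature: a dynamical system with one near-integrating (slow) dimension and fast relaxation in all others. A minimal sketch, assuming illustrative time constants (this is not code from the paper):

```python
import numpy as np

# A 2D linear dynamical system with one eigenvalue near 1 forms an
# approximate line attractor: the slow mode integrates inputs and decays
# with a long time constant, while the other dimension relaxes quickly.
dt = 0.1
tau_slow, tau_fast = 50.0, 0.5          # illustrative time constants (s)
A = np.diag([np.exp(-dt / tau_slow), np.exp(-dt / tau_fast)])

x = np.zeros(2)
xs = []
for t in range(600):
    u = np.array([1.0, 1.0]) if t < 50 else np.zeros(2)   # transient input
    x = A @ x + u * dt
    xs.append(x.copy())
xs = np.array(xs)
# After the input ends, the fast dimension collapses within ~1 s while the
# slow dimension persists for ~50 s, so the state hugs a line in state space
# whose position encodes the integrated input, e.g. the intensity of a state.
```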
- Life-long behavioral screen reveals an architecture of vertebrate aging.
bioRxiv : the preprint server for biology
2025
Abstract
Mapping the behavior of individual vertebrate animals across the lifespan is challenging, but if achieved, could provide an unprecedented view into the life-long process of aging. We created the first platform for high-resolution continuous behavioral tracking of a vertebrate animal across its natural lifespan from adolescence to death; here, the African killifish. This behavioral screen revealed that animals follow distinct individual aging trajectories. The behaviors of long-lived animals differed markedly from those of short-lived animals, even relatively early in life, and were linked to organ-specific transcriptomic shifts. Machine learning models accurately predicted age and even forecasted an individual's future lifespan, given only behavior at a young age. Finally, we found that animals progressed through adulthood in a sequence of stable and stereotyped behavioral stages with abrupt transitions, suggesting a novel structure for the architecture of vertebrate aging.
View details for DOI 10.1101/2025.11.21.688112
View details for PubMedID 41332731
View details for PubMedCentralID PMC12667948
- Competitive integration of time and reward explains value-sensitive foraging decisions and frontal cortex ramping dynamics.
Neuron
2025
Abstract
Patch foraging is a ubiquitous decision-making process in which animals decide when to abandon a resource patch of diminishing value to pursue an alternative. We developed a virtual foraging task in which mouse behavior varied systematically with patch value. Behavior could be explained by models integrating time and rewards antagonistically, scaled by a slowly varying latent patience state. Describing a mechanism rather than a normative prescription, these models quantitatively captured deviations from optimal foraging theory. Neuropixels recordings throughout frontal areas revealed distributed ramping signals, concentrated in the frontal cortex, from which multiple integrator models' decision variables could be decoded equally well. These signals reflected key aspects of decision models: they ramped gradually, responded oppositely to time and rewards, were sensitive to patch richness, and retained memory of reward history. Together, these results identify integration via frontal cortex ramping dynamics as a candidate mechanism for solving patch-foraging problems.
View details for DOI 10.1016/j.neuron.2025.07.008
View details for PubMedID 40780211
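The antagonistic integration of time and reward described in this abstract can be sketched as a ramp-to-threshold process. The toy simulation below assumes illustrative parameters (beta_time, beta_reward, patience, and threshold are hypothetical values, not fitted quantities from the paper):

```python
import numpy as np

# Hedged sketch of a ramp-to-threshold integrator: elapsed time pushes a
# decision variable (DV) up, rewards push it down, and the animal leaves
# the patch when the DV crosses threshold.
def patch_leaving_time(reward_times, beta_time=1.0, beta_reward=4.0,
                       patience=1.0, threshold=5.0, dt=0.1, t_max=60.0):
    dv, t = 0.0, 0.0
    while t < t_max:
        dv += (beta_time / patience) * dt        # time ramps the DV upward
        if any(abs(t - r) < dt / 2 for r in reward_times):
            dv -= beta_reward                    # each reward knocks it down
        if dv >= threshold:
            return t                             # leave the patch
        t += dt
    return t_max

# Richer patches should hold the animal longer:
print(patch_leaving_time([2, 4, 6, 8, 10]))      # ~25 s
print(patch_leaving_time([2, 4]))                # ~13 s
```

A slowly drifting "patience" parameter, as in the paper's models, would rescale the time term from session to session.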
- Concerted actions of distinct serotonin neurons orchestrate female pup care behavior.
Research square
2025
Abstract
In many mammalian species, female behavior towards infant conspecifics changes across reproductive stages. Sexually naive females interact minimally or aggressively with infants, whereas the same animals exhibit extensive care behavior, even towards unrelated infants, after parturition1-6. Here, we discovered that two distinct sets of serotonin neurons collectively mediate this dramatic transition in maternal behavior: serotonin neurons projecting to the medial preoptic area (mPOA) promote pup care in mothers, whereas those projecting to the bed nucleus of the stria terminalis (BNST) suppress pup interaction in virgin female mice. Disrupting serotonin synthesis in either of these subpopulations or stimulating either subpopulation is sufficient to toggle pup-directed behavior between that displayed by virgin females and that of lactating mothers. In virgin female mice, the first pup interaction triggers an increase in serotonin release in BNST but a decrease in mPOA. In mothers, serotonin activity becomes greatly elevated in mPOA during pup interactions. Acute interruption of serotonin signaling locally in either mPOA or BNST disrupts the stage-dependent switch in pup care. Together, these results highlight how functionally distinct serotonin subpopulations orchestrate social behaviors appropriate for a given reproductive state, and suggest a circuit logic for how a neuromodulator coordinates adaptive behavioral changes across life stages.
View details for DOI 10.21203/rs.3.rs-7134286/v1
View details for PubMedID 40799751
- Concerted actions of distinct serotonin neurons orchestrate female pup care behavior.
bioRxiv : the preprint server for biology
2025
Abstract
In many mammalian species, female behavior towards infant conspecifics changes across reproductive stages. Sexually naïve females interact minimally or aggressively with infants, whereas the same animals exhibit extensive care behavior, even towards unrelated infants, after parturition1-6. Here, we discovered that two distinct sets of serotonin neurons collectively mediate this dramatic transition in maternal behavior: serotonin neurons projecting to the medial preoptic area (mPOA) promote pup care in mothers, whereas those projecting to the bed nucleus of the stria terminalis (BNST) suppress pup interaction in virgin female mice. Disrupting serotonin synthesis in either of these subpopulations or stimulating either subpopulation is sufficient to toggle pup-directed behavior between that displayed by virgin females and that of lactating mothers. In virgin female mice, the first pup interaction triggers an increase in serotonin release in BNST but a decrease in mPOA. In mothers, serotonin activity becomes greatly elevated in mPOA during pup interactions. Acute interruption of serotonin signaling locally in either mPOA or BNST disrupts the stage-dependent switch in pup care. Together, these results highlight how functionally distinct serotonin subpopulations orchestrate social behaviors appropriate for a given reproductive state, and suggest a circuit logic for how a neuromodulator coordinates adaptive behavioral changes across life stages.
View details for DOI 10.1101/2025.07.31.667987
View details for PubMedID 40766669
View details for PubMedCentralID PMC12324472
- A Bayesian hierarchical model of trial-to-trial fluctuations in decision criterion.
PLoS computational biology
2025; 21 (7): e1013291
Abstract
Classical decision models assume that the parameters giving rise to choice behavior are stable, yet emerging research suggests these parameters may fluctuate over time. Such fluctuations, observed in neural activity and behavioral strategies, have significant implications for understanding decision-making processes. However, empirical studies on fluctuating human decision-making strategies have been limited due to the extensive data requirements for estimating these fluctuations. Here, we introduce hMFC (Hierarchical Model for Fluctuations in Criterion), a Bayesian framework designed to estimate slow fluctuations in the decision criterion from limited data. We first showcase the importance of considering fluctuations in decision criterion: incorrectly assuming a stable criterion gives rise to apparent history effects and underestimates perceptual sensitivity. We then present a hierarchical estimation procedure capable of reliably recovering the underlying state of the fluctuating decision criterion with as few as 500 trials per participant, offering a robust tool for researchers with typical human datasets. Critically, hMFC not only accurately recovers the state of the underlying decision criterion but also effectively deals with the confounds caused by criterion fluctuations. Lastly, we provide code and a comprehensive demo to enable widespread application of hMFC in decision-making research.
View details for DOI 10.1371/journal.pcbi.1013291
View details for PubMedID 40729408
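A generative sketch in the spirit of hMFC, assuming a slow AR(1) drift on the criterion inside a standard signal detection model (all parameter values are illustrative; this is not the released package):

```python
import numpy as np

rng = np.random.default_rng(0)

# Trial-to-trial criterion drift within a signal detection model.
n_trials = 500
rho, sigma = 0.99, 0.05          # criterion persistence and innovation scale
d_prime = 1.5                    # perceptual sensitivity

criterion = np.zeros(n_trials)
for t in range(1, n_trials):
    criterion[t] = rho * criterion[t - 1] + sigma * rng.normal()

stimulus = rng.integers(0, 2, size=n_trials)        # 0 = noise, 1 = signal
evidence = d_prime * stimulus + rng.normal(size=n_trials)
choice = (evidence > criterion).astype(int)         # report "signal" if above
# Fitting a fixed criterion to (stimulus, choice) generated this way produces
# spurious history effects and underestimates d'; recovering the criterion
# path from such data is the inference problem hMFC addresses hierarchically.
```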
- Computation-through-Dynamics Benchmark: Simulated datasets and quality metrics for dynamical models of neural activity.
bioRxiv : the preprint server for biology
2025
Abstract
A primary goal of systems neuroscience is to discover how ensembles of neurons transform inputs into goal-directed behavior, a process known as neural computation. A powerful framework for understanding neural computation uses neural dynamics - the rules that describe the temporal evolution of neural activity - to explain how goal-directed input-output transformations occur. As dynamical rules are not directly observable, we need computational models that can infer neural dynamics from recorded neural activity. We typically validate such models using synthetic datasets with known ground-truth dynamics, but unfortunately existing synthetic datasets don't reflect fundamental features of neural computation and are thus poor proxies for neural systems. Further, the field lacks validated metrics for quantifying the accuracy of the dynamics inferred by models. The Computation-through-Dynamics Benchmark (CtDB) fills these critical gaps by providing: 1) synthetic datasets that reflect computational properties of biological neural circuits, 2) interpretable metrics for quantifying model performance, and 3) a standardized pipeline for training and evaluating models with or without known external inputs. In this manuscript, we demonstrate how CtDB can help guide the development, tuning, and troubleshooting of neural dynamics models. In summary, CtDB provides a critical platform for model developers to better understand and characterize neural computation through the lens of dynamics.
View details for DOI 10.1101/2025.02.07.637062
View details for PubMedID 39975240
View details for PubMedCentralID PMC11839132
- Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems.
ArXiv
2025
Abstract
Understanding how the collective activity of neural populations relates to computation and ultimately behavior is a key goal in neuroscience. To this end, statistical methods which describe high-dimensional neural time series in terms of low-dimensional latent dynamics have played a fundamental role in characterizing neural systems. Yet, what constitutes a successful method involves two opposing criteria: (1) methods should be expressive enough to capture complex nonlinear dynamics, and (2) they should maintain a notion of interpretability often only warranted by simpler linear models. In this paper, we develop an approach that balances these two objectives: the Gaussian Process Switching Linear Dynamical System (gpSLDS). Our method builds on previous work modeling the latent state evolution via a stochastic differential equation whose nonlinear dynamics are described by a Gaussian process (GP-SDEs). We propose a novel kernel function which enforces smoothly interpolated locally linear dynamics, and therefore expresses flexible - yet interpretable - dynamics akin to those of recurrent switching linear dynamical systems (rSLDS). Our approach resolves key limitations of the rSLDS such as artifactual oscillations in dynamics near discrete state boundaries, while also providing posterior uncertainty estimates of the dynamics. To fit our models, we leverage a modified learning objective which improves the estimation accuracy of kernel hyperparameters compared to previous GP-SDE fitting approaches. We apply our method to synthetic data and data recorded in two neuroscience experiments and demonstrate favorable performance in comparison to the rSLDS.
View details for PubMedID 39876935
View details for PubMedCentralID PMC11774443
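The class of vector fields the gpSLDS is designed to express, locally linear dynamics smoothly interpolated across state space, can be illustrated by hand. The sketch below blends two linear regimes with softmax weights; it is an assumption-laden illustration of the dynamics class, not the model or its kernel:

```python
import numpy as np

# Smoothly interpolated locally linear dynamics: each region of state space
# has its own linear law A_k x + b_k, and softmax weights blend them.
def smooth_switching_dynamics(x, As, bs, Ws, cs, temp=1.0):
    logits = np.array([w @ x + c for w, c in zip(Ws, cs)]) / temp
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                 # soft partition of state space
    return sum(p * (A @ x + b) for p, A, b in zip(weights, As, bs))

# Two rotational regimes with opposite spin, blended along the first axis:
As = [np.array([[0., -1.], [1., 0.]]), np.array([[0., 1.], [-1., 0.]])]
bs = [np.zeros(2), np.zeros(2)]
Ws = [np.array([1., 0.]), np.array([-1., 0.])]
cs = [0.0, 0.0]
dx = smooth_switching_dynamics(np.array([0.5, 0.2]), As, bs, Ws, cs)
```

Because the blending is smooth, the vector field avoids the hard discontinuities at state boundaries that produce the artifactual oscillations the abstract attributes to the rSLDS.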
- Structured flexibility in recurrent neural networks via neuromodulation.
Advances in neural information processing systems
2024; 2024: 1954-1972
Abstract
A core aim in theoretical and systems neuroscience is to develop models that help us better understand biological intelligence. Such models range broadly in both complexity and biological plausibility. One widely-adopted example is task-optimized recurrent neural networks (RNNs), which have been used to generate hypotheses about how the brain's neural dynamics may organize to accomplish tasks. However, task-optimized RNNs typically have a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. In this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs. In our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. In empirical experiments, we find that the structured flexibility in the NM-RNN allows it to both train and generalize with a higher degree of accuracy than low-rank RNNs on a set of canonical tasks. Additionally, via theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. We end by analyzing the low-rank dynamics of trained NM-RNNs, to show how task computations are distributed.
View details for DOI 10.52202/079017-0062
View details for PubMedID 41200132
View details for PubMedCentralID PMC12588093
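A toy forward pass conveying the NM-RNN construction described above. The sizes, nonlinearities, and the coupling of the neuromodulatory signal to the network state are simplifying assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# A low-dimensional neuromodulatory signal s(t) rescales the rank-r
# recurrent weights U diag(s) V^T of an output-generating RNN.
n, r = 50, 3                                    # neurons, recurrent rank
U = rng.normal(size=(n, r)) / np.sqrt(n)
V = rng.normal(size=(n, r)) / np.sqrt(n)
W_nm = rng.normal(size=(r, n)) / np.sqrt(n)     # stand-in for the NM subnetwork

def step(h, dt=0.1):
    s = 1.0 + np.tanh(W_nm @ h)                 # low-dim neuromodulatory gains
    W_eff = (U * s) @ V.T                       # == U @ np.diag(s) @ V.T
    return h + dt * (-h + np.tanh(W_eff @ h))

h = rng.normal(size=n)
for _ in range(200):
    h = step(h)
```

Multiplicatively gating the recurrent weights in this way is what the theoretical analyses relate to the gating mechanisms of standard gated RNNs.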
- Spatiotemporal Clustering with Neyman-Scott Processes via Connections to Bayesian Nonparametric Mixture Models.
Journal of the American Statistical Association
2024; 119 (547): 2382-2395
Abstract
Neyman-Scott processes (NSPs) are point process models that generate clusters of points in time or space. They are natural models for a wide range of phenomena, ranging from neural spike trains to document streams. The clustering property is achieved via a doubly stochastic formulation: first, a set of latent events is drawn from a Poisson process; then, each latent event generates a set of observed data points according to another Poisson process. This construction is similar to Bayesian nonparametric mixture models like the Dirichlet process mixture model (DPMM) in that the number of latent events (i.e. clusters) is a random variable, but the point process formulation makes the NSP especially well suited to modeling spatiotemporal data. While many specialized algorithms have been developed for DPMMs, comparatively fewer works have focused on inference in NSPs. Here, we present novel connections between NSPs and DPMMs, with the key link being a third class of Bayesian mixture models called mixture of finite mixture models (MFMMs). Leveraging this connection, we adapt the standard collapsed Gibbs sampling algorithm for DPMMs to enable scalable Bayesian inference on NSP models. We demonstrate the potential of Neyman-Scott processes on a variety of applications including sequence detection in neural spike trains and event detection in document streams.
View details for DOI 10.1080/01621459.2023.2257896
View details for PubMedID 39308788
View details for PubMedCentralID PMC11412414
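The doubly stochastic construction described in this abstract is straightforward to sample from. A sketch with illustrative parameters (window length, rates, and jitter are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sampling a temporal Neyman-Scott process: latent "parent" events from a
# Poisson process, then observed points scattered around each parent.
T = 100.0                 # observation window
rate_latent = 0.1         # intensity of the latent Poisson process
mean_children = 20        # expected observed points per latent event
jitter = 1.0              # temporal spread of children around their parent

n_latent = rng.poisson(rate_latent * T)
latent_times = rng.uniform(0, T, n_latent)          # step 1: latent events
points = []
for t0 in latent_times:                             # step 2: observed points
    n_children = rng.poisson(mean_children)
    points.extend(t0 + jitter * rng.normal(size=n_children))
points = np.sort([s for s in points if 0 <= s <= T])
# Each latent event plays the role of a cluster; inference must recover the
# number and locations of latent events from the observed points alone,
# which is where the connection to mixture-of-finite-mixture models enters.
```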
- A Bayesian Hierarchical Model of Trial-To-Trial Fluctuations in Decision Criterion.
bioRxiv : the preprint server for biology
2024
Abstract
Classical decision models assume that the parameters giving rise to choice behavior are stable, yet emerging research suggests these parameters may fluctuate over time. Such fluctuations, observed in neural activity and behavioral strategies, have significant implications for understanding decision-making processes. However, empirical studies on fluctuating human decision-making strategies have been limited due to the extensive data requirements for estimating these fluctuations. Here, we introduce hMFC (Hierarchical Model for Fluctuations in Criterion), a Bayesian framework designed to estimate slow fluctuations in the decision criterion from limited data. We first showcase the importance of considering fluctuations in decision criterion: incorrectly assuming a stable criterion gives rise to apparent history effects and underestimates perceptual sensitivity. We then present a hierarchical estimation procedure capable of reliably recovering the underlying state of the fluctuating decision criterion with as few as 500 trials per participant, offering a robust tool for researchers with typical human datasets. Critically, hMFC not only accurately recovers the state of the underlying decision criterion but also effectively deals with the confounds caused by criterion fluctuations. Lastly, we provide code and a comprehensive demo at www.github.com/robinvloeberghs/hMFC to enable widespread application of hMFC in decision-making research.
View details for DOI 10.1101/2024.07.30.605869
View details for PubMedID 39211219
View details for PubMedCentralID PMC11361103
- A line attractor encoding a persistent internal state requires neuropeptide signaling.
Cell
2024
Abstract
Internal states drive survival behaviors, but their neural implementation is poorly understood. Recently, we identified a line attractor in the ventromedial hypothalamus (VMH) that represents a state of aggressiveness. Line attractors can be implemented by recurrent connectivity or neuromodulatory signaling, but evidence for the latter is scant. Here, we demonstrate that neuropeptidergic signaling is necessary for line attractor dynamics in this system by using cell-type-specific CRISPR-Cas9-based gene editing combined with single-cell calcium imaging. Co-disruption of receptors for oxytocin and vasopressin in adult VMH Esr1+ neurons that control aggression diminished attack, reduced persistent neural activity, and eliminated line attractor dynamics while only slightly reducing overall neural activity and sex- or behavior-specific tuning. These data identify a requisite role for neuropeptidergic signaling in implementing a behaviorally relevant line attractor in mammals. Our approach should facilitate mechanistic studies in neuroscience that bridge different levels of biological function and abstraction.
View details for DOI 10.1016/j.cell.2024.08.015
View details for PubMedID 39191257
- Encoding of female mating dynamics by a hypothalamic line attractor.
Nature
2024
Abstract
Females exhibit complex, dynamic behaviors during mating with variable sexual receptivity depending on hormonal status1-4. However, how their brains encode the dynamics of mating and receptivity remains largely unknown. The ventrolateral subdivision of the ventromedial hypothalamus contains estrogen receptor type 1-positive neurons that control mating receptivity in female mice5,6. Unsupervised dynamical systems analysis of calcium imaging data from these neurons during mating uncovered a dimension with slow ramping activity, generating a line attractor in neural state space. Neural perturbations in behaving females demonstrated relaxation of population activity back into the attractor. During mating, population activity integrated male cues to ramp up along this attractor, peaking just before ejaculation. Activity in the attractor dimension was positively correlated with the degree of receptivity. Longitudinal imaging revealed that attractor dynamics appear and disappear across the estrus cycle and are hormone-dependent. These observations suggest that a hypothalamic line attractor encodes a persistent, escalating state of female sexual arousal or drive during mating. They also demonstrate that attractors can be reversibly modulated by hormonal status, on a timescale of days.
View details for DOI 10.1038/s41586-024-07916-w
View details for PubMedID 39142338
- Causal evidence of a line attractor encoding an affective state.
Nature
2024
Abstract
Line attractors are emergent population dynamics hypothesized to encode continuous variables such as head direction and internal states1-4. In mammals, direct evidence of neural implementation of a line attractor has been hindered by the challenge of targeting perturbations to specific neurons within contributing ensembles2,3. Linear dynamical systems modeling has revealed that neurons in the hypothalamus exhibit approximate line attractor dynamics in male mice during aggressive encounters5. We have previously hypothesized that these dynamics may encode the variable intensity of an aggressive internal motive state. Here, we report that these neurons also showed line attractor dynamics in head-fixed mice observing aggression6. We identified and perturbed line attractor-contributing neurons using 2-photon calcium imaging and holographic optogenetic perturbations. On-manifold perturbations yielded integration and persistent activity that drove the system along the line attractor, while transient off-manifold perturbations were followed by rapid relaxation back into the attractor. Furthermore, single-cell stimulation and imaging revealed selective functional connectivity among attractor-contributing neurons. Intriguingly, individual differences among mice in line attractor stability were correlated with the degree of functional connectivity among attractor neurons. Mechanistic RNN modelling indicated that dense subnetwork connectivity and slow neurotransmission7 best recapitulate our empirical findings. Our work bridges circuit and manifold levels3, providing causal evidence of continuous attractor dynamics encoding an affective internal state in the mammalian hypothalamus.
View details for DOI 10.1038/s41586-024-07915-x
View details for PubMedID 39142337
- Structured flexibility in recurrent neural networks via neuromodulation.
bioRxiv : the preprint server for biology
2024
Abstract
The goal of theoretical neuroscience is to develop models that help us better understand biological intelligence. Such models range broadly in complexity and biological detail. For example, task-optimized recurrent neural networks (RNNs) have generated hypotheses about how the brain may perform various computations, but these models typically assume a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. In this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs. In our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. In empirical experiments, we find that the structured flexibility in the NM-RNN allows it to both train and generalize with a higher degree of accuracy than low-rank RNNs on a set of canonical tasks. Additionally, via theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. We end by analyzing the low-rank dynamics of trained NM-RNNs, to show how task computations are distributed.
View details for DOI 10.1101/2024.07.26.605315
View details for PubMedID 39091788
- Explaining dopamine through prediction errors and beyond.
Nature neuroscience
2024
Abstract
The most influential account of phasic dopamine holds that it reports reward prediction errors (RPEs). The RPE-based interpretation of dopamine signaling is, in its original form, probably too simple and fails to explain all the properties of phasic dopamine observed in behaving animals. This Perspective helps to resolve some of the conflicting interpretations of dopamine that currently exist in the literature. We focus on the following three empirical challenges to the RPE theory of dopamine: why does dopamine (1) ramp up as animals approach rewards, (2) respond to sensory and motor features and (3) influence action selection? We argue that the prediction error concept, once it has been suitably modified and generalized based on an analysis of each computational problem, answers each challenge. Nonetheless, there are a number of additional empirical findings that appear to demand fundamentally different theoretical explanations beyond encoding RPE. Therefore, looking forward, we discuss the prospects for a unifying theory that respects the diversity of dopamine signaling and function as well as the complex circuitry that both underlies and responds to dopaminergic transmission.
View details for DOI 10.1038/s41593-024-01705-4
View details for PubMedID 39054370
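For reference, the reward prediction error at the heart of this account is the temporal-difference error delta_t = r_t + gamma * V(s_{t+1}) - V(s_t). A minimal tabular TD(0) simulation on an illustrative chain task (the task and parameters are assumptions for the example, not from the paper):

```python
import numpy as np

# Tabular TD(0): values are learned by nudging V(s) along the prediction
# error, the signal classically identified with phasic dopamine.
gamma, alpha = 0.98, 0.1
n_states, n_episodes = 10, 300
V = np.zeros(n_states + 1)                   # value table; terminal at end

for _ in range(n_episodes):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0      # reward at the final state
        delta = r + gamma * V[s + 1] - V[s]        # the prediction error
        V[s] += alpha * delta
# Early in training the error is concentrated at the reward; as values
# propagate backward it migrates to earlier predictive states, the classic
# RPE signature that the Perspective argues is too simple on its own.
```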
- Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics.
Nature methods
2024; 21 (7): 1329-1339
Abstract
Keypoint tracking algorithms can flexibly quantify animal movement from videos obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into discrete actions. This challenge is particularly acute because keypoint data are susceptible to high-frequency jitter that clustering algorithms can mistake for transitions between actions. Here we present keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules ('syllables') from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to identify syllables whose boundaries correspond to natural sub-second discontinuities in pose dynamics. Keypoint-MoSeq outperforms commonly used alternative clustering methods at identifying these transitions, at capturing correlations between neural activity and behavior and at classifying either solitary or social behaviors in accordance with human annotations. Keypoint-MoSeq also works in multiple species and generalizes beyond the syllable timescale, identifying fast sniff-aligned movements in mice and a spectrum of oscillatory behaviors in fruit flies. Keypoint-MoSeq, therefore, renders accessible the modular structure of behavior through standard video recordings.
View details for DOI 10.1038/s41592-024-02318-2
View details for PubMedID 38997595
View details for PubMedCentralID PMC9007740
- Intrinsic Dynamics and Neural Implementation of a Hypothalamic Line Attractor Encoding an Internal Behavioral State.
bioRxiv : the preprint server for biology
2024
Abstract
Line attractors are emergent population dynamics hypothesized to encode continuous variables such as head direction and internal states. In mammals, direct evidence of neural implementation of a line attractor has been hindered by the challenge of targeting perturbations to specific neurons within contributing ensembles. Estrogen receptor type 1 (Esr1)-expressing neurons in the ventrolateral subdivision of the ventromedial hypothalamus (VMHvl) show line attractor dynamics in male mice during fighting. We hypothesized that these dynamics may encode continuous variation in the intensity of an internal aggressive state. Here, we report that these neurons also show line attractor dynamics in head-fixed mice observing aggression. We exploit this finding to identify and perturb line attractor-contributing neurons using 2-photon calcium imaging and holographic optogenetic perturbations. On-manifold perturbations demonstrate that integration and persistent activity are intrinsic properties of these neurons which drive the system along the line attractor, while transient off-manifold perturbations reveal rapid relaxation back into the attractor. Furthermore, stimulation and imaging reveal selective functional connectivity among attractor-contributing neurons. Intriguingly, individual differences among mice in line attractor stability were correlated with the degree of functional connectivity among contributing neurons. Mechanistic modelling indicates that dense subnetwork connectivity and slow neurotransmission are required to explain our empirical findings. Our work bridges circuit and manifold paradigms, shedding light on the intrinsic and operational dynamics of a behaviorally relevant mammalian line attractor.
View details for DOI 10.1101/2024.05.21.595051
View details for PubMedID 38826298
- Emergence of belief-like representations through reinforcement learning.
PLoS computational biology
2023; 19 (9): e1011067
Abstract
To behave adaptively, animals must learn to predict future reward, or value. To do this, animals are thought to learn reward predictions using reinforcement learning. However, in contrast to classical models, animals must learn to estimate value using only incomplete state information. Previous work suggests that animals estimate value in partially observable tasks by first forming "beliefs" (optimal Bayesian estimates of the hidden states in the task). Although this is one way to solve the problem of partial observability, it is not the only way, nor is it the most computationally scalable solution in complex, real-world environments. Here we show that a recurrent neural network (RNN) can learn to estimate value directly from observations, generating reward prediction errors that resemble those observed experimentally, without any explicit objective of estimating beliefs. We integrate statistical, functional, and dynamical systems perspectives on beliefs to show that the RNN's learned representation encodes belief information, but only when the RNN's capacity is sufficiently large. These results illustrate how animals can estimate value in tasks without explicitly estimating beliefs, yielding a representation useful for systems with limited capacity.
View details for DOI 10.1371/journal.pcbi.1011067
View details for PubMedID 37695776
- Competitive integration of time and reward explains value-sensitive foraging decisions and frontal cortex ramping dynamics.
bioRxiv : the preprint server for biology
2023
Abstract
The ability to make advantageous decisions is critical for animals to ensure their survival. Patch foraging is a natural decision-making process in which animals decide when to leave a patch of depleting resources to search for a new one. To study the algorithmic and neural basis of patch foraging behavior in a controlled laboratory setting, we developed a virtual foraging task for head-fixed mice. Mouse behavior could be explained by ramp-to-threshold models integrating time and rewards antagonistically. Accurate behavioral modeling required inclusion of a slowly varying "patience" variable, which modulated sensitivity to time. To investigate the neural basis of this decision-making process, we performed dense electrophysiological recordings with Neuropixels probes broadly throughout frontal cortex and underlying subcortical areas. We found that decision variables from the reward integrator model were represented in neural activity, most robustly in frontal cortical areas. Regression modeling followed by unsupervised clustering identified a subset of neurons with ramping activity. These neurons' firing rates ramped up gradually in single trials over long time scales (up to tens of seconds), were inhibited by rewards, and were better described as being generated by a continuous ramp rather than a discrete stepping process. Together, these results identify reward integration via a continuous ramping process in frontal cortex as a likely candidate for the mechanism by which the mammalian brain solves patch foraging problems.
View details for DOI 10.1101/2023.09.05.556267
View details for PubMedID 37732217
View details for PubMedCentralID PMC10508756
- Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics.
bioRxiv : the preprint server for biology
2023
Abstract
Keypoint tracking algorithms have revolutionized the analysis of animal behavior, enabling investigators to flexibly quantify behavioral dynamics from conventional video recordings obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into the modules out of which behavior is organized. This challenge is particularly acute because keypoint data is susceptible to high frequency jitter that clustering algorithms can mistake for transitions between behavioral modules. Here we present keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules ("syllables") from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to effectively identify syllables whose boundaries correspond to natural sub-second discontinuities inherent to mouse behavior. Keypoint-MoSeq outperforms commonly used alternative clustering methods at identifying these transitions, at capturing correlations between neural activity and behavior, and at classifying either solitary or social behaviors in accordance with human annotations. Keypoint-MoSeq therefore renders behavioral syllables and grammar accessible to the many researchers who use standard video to capture animal behavior.
View details for DOI 10.1101/2023.03.16.532307
View details for PubMedID 36993589
View details for PubMedCentralID PMC10055085
- Emergence of belief-like representations through reinforcement learning.
bioRxiv : the preprint server for biology
2023
Abstract
To behave adaptively, animals must learn to predict future reward, or value. To do this, animals are thought to learn reward predictions using reinforcement learning. However, in contrast to classical models, animals must learn to estimate value using only incomplete state information. Previous work suggests that animals estimate value in partially observable tasks by first forming "beliefs" (optimal Bayesian estimates of the hidden states in the task). Although this is one way to solve the problem of partial observability, it is not the only way, nor is it the most computationally scalable solution in complex, real-world environments. Here we show that a recurrent neural network (RNN) can learn to estimate value directly from observations, generating reward prediction errors that resemble those observed experimentally, without any explicit objective of estimating beliefs. We integrate statistical, functional, and dynamical systems perspectives on beliefs to show that the RNN's learned representation encodes belief information, but only when the RNN's capacity is sufficiently large. These results illustrate how animals can estimate value in tasks without explicitly estimating beliefs, yielding a representation useful for systems with limited capacity.
Natural environments are full of uncertainty. For example, just because my fridge had food in it yesterday does not mean it will have food today. Despite such uncertainty, animals can estimate which states and actions are the most valuable. Previous work suggests that animals estimate value using a brain area called the basal ganglia, using a process resembling a reinforcement learning algorithm called TD learning. However, traditional reinforcement learning algorithms cannot accurately estimate value in environments with state uncertainty (e.g., when my fridge's contents are unknown). One way around this problem is if agents form "beliefs," a probabilistic estimate of how likely each state is, given any observations so far. However, estimating beliefs is a demanding process that may not be possible for animals in more complex environments. Here we show that an artificial recurrent neural network (RNN) trained with TD learning can estimate value from observations, without explicitly estimating beliefs. The trained RNN's error signals resembled the neural activity of dopamine neurons measured during the same task. Importantly, the RNN's activity resembled beliefs, but only when the RNN had enough capacity. This work illustrates how animals could estimate value in uncertain environments without needing to first form beliefs, which may be useful in environments where computing the true beliefs is too costly.
View details for DOI 10.1101/2023.04.04.535512
View details for PubMedID 37066383
View details for PubMedCentralID PMC10104054
- Spontaneous behaviour is structured by reinforcement without explicit reward.
Nature
2023
Abstract
Spontaneous animal behaviour is built from action modules that are concatenated by the brain into sequences1,2. However, the neural mechanisms that guide the composition of naturalistic, self-motivated behaviour remain unknown. Here we show that dopamine systematically fluctuates in the dorsolateral striatum (DLS) as mice spontaneously express sub-second behavioural modules, despite the absence of task structure, sensory cues or exogenous reward. Photometric recordings and calibrated closed-loop optogenetic manipulations during open field behaviour demonstrate that DLS dopamine fluctuations increase sequence variation over seconds, reinforce the use of associated behavioural modules over minutes, and modulate the vigour with which modules are expressed, without directly influencing movement initiation or moment-to-moment kinematics. Although the reinforcing effects of optogenetic DLS dopamine manipulations vary across behavioural modules and individual mice, these differences are well predicted by observed variation in the relationships between endogenous dopamine and module use. Consistent with the possibility that DLS dopamine fluctuations act as a teaching signal, mice build sequences during exploration as if to maximize dopamine. Together, these findings suggest a model in which the same circuits and computations that govern action choices in structured tasks have a key role in sculpting the content of unconstrained, high-dimensional, spontaneous behaviour.
View details for DOI 10.1038/s41586-022-05611-2
View details for PubMedID 36653449
- An approximate line attractor in the hypothalamus encodes an aggressive state.
Cell
2023; 186 (1): 178
Abstract
The hypothalamus regulates innate social behaviors, including mating and aggression. These behaviors can be evoked by optogenetic stimulation of specific neuronal subpopulations within MPOA and VMHvl, respectively. Here, we perform dynamical systems modeling of population neuronal activity in these nuclei during social behaviors. In VMHvl, unsupervised analysis identified a dominant dimension of neural activity with a large time constant (>50 s), generating an approximate line attractor in neural state space. Progression of the neural trajectory along this attractor was correlated with an escalation of agonistic behavior, suggesting that it may encode a scalable state of aggressiveness. Consistent with this, individual differences in the magnitude of the integration dimension time constant were strongly correlated with differences in aggressiveness. In contrast, approximate line attractors were not observed in MPOA during mating; instead, neurons with fast dynamics were tuned to specific actions. Thus, different hypothalamic nuclei employ distinct neural population codes to represent similar social behaviors.
View details for DOI 10.1016/j.cell.2022.11.027
View details for PubMedID 36608653
- NAS-X: Neural Adaptive Smoothing via Twisting
edited by Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S.
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
View details for Web of Science ID 001229751902029
- Switching Autoregressive Low-rank Tensor Models
edited by Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S.
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
View details for Web of Science ID 001229826601019
- Convolutional State Space Models for Long-Range Spatiotemporal Modeling
edited by Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S.
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
View details for Web of Science ID 001220600006002
- Imaging whole-brain activity to understand behavior.
Nature reviews. Physics
2022; 4 (5): 292-305
Abstract
The brain evolved to produce behaviors that help an animal inhabit the natural world. During natural behaviors, the brain is engaged in many levels of activity from the detection of sensory inputs to decision-making to motor planning and execution. To date, most brain studies have focused on small numbers of neurons that interact in limited circuits. This allows analyzing individual computations or steps of neural processing. During behavior, however, brain activity must integrate multiple circuits in different brain regions. The activities of different brain regions are not isolated, but may be contingent on one another. Coordinated and concurrent activity within and across brain areas is organized by (1) sensory information from the environment, (2) the animal's internal behavioral state, and (3) recurrent networks of synaptic and non-synaptic connectivity. Whole-brain recording with cellular resolution provides a new opportunity to dissect the neural basis of behavior, but whole-brain activity is also mutually contingent on behavior itself. This is especially true for natural behaviors like navigation, mating, or hunting, which require dynamic interaction between the animal, its environment, and other animals. In such behaviors, the sensory experience of an unrestrained animal is actively shaped by its movements and decisions. Many of the signaling and feedback pathways that an animal uses to guide behavior only occur in freely moving animals. Recent technological advances have enabled whole-brain recording in small behaving animals including nematodes, flies, and zebrafish. These whole-brain experiments capture neural activity with cellular resolution spanning sensory, decision-making, and motor circuits, and thereby demand new theoretical approaches that integrate brain dynamics with behavioral dynamics. Here, we review the experimental and theoretical methods that are being employed to understand animal behavior and whole-brain activity, and the opportunities for physics to contribute to this emerging field of systems neuroscience.
View details for DOI 10.1038/s42254-022-00430-w
View details for PubMedID 37409001
View details for PubMedCentralID PMC10320740
- Mice exhibit stochastic and efficient action switching during probabilistic decision making.
Proceedings of the National Academy of Sciences of the United States of America
2022; 119 (15): e2113961119
Abstract
Significance: To obtain rewards in changing and uncertain environments, animals must adapt their behavior. We found that mouse choice and trial-to-trial switching behavior in a dynamic and probabilistic two-choice task could be modeled by equivalent theoretical, algorithmic, and descriptive models. These models capture components of evidence accumulation, choice history bias, and stochasticity in mouse behavior. Furthermore, they reveal that mice adapt their behavior in different environmental contexts by modulating their level of stickiness to their previous choice. Despite deviating from the behavior of a theoretically ideal observer, the empirical models achieve comparable levels of near-maximal reward. These results make predictions to guide interrogation of the neural mechanisms underlying flexible decision-making strategies.
View details for DOI 10.1073/pnas.2113961119
View details for PubMedID 35385355
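A minimal sketch of the "stickiness" ingredient named in this significance statement, assuming a logistic choice rule (the functional form and parameters are illustrative, not the paper's fitted models):

```python
import numpy as np

# Choice policy combining accumulated evidence with a stickiness bias
# toward repeating the previous choice (0 = left, 1 = right).
def choose(evidence, prev_choice, beta=2.0, stickiness=1.0, rng=None):
    rng = rng or np.random.default_rng()
    logit = beta * evidence + stickiness * (2 * prev_choice - 1)
    p_right = 1.0 / (1.0 + np.exp(-logit))      # probability of choosing right
    return int(rng.random() < p_right)
```

Larger stickiness yields longer runs of repeated choices; the result above is that mice appear to adapt this kind of parameter across environmental contexts.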
- Weighing the evidence in sharp-wave ripples.
Neuron
2022; 110 (4): 568-570
Abstract
In this issue of Neuron, Krause and Drugowitsch (2022) present a novel approach to classifying sharp-wave ripples and find that far more of them encode spatial trajectories than previously thought. Their method compares a host of state-space models using what Bayesian statisticians call the model evidence.
View details for DOI 10.1016/j.neuron.2022.01.036
View details for PubMedID 35176241
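The model evidence invoked here is the marginal likelihood: the probability of the data under a model with its parameters integrated out,

```latex
p(\mathcal{D} \mid m) = \int p(\mathcal{D} \mid \theta, m)\, p(\theta \mid m)\, d\theta
```

Because this integral averages the likelihood over the prior, it automatically penalizes excess model complexity, which is what makes it suitable for comparing state-space models of ripple content.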
- Generalized Shape Metrics on Neural Representations.
Advances in neural information processing systems
2021; 34: 4738-4750
Abstract
Understanding the operation of biological and artificial networks remains a difficult and important challenge. To identify general principles, researchers are increasingly interested in surveying large collections of networks that are trained on, or biologically adapted to, similar tasks. A standardized set of analysis tools is now needed to identify how network-level covariates (such as architecture, anatomical brain region, and model organism) impact neural representations (hidden layer activations). Here, we provide a rigorous foundation for these analyses by defining a broad family of metric spaces that quantify representational dissimilarity. Using this framework, we modify existing representational similarity measures based on canonical correlation analysis and centered kernel alignment to satisfy the triangle inequality, formulate a novel metric that respects the inductive biases in convolutional layers, and identify approximate Euclidean embeddings that enable network representations to be incorporated into essentially any off-the-shelf machine learning method. We demonstrate these methods on large-scale datasets from biology (Allen Institute Brain Observatory) and deep learning (NAS-Bench-101). In doing so, we identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
View details for PubMedID 38170102
View details for PubMedCentralID PMC10760997
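One concrete member of this family of metrics is a rotation-invariant Procrustes distance. The sketch below is a simplified special case for intuition, not the paper's full framework:

```python
import numpy as np

# Procrustes-style shape distance between two representations whose rows are
# conditions/stimuli and whose columns are neurons or hidden units.
def procrustes_distance(X, Y):
    # Center and scale each representation to unit Frobenius norm.
    X = (X - X.mean(0))
    Y = (Y - Y.mean(0))
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    # Optimal orthogonal alignment via SVD of the cross-covariance.
    U, s, Vt = np.linalg.svd(X.T @ Y)
    # Unlike raw CCA/CKA similarity scores, this satisfies the triangle
    # inequality on equivalence classes of rotated representations.
    return np.sqrt(max(0.0, 2.0 - 2.0 * s.sum()))

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 30))
Q, _ = np.linalg.qr(rng.normal(size=(30, 30)))
assert procrustes_distance(X, X @ Q) < 1e-6     # invariant to rotation
```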
- Statistical neuroscience in the single trial limit.
Current opinion in neurobiology
2021; 70: 193-205
Abstract
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure - including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions - and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
View details for DOI 10.1016/j.conb.2021.10.008
View details for PubMedID 34861596
- Fast deep neural correspondence for tracking and identifying neurons in C. elegans using semi-synthetic training.
eLife
2021; 10
Abstract
We present an automated method to track and identify neurons in C. elegans, called 'fast Deep Neural Correspondence' or fDNC, based on the transformer network architecture. The model is trained once on empirically derived semi-synthetic data and then predicts neural correspondence across held-out real animals. The same pre-trained model both tracks neurons across time and identifies corresponding neurons across individuals. Performance is evaluated against hand-annotated datasets, including NeuroPAL [1]. Using only position information, the method achieves 79.1% accuracy at tracking neurons within an individual and 64.1% accuracy at identifying neurons across individuals. Accuracy at identifying neurons across individuals is even higher (78.2%) when the model is applied to a dataset published by another group [2]. Accuracy reaches 74.7% on our dataset when using color information from NeuroPAL. Unlike previous methods, fDNC does not require straightening or transforming the animal into a canonical coordinate system. The method is fast and predicts correspondence in 10 ms, making it suitable for future real-time applications.
View details for DOI 10.7554/eLife.66410
View details for PubMedID 34259623
- Animal pose estimation from video data with a hierarchical von Mises-Fisher-Gaussian model
edited by Banerjee, A., Fukumizu, K.
MICROTOME PUBLISHING. 2021
View details for Web of Science ID 000659893803038
- Dynamic and reversible remapping of network representations in an unchanging environment.
Neuron
2021
Abstract
Neurons in the medial entorhinal cortex alter their firing properties in response to environmental changes. This flexibility in neural coding is hypothesized to support navigation and memory by dividing sensory experience into unique episodes. However, it is unknown how the entorhinal circuit as a whole transitions between different representations when sensory information is not delineated into discrete contexts. Here we describe rapid and reversible transitions between multiple spatial maps of an unchanging task and environment. These remapping events were synchronized across hundreds of neurons, differentially affected navigational cell types, and correlated with changes in running speed. Despite widespread changes in spatial coding, remapping comprised a translation along a single dimension in population-level activity space, enabling simple decoding strategies. These findings provoke reconsideration of how the medial entorhinal cortex dynamically represents space and suggest a remarkable capacity of cortical circuits to rapidly and substantially reorganize their neural representations.
View details for DOI 10.1016/j.neuron.2021.07.005
View details for PubMedID 34363753
- Point process models for sequence detection in high-dimensional neural spike trains.
Advances in neural information processing systems
2020; 33: 14350-14361
Abstract
Sparse sequences of neural spikes are posited to underlie aspects of working memory [1], motor production [2], and learning [3, 4]. Discovering these sequences in an unsupervised manner is a longstanding problem in statistical neuroscience [5-7]. Promising recent work [4, 8] utilized a convolutive nonnegative matrix factorization model [9] to tackle this challenge. However, this model requires spike times to be discretized, utilizes a sub-optimal least-squares criterion, and does not provide uncertainty estimates for model predictions or estimated parameters. We address each of these shortcomings by developing a point process model that characterizes fine-scale sequences at the level of individual spikes and represents sequence occurrences as a small number of marked events in continuous time. This ultra-sparse representation of sequence events opens new possibilities for spike train modeling. For example, we introduce learnable time warping parameters to model sequences of varying duration, which have been experimentally observed in neural circuits [10]. We demonstrate these advantages on experimental recordings from songbird higher vocal center and rodent hippocampus.
View details for PubMedID 35002191
View details for PubMedCentralID PMC8734964
- Probabilistic Models of Larval Zebrafish Behavior Reveal Structure on Many Scales.
Current biology : CB
2019
Abstract
Nervous systems have evolved to combine environmental information with internal state to select and generate adaptive behavioral sequences. To better understand these computations and their implementation in neural circuits, natural behavior must be carefully measured and quantified. Here, we collect high spatial resolution video of single zebrafish larvae swimming in a naturalistic environment and develop models of their action selection across exploration and hunting. Zebrafish larvae swim in punctuated bouts separated by longer periods of rest called interbout intervals. We take advantage of this structure by categorizing bouts into discrete types and representing their behavior as labeled sequences of bout types emitted over time. We then construct probabilistic models (specifically, marked renewal processes) to evaluate how bout types and interbout intervals are selected by the fish as a function of its internal hunger state, behavioral history, and the locations and properties of nearby prey. Finally, we evaluate the models by their predictive likelihood and their ability to generate realistic trajectories of virtual fish swimming through simulated environments. Our simulations capture multiple timescales of structure in larval zebrafish behavior and expose many ways in which hunger state influences their action selection to promote food seeking during hunger and safety during satiety.
View details for DOI 10.1016/j.cub.2019.11.026
View details for PubMedID 31866367
- BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos
edited by Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R.
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
View details for Web of Science ID 000535866907037
- Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models
edited by Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R.
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
View details for Web of Science ID 000535866901076
- Mutually Regressive Point Processes
edited by Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R.
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
View details for Web of Science ID 000534424305015
- Poisson-Randomized Gamma Dynamical Systems
edited by Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R.
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
View details for Web of Science ID 000534424300071
- Using computational theory to constrain statistical models of neural data
CURRENT OPINION IN NEUROBIOLOGY
2017; 46: 14-24
Abstract
Computational neuroscience is, to first order, dominated by two approaches: the 'bottom-up' approach, which searches for statistical patterns in large-scale neural recordings, and the 'top-down' approach, which begins with a theory of computation and considers plausible neural implementations. While this division is not clear-cut, we argue that these approaches should be much more intimately linked. From a Bayesian perspective, computational theories provide constrained prior distributions on neural data, albeit highly sophisticated ones. By connecting theory to observation via a probabilistic model, we provide the link necessary to test, evaluate, and revise our theories in a data-driven and statistically rigorous fashion. This review highlights examples of this theory-driven pipeline for neural data analysis in recent literature and illustrates it with a worked example based on the temporal difference learning model of dopamine.
View details for DOI 10.1016/j.conb.2017.06.004
View details for Web of Science ID 000416196400004
View details for PubMedID 28732273
View details for PubMedCentralID PMC5660645
ORCID: https://orcid.org/0000-0002-3878-9073