Scott W Linderman
Assistant Professor of Statistics and, by courtesy, of Computer Science and of Electrical Engineering
Bio
Scott is an Assistant Professor of Statistics and, by courtesy, of Computer Science and of Electrical Engineering at Stanford University. He is also an Institute Scholar in the Wu Tsai Neurosciences Institute and a member of Stanford Bio-X and the Stanford AI Lab. His lab works at the intersection of machine learning and computational neuroscience, developing statistical methods to analyze large-scale neural data. Previously, Scott was a postdoctoral fellow with Liam Paninski and David Blei at Columbia University, and he completed his PhD in Computer Science at Harvard University with Ryan Adams and Leslie Valiant. He obtained his undergraduate degree in Electrical and Computer Engineering from Cornell University and spent three years as a software engineer at Microsoft before graduate school.
Academic Appointments
- Assistant Professor, Statistics
- Assistant Professor (By courtesy), Computer Science
- Member, Bio-X
- Member, Wu Tsai Neurosciences Institute
Honors & Awards
- Sloan Research Fellowship, Sloan Foundation (2022)
- Next Generation Leader, Allen Institute for Brain Science (2019-2022)
- Best Paper, International Conference on Artificial Intelligence and Statistics (2017)
- Postdoctoral Fellow, Simons Collaboration on the Global Brain (2016-2019)
- Leonard J. Savage Award for Outstanding Dissertation in Applied Methodology, International Society for Bayesian Analysis (2016)
- National Defense Science and Engineering Graduate Fellow, Department of Defense (2011-2014)
Boards, Advisory Committees, Professional Organizations
- Co-Chair of Projects Team, Neuromatch Academy (2022 - Present)
- Scientific Advisory Board Member, Herophilus Inc. (2019 - Present)
Professional Education
- PhD, Harvard University, Computer Science (2016)
- SM, Harvard University, Computer Science (2013)
- BS, Cornell University, Electrical and Computer Engineering (2008)
2024-25 Courses
- Applied Statistics II: STATS 305B (Win)
- Machine Learning Methods for Neural Data Analysis: NBIO 220, STATS 220, STATS 320 (Spr)
Independent Studies (10)
- Advanced Reading and Research: CS 499 (Aut, Win, Spr)
- Advanced Reading and Research: CS 499P (Aut, Win, Spr)
- Directed Reading in Neurosciences: NEPR 299 (Aut, Win, Spr, Sum)
- Directed Studies in Applied Physics: APPPHYS 290 (Aut, Win, Spr)
- Directed Study: BIOE 391 (Aut, Win, Spr)
- Independent Study: STATS 299 (Aut, Win, Spr)
- Out-of-Department Advanced Research Laboratory in Bioengineering: BIOE 191X (Aut, Win, Spr)
- Research: STATS 399 (Aut, Win, Spr)
- Senior Honors Thesis: MATH 197 (Win, Spr)
- Writing Intensive Senior Research Project: CS 191W (Aut)
Prior Year Courses
2023-24 Courses
- Applied Statistics II: STATS 305B (Win)
2022-23 Courses
- Applied Statistics III: STATS 305C (Spr)
- Machine Learning Methods for Neural Data Analysis: CS 339N, NBIO 220, STATS 220, STATS 320 (Win)
2021-22 Courses
- Applied Statistics III: STATS 305C (Spr)
- Applied Statistics II
Stanford Advisees
- Doctoral Dissertation Reader (AC): Ari Beller, Lucas Encarnacion-Rivera, Youssef Faragalla, Elizabeth Jun, Rennie Kendrick, John Kochalka, Lavonna Mark, Josh Melander, Christopher Minasi, Linnie Warton
- Postdoctoral Faculty Sponsor: Dan Biderman, Kelly Buchanan, Aditi Jha, David Zoltowski
- Doctoral Dissertation Advisor (AC): Julia Costacurta, Xavier Gonzalez, Amber Hu, Matthew MacKay, Jimmy Smith, Jakub Smékal, Ian Christopher Tanoh, Libby Zhang, Yixiu Zhao
- Master's Program Advisor: Derek Askaryar, Nicole Segaran
- Doctoral (Program): Hyun Dong Lee, Alisa Levin
All Publications
- Imaging whole-brain activity to understand behaviour. Nature Reviews Physics, 2022. DOI: 10.1038/s42254-022-00430-w. Web of Science ID: 000766058600001.
- Weighing the evidence in sharp-wave ripples. Neuron, 2022; 110 (4): 568-570.
  Abstract: In this issue of Neuron, Krause and Drugowitsch (2022) present a novel approach to classifying sharp-wave ripples and find that far more encode spatial trajectories than previously thought. Their method compares a host of state-space models using what Bayesian statisticians call the model evidence.
  DOI: 10.1016/j.neuron.2022.01.036. PubMed ID: 35176241.
- Statistical neuroscience in the single trial limit. Current Opinion in Neurobiology, 2021; 70: 193-205.
  Abstract: Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure - including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions - and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.
  DOI: 10.1016/j.conb.2021.10.008. PubMed ID: 34861596.
- Fast deep neural correspondence for tracking and identifying neurons in C. elegans using semi-synthetic training. eLife, 2021; 10.
  Abstract: We present an automated method to track and identify neurons in C. elegans, called 'fast Deep Neural Correspondence' or fDNC, based on the transformer network architecture. The model is trained once on empirically derived semi-synthetic data and then predicts neural correspondence across held-out real animals. The same pre-trained model both tracks neurons across time and identifies corresponding neurons across individuals. Performance is evaluated against hand-annotated datasets, including NeuroPAL [1]. Using only position information, the method achieves 79.1% accuracy at tracking neurons within an individual and 64.1% accuracy at identifying neurons across individuals. Accuracy at identifying neurons across individuals is even higher (78.2%) when the model is applied to a dataset published by another group [2]. Accuracy reaches 74.7% on our dataset when using color information from NeuroPAL. Unlike previous methods, fDNC does not require straightening or transforming the animal into a canonical coordinate system. The method is fast and predicts correspondence in 10 ms, making it suitable for future real-time applications.
  DOI: 10.7554/eLife.66410. PubMed ID: 34259623.
- Animal pose estimation from video data with a hierarchical von Mises-Fisher-Gaussian model. Microtome Publishing, 2021. Web of Science ID: 000659893803038.
- Dynamic and reversible remapping of network representations in an unchanging environment. Neuron, 2021.
  Abstract: Neurons in the medial entorhinal cortex alter their firing properties in response to environmental changes. This flexibility in neural coding is hypothesized to support navigation and memory by dividing sensory experience into unique episodes. However, it is unknown how the entorhinal circuit as a whole transitions between different representations when sensory information is not delineated into discrete contexts. Here we describe rapid and reversible transitions between multiple spatial maps of an unchanging task and environment. These remapping events were synchronized across hundreds of neurons, differentially affected navigational cell types, and correlated with changes in running speed. Despite widespread changes in spatial coding, remapping comprised a translation along a single dimension in population-level activity space, enabling simple decoding strategies. These findings provoke reconsideration of how the medial entorhinal cortex dynamically represents space and suggest a remarkable capacity of cortical circuits to rapidly and substantially reorganize their neural representations.
  DOI: 10.1016/j.neuron.2021.07.005. PubMed ID: 34363753.
- Point process models for sequence detection in high-dimensional neural spike trains. Advances in Neural Information Processing Systems, 2020; 33: 14350-14361.
  Abstract: Sparse sequences of neural spikes are posited to underlie aspects of working memory [1], motor production [2], and learning [3, 4]. Discovering these sequences in an unsupervised manner is a longstanding problem in statistical neuroscience [5-7]. Promising recent work [4, 8] utilized a convolutive nonnegative matrix factorization model [9] to tackle this challenge. However, this model requires spike times to be discretized, utilizes a sub-optimal least-squares criterion, and does not provide uncertainty estimates for model predictions or estimated parameters. We address each of these shortcomings by developing a point process model that characterizes fine-scale sequences at the level of individual spikes and represents sequence occurrences as a small number of marked events in continuous time. This ultra-sparse representation of sequence events opens new possibilities for spike train modeling. For example, we introduce learnable time warping parameters to model sequences of varying duration, which have been experimentally observed in neural circuits [10]. We demonstrate these advantages on experimental recordings from songbird higher vocal center and rodent hippocampus.
  PubMed ID: 35002191. PubMed Central ID: PMC8734964.
- Probabilistic models of larval zebrafish behavior reveal structure on many scales. Current Biology, 2019.
  Abstract: Nervous systems have evolved to combine environmental information with internal state to select and generate adaptive behavioral sequences. To better understand these computations and their implementation in neural circuits, natural behavior must be carefully measured and quantified. Here, we collect high spatial resolution video of single zebrafish larvae swimming in a naturalistic environment and develop models of their action selection across exploration and hunting. Zebrafish larvae swim in punctuated bouts separated by longer periods of rest called interbout intervals. We take advantage of this structure by categorizing bouts into discrete types and representing their behavior as labeled sequences of bout types emitted over time. We then construct probabilistic models (specifically, marked renewal processes) to evaluate how bout types and interbout intervals are selected by the fish as a function of its internal hunger state, behavioral history, and the locations and properties of nearby prey. Finally, we evaluate the models by their predictive likelihood and their ability to generate realistic trajectories of virtual fish swimming through simulated environments. Our simulations capture multiple timescales of structure in larval zebrafish behavior and expose many ways in which hunger state influences their action selection to promote food seeking during hunger and safety during satiety.
  DOI: 10.1016/j.cub.2019.11.026. PubMed ID: 31866367.
- BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos. Neural Information Processing Systems (NeurIPS), 2019. Web of Science ID: 000535866907037.
- Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models. Neural Information Processing Systems (NeurIPS), 2019. Web of Science ID: 000535866901076.
- Mutually Regressive Point Processes. Neural Information Processing Systems (NeurIPS), 2019. Web of Science ID: 000534424305015.
- Poisson-Randomized Gamma Dynamical Systems. Neural Information Processing Systems (NeurIPS), 2019. Web of Science ID: 000534424300071.