Bio


Scott is an Assistant Professor of Statistics and, by courtesy, of Electrical Engineering and Computer Science at Stanford University. He is also an Institute Scholar in the Wu Tsai Neurosciences Institute and a member of Stanford Bio-X and the Stanford AI Lab. His lab works at the intersection of machine learning and computational neuroscience, developing statistical methods to analyze large-scale neural data. Previously, Scott was a postdoctoral fellow with Liam Paninski and David Blei at Columbia University, and he completed his PhD in Computer Science at Harvard University with Ryan Adams and Leslie Valiant. He obtained his undergraduate degree in Electrical and Computer Engineering from Cornell University and spent three years as a software engineer at Microsoft before graduate school.

Honors & Awards


  • Sloan Research Fellowship, Sloan Foundation (2022)
  • Next Generation Leader, Allen Institute for Brain Science (2019-2022)
  • Best Paper, International Conference on Artificial Intelligence and Statistics (2017)
  • Postdoctoral Fellow, Simons Collaboration on the Global Brain (2016-2019)
  • Leonard J. Savage Award for Outstanding Dissertation in Applied Methodology, International Society for Bayesian Analysis (2016)
  • National Defense Science and Engineering Graduate Fellow, Department of Defense (2011-2014)

Boards, Advisory Committees, Professional Organizations


  • Co-Chair of Projects Team, Neuromatch Academy (2022 - Present)
  • Scientific Advisory Board Member, Herophilus, Inc. (2019 - Present)

Professional Education


  • PhD, Harvard University, Computer Science (2016)
  • SM, Harvard University, Computer Science (2013)
  • BS, Cornell University, Electrical and Computer Engineering (2008)

All Publications


  • Imaging whole-brain activity to understand behaviour. Nature Reviews Physics Lin, A., Witvliet, D., Hernandez-Nunez, L., Linderman, S. W., Samuel, A. T., Venkatachalam, V. 2022
  • Weighing the evidence in sharp-wave ripples. Neuron Linderman, S. W. 2022; 110 (4): 568-570

    Abstract

    In this issue of Neuron, Krause and Drugowitsch (2022) present a novel approach to classifying sharp-wave ripples and find that far more encode spatial trajectories than previously thought. Their method compares a host of state-space models using what Bayesian statisticians call the model evidence.

    View details for DOI 10.1016/j.neuron.2022.01.036

    View details for PubMedID 35176241
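
    As a point of reference (a standard definition, not quoted from the paper): the model evidence is the marginal likelihood of the data under a candidate model, with its parameters integrated out. In LaTeX notation, for data $y$ and model $\mathcal{M}_k$ with parameters $\theta$,

        p(y \mid \mathcal{M}_k) = \int p(y \mid \theta, \mathcal{M}_k) \, p(\theta \mid \mathcal{M}_k) \, d\theta .

    Comparing candidate state-space models by this quantity automatically penalizes superfluous flexibility, which is what makes it a principled criterion for classifying ripples.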

  • Statistical neuroscience in the single trial limit. Current Opinion in Neurobiology Williams, A. H., Linderman, S. W. 2021; 70: 193-205

    Abstract

    Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic 'noise' and systematic changes in the animal's cognitive and behavioral state. Disentangling these sources of variability is of great scientific interest in its own right, but it is also increasingly inescapable as neuroscientists aspire to study more complex and naturalistic animal behaviors. In these settings, behavioral actions never repeat themselves exactly and may rarely do so even approximately. Thus, new statistical methods that extract reliable features of neural activity using few, if any, repeated trials are needed. Accurate statistical modeling in this severely trial-limited regime is challenging, but still possible if simplifying structure in neural data can be exploited. We review recent works that have identified different forms of simplifying structure (including shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions) and exploited them to reveal novel insights into the trial-by-trial operation of neural circuits.

    View details for DOI 10.1016/j.conb.2021.10.008

    View details for PubMedID 34861596

  • Fast deep neural correspondence for tracking and identifying neurons in C. elegans using semi-synthetic training. eLife Yu, X., Creamer, M. S., Randi, F., Sharma, A. K., Linderman, S. W., Leifer, A. M. 2021; 10

    Abstract

    We present an automated method to track and identify neurons in C. elegans, called 'fast Deep Neural Correspondence' or fDNC, based on the transformer network architecture. The model is trained once on empirically derived semi-synthetic data and then predicts neural correspondence across held-out real animals. The same pre-trained model both tracks neurons across time and identifies corresponding neurons across individuals. Performance is evaluated against hand-annotated datasets, including NeuroPAL [1]. Using only position information, the method achieves 79.1% accuracy at tracking neurons within an individual and 64.1% accuracy at identifying neurons across individuals. Accuracy at identifying neurons across individuals is even higher (78.2%) when the model is applied to a dataset published by another group [2]. Accuracy reaches 74.7% on our dataset when using color information from NeuroPAL. Unlike previous methods, fDNC does not require straightening or transforming the animal into a canonical coordinate system. The method is fast, predicting correspondence in 10 ms, which makes it suitable for future real-time applications.

    View details for DOI 10.7554/eLife.66410

    View details for PubMedID 34259623

  • Dynamic and reversible remapping of network representations in an unchanging environment. Neuron Low, I. I., Williams, A. H., Campbell, M. G., Linderman, S. W., Giocomo, L. M. 2021

    Abstract

    Neurons in the medial entorhinal cortex alter their firing properties in response to environmental changes. This flexibility in neural coding is hypothesized to support navigation and memory by dividing sensory experience into unique episodes. However, it is unknown how the entorhinal circuit as a whole transitions between different representations when sensory information is not delineated into discrete contexts. Here we describe rapid and reversible transitions between multiple spatial maps of an unchanging task and environment. These remapping events were synchronized across hundreds of neurons, differentially affected navigational cell types, and correlated with changes in running speed. Despite widespread changes in spatial coding, remapping comprised a translation along a single dimension in population-level activity space, enabling simple decoding strategies. These findings provoke reconsideration of how the medial entorhinal cortex dynamically represents space and suggest a remarkable capacity of cortical circuits to rapidly and substantially reorganize their neural representations.

    View details for DOI 10.1016/j.neuron.2021.07.005

    View details for PubMedID 34363753

  • Point process models for sequence detection in high-dimensional neural spike trains. Advances in Neural Information Processing Systems Williams, A. H., Degleris, A., Wang, Y., Linderman, S. W. 2020; 33: 14350-14361

    Abstract

    Sparse sequences of neural spikes are posited to underlie aspects of working memory [1], motor production [2], and learning [3, 4]. Discovering these sequences in an unsupervised manner is a longstanding problem in statistical neuroscience [5-7]. Promising recent work [4, 8] utilized a convolutive nonnegative matrix factorization model [9] to tackle this challenge. However, this model requires spike times to be discretized, utilizes a sub-optimal least-squares criterion, and does not provide uncertainty estimates for model predictions or estimated parameters. We address each of these shortcomings by developing a point process model that characterizes fine-scale sequences at the level of individual spikes and represents sequence occurrences as a small number of marked events in continuous time. This ultra-sparse representation of sequence events opens new possibilities for spike train modeling. For example, we introduce learnable time warping parameters to model sequences of varying duration, which have been experimentally observed in neural circuits [10]. We demonstrate these advantages on experimental recordings from songbird higher vocal center and rodent hippocampus.

    View details for PubMedID 35002191

    View details for PubMedCentralID PMC8734964
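
    The following is a minimal sketch of the generative idea described above, not the authors' implementation; the parameter values and variable names are invented for illustration. Each sequence occurrence is a marked event in continuous time, and its warp factor stretches or compresses the stereotyped spike cascade it triggers.

        # Illustrative sketch only: sparse marked events in continuous time,
        # each triggering a time-warped, stereotyped spike sequence.
        # Parameter values are hypothetical, not taken from the paper.
        import numpy as np

        rng = np.random.default_rng(0)
        T, n_neurons, n_events = 100.0, 20, 5   # seconds, neurons, sequence events

        # Template: each neuron fires at a fixed offset within the sequence.
        offsets = np.sort(rng.uniform(0.0, 1.0, size=n_neurons))

        # Marked events: an onset time plus a per-event time-warp factor.
        event_times = np.sort(rng.uniform(0.0, T - 2.0, size=n_events))
        warps = rng.uniform(0.5, 2.0, size=n_events)

        jitter = 0.01   # per-spike timing noise (seconds)
        spikes = [(t0 + w * d + jitter * rng.standard_normal(), n)
                  for t0, w in zip(event_times, warps)
                  for n, d in enumerate(offsets)]
        spikes.sort()
        print(f"{len(spikes)} spikes generated by {n_events} time-warped sequences")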

  • Probabilistic Models of Larval Zebrafish Behavior Reveal Structure on Many Scales. Current Biology Johnson, R. E., Linderman, S., Panier, T., Wee, C. L., Song, E., Herrera, K. J., Miller, A., Engert, F. 2019

    Abstract

    Nervous systems have evolved to combine environmental information with internal state to select and generate adaptive behavioral sequences. To better understand these computations and their implementation in neural circuits, natural behavior must be carefully measured and quantified. Here, we collect high spatial resolution video of single zebrafish larvae swimming in a naturalistic environment and develop models of their action selection across exploration and hunting. Zebrafish larvae swim in punctuated bouts separated by longer periods of rest called interbout intervals. We take advantage of this structure by categorizing bouts into discrete types and representing their behavior as labeled sequences of bout types emitted over time. We then construct probabilistic models, specifically marked renewal processes, to evaluate how bout types and interbout intervals are selected by the fish as a function of its internal hunger state, behavioral history, and the locations and properties of nearby prey. Finally, we evaluate the models by their predictive likelihood and their ability to generate realistic trajectories of virtual fish swimming through simulated environments. Our simulations capture multiple timescales of structure in larval zebrafish behavior and expose many ways in which hunger state influences their action selection to promote food seeking during hunger and safety during satiety.

    View details for DOI 10.1016/j.cub.2019.11.026

    View details for PubMedID 31866367
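
    Schematically (our notation, not the paper's): a marked renewal process generates bout onset times $t_1 < t_2 < \dots$ by drawing each interbout interval and each mark (bout type) conditionally,

        \Delta_i = t_{i+1} - t_i \sim p(\Delta \mid z_i, x_i), \qquad z_{i+1} \sim p(z \mid z_i, x_i),

    where $z_i$ denotes the bout type and $x_i$ collects covariates such as hunger state and nearby prey. The renewal structure means the interval clock restarts at each bout, and the marks carry the bout identity.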

  • Scalable Bayesian inference of dendritic voltage via spatiotemporal recurrent state space models. Advances in Neural Information Processing Systems (NeurIPS) Sun, R., Linderman, S. W., Kinsella, I., Paninski, L. 2019
  • Mutually Regressive Point Processes. Advances in Neural Information Processing Systems (NeurIPS) Apostolopoulou, I., Linderman, S., Miller, K., Dubrawski, A. 2019
  • Poisson-Randomized Gamma Dynamical Systems. Advances in Neural Information Processing Systems (NeurIPS) Schein, A., Linderman, S. W., Zhou, M., Blei, D. M., Wallach, H. 2019