Bio


Grace Huckins is a lecturer with the Civic, Liberal, and Global Education program. They earned their PhD in neuroscience from Stanford, where they also completed a PhD minor in philosophy. Their research centers on explanation in neuroscience: they explore approaches for developing brain-based explanations of human experiences and behaviors, and they investigate whether those explanations are likely to be of value to the general public. Alongside their research and teaching, they write about neuroscience, health, and artificial intelligence for publications including WIRED, Slate, and MIT Technology Review.

Academic Appointments


  • Lecturer, Stanford Introductory Studies - Civic, Liberal, and Global Education

Honors & Awards


  • Student Award for Excellence in Teaching, Office of Graduate Education, Stanford University (2024)
  • Mass Media Fellowship, American Association for the Advancement of Science (2020)
  • Stanford Interdisciplinary Graduate Fellowship, Stanford University (2020)
  • Rhodes Scholarship, Rhodes Trust (2016)

Professional Education


  • PhD, Stanford University, Neurosciences (2024)
  • MSt, University of Oxford, Women's Studies (2018)
  • MSc, University of Oxford, Neuroscience (2017)
  • BA, Harvard University, Neurobiology and Physics (2016)

All Publications


  • Generative dynamical models for classification of rsfMRI data. Network neuroscience (Cambridge, Mass.) Huckins, G., Poldrack, R. A. 2024; 8 (4): 1613-1633

    Abstract

    The growing availability of large-scale neuroimaging datasets and user-friendly machine learning tools has led to a recent surge in studies that use fMRI data to predict psychological or behavioral variables. Many such studies classify fMRI data on the basis of static features, but fewer try to leverage brain dynamics for classification. Here, we pilot a generative, dynamical approach for classifying resting-state fMRI (rsfMRI) data. By fitting separate hidden Markov models to the classes in our training data and assigning class labels to test data based on their likelihood under those models, we are able to take advantage of dynamical patterns in the data without confronting the statistical limitations of some other dynamical approaches. Moreover, we demonstrate that hidden Markov models are able to successfully perform within-subject classification on the MyConnectome dataset solely on the basis of transition probabilities among their hidden states. On the other hand, individual Human Connectome Project subjects cannot be identified on the basis of hidden state transition probabilities alone, although a vector autoregressive model does achieve high performance. These results demonstrate a dynamical classification approach for rsfMRI data that shows promising performance, particularly for within-subject classification, and has the potential to afford greater interpretability than other approaches.

    View details for DOI 10.1162/netn_a_00412

    View details for PubMedID 39735493

    View details for PubMedCentralID PMC11675094
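
    The classification scheme described in the abstract above can be made concrete with a short sketch: fit one hidden Markov model per class on that class's training scans, then label a held-out scan with the class whose model assigns it the highest likelihood. The sketch below is an editorial illustration, not the authors' actual pipeline; it assumes the third-party hmmlearn package, Gaussian emissions, and illustrative names and hyperparameters throughout.

    # Sketch of class-conditional HMM classification for rsfMRI time series.
    # Assumptions (not from the paper): hmmlearn's GaussianHMM, diagonal
    # covariances, n_states=5; each scan is a (timepoints, regions) array.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def fit_class_hmms(train_scans_by_class, n_states=5, seed=0):
        """Fit one HMM per class on the concatenated scans of that class."""
        models = {}
        for label, scans in train_scans_by_class.items():
            X = np.concatenate(scans)            # stack scans along the time axis
            lengths = [len(s) for s in scans]    # per-scan lengths for hmmlearn
            m = GaussianHMM(n_components=n_states, covariance_type="diag",
                            random_state=seed)
            m.fit(X, lengths)
            models[label] = m
        return models

    def classify(models, scan):
        """Label a test scan by the class HMM with the highest log-likelihood."""
        return max(models, key=lambda label: models[label].score(scan))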

  • Establishment of Best Practices for Evidence for Prediction: A Review. JAMA psychiatry Poldrack, R. A., Huckins, G., Varoquaux, G. 2019

    Abstract

    Importance: Great interest exists in identifying methods to predict neuropsychiatric disease states and treatment outcomes from high-dimensional data, including neuroimaging and genomics data. The goal of this review is to highlight several potential problems that can arise in studies that aim to establish prediction.

    Observations: A number of neuroimaging studies have claimed to establish prediction while establishing only correlation, which is an inappropriate use of the statistical meaning of prediction. Statistical associations do not necessarily imply the ability to make predictions in a generalized manner; establishing evidence for prediction thus requires testing of the model on data separate from those used to estimate the model's parameters. This article discusses various measures of predictive performance and the limitations of some commonly used measures, with a focus on the importance of using multiple measures when assessing performance. For classification, the area under the receiver operating characteristic curve is an appropriate measure; for regression analysis, correlation should be avoided, and median absolute error is preferred.

    Conclusions and Relevance: To ensure accurate estimates of predictive validity, the recommended best practices for predictive modeling include the following: (1) in-sample model fit indices should not be reported as evidence for predictive accuracy, (2) the cross-validation procedure should encompass all operations applied to the data, (3) prediction analyses should not be performed with samples smaller than several hundred observations, (4) multiple measures of prediction accuracy should be examined and reported, (5) the coefficient of determination should be computed using the sums of squares formulation and not the correlation coefficient, and (6) k-fold cross-validation rather than leave-one-out cross-validation should be used.

    View details for DOI 10.1001/jamapsychiatry.2019.3671

    View details for PubMedID 31774490
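
    Best practice (2) above, that cross-validation must encompass all operations applied to the data, is the recommendation most easily violated in practice. Below is a minimal sketch of what compliance looks like, assuming scikit-learn and a synthetic dataset; the data, model choice, and fold count are illustrative, not drawn from the paper.

    # Sketch: leakage-free k-fold cross-validation with scikit-learn.
    # The Pipeline refits the scaler on each training fold only, so the
    # held-out fold never influences any fitted operation (best practice 2);
    # k-fold is used rather than leave-one-out (best practice 6), and AUC
    # scores the classifier, an appropriate measure for classification.
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))        # several hundred observations (practice 3)
    y = rng.integers(0, 2, size=500)      # binary labels

    pipe = Pipeline([
        ("scale", StandardScaler()),      # fit inside each CV fold, never on test data
        ("clf", LogisticRegression()),
    ])

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
    print(f"Mean AUC across folds: {scores.mean():.2f}")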