Bio


Grace Huckins is a lecturer with the Civic, Liberal, and Global Education program. They earned their PhD in neuroscience from Stanford, where they also completed a PhD minor in philosophy. Their research centers on explanation in neuroscience: they explore approaches for developing brain-based explanations of human experiences and behaviors, and they investigate whether those explanations are likely to be of value to the general public. Alongside their research and teaching, they write about neuroscience, health, and artificial intelligence for publications including WIRED, Slate, and MIT Technology Review.

Academic Appointments


  • Lecturer, Stanford Introductory Studies - Civic, Liberal, and Global Education

Honors & Awards


  • Student Award for Excellence in Teaching, Office of Graduate Education, Stanford University (2024)
  • Mass Media Fellowship, American Association for the Advancement of Science (2020)
  • Stanford Interdisciplinary Graduate Fellowship, Stanford University (2020)
  • Rhodes Scholarship, Rhodes Trust (2016)

Professional Education


  • PhD, Stanford University, Neurosciences (2024)
  • MSt, University of Oxford, Women's Studies (2018)
  • MSc, University of Oxford, Neuroscience (2017)
  • BA, Harvard University, Neurobiology and Physics (2016)

All Publications


  • Poldrack, R. A., Huckins, G., & Varoquaux, G. (2019). Establishment of Best Practices for Evidence for Prediction: A Review. JAMA Psychiatry.

    Abstract

    Importance: Great interest exists in identifying methods to predict neuropsychiatric disease states and treatment outcomes from high-dimensional data, including neuroimaging and genomics data. The goal of this review is to highlight several potential problems that can arise in studies that aim to establish prediction.

    Observations: A number of neuroimaging studies have claimed to establish prediction while establishing only correlation, which is an inappropriate use of the statistical meaning of prediction. Statistical associations do not necessarily imply the ability to make predictions in a generalized manner; establishing evidence for prediction thus requires testing of the model on data separate from those used to estimate the model's parameters. This article discusses various measures of predictive performance and the limitations of some commonly used measures, with a focus on the importance of using multiple measures when assessing performance. For classification, the area under the receiver operating characteristic curve is an appropriate measure; for regression analysis, correlation should be avoided, and median absolute error is preferred.

    Conclusions and Relevance: To ensure accurate estimates of predictive validity, the recommended best practices for predictive modeling include the following: (1) in-sample model fit indices should not be reported as evidence for predictive accuracy, (2) the cross-validation procedure should encompass all operations applied to the data, (3) prediction analyses should not be performed with samples smaller than several hundred observations, (4) multiple measures of prediction accuracy should be examined and reported, (5) the coefficient of determination should be computed using the sums-of-squares formulation and not the correlation coefficient, and (6) k-fold cross-validation rather than leave-one-out cross-validation should be used.

    DOI: 10.1001/jamapsychiatry.2019.3671

    PubMed ID: 31774490
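
    A minimal sketch of how the abstract's recommendations might look in practice, assuming scikit-learn; the synthetic dataset, model choices, and five-fold split are illustrative assumptions, not details from the paper.

        # Hypothetical illustration with scikit-learn (not code from the paper);
        # the synthetic data, models, and fold count are assumptions for the sketch.
        from sklearn.datasets import make_classification, make_regression
        from sklearn.linear_model import LogisticRegression, Ridge
        from sklearn.metrics import median_absolute_error, r2_score
        from sklearn.model_selection import KFold, cross_val_predict, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Recommendation (3): avoid samples smaller than several hundred observations.
        X_clf, y_clf = make_classification(n_samples=500, n_features=20, random_state=0)
        X_reg, y_reg = make_regression(n_samples=500, n_features=20, noise=10.0,
                                       random_state=0)

        # Recommendations (2) and (6): put every operation applied to the data
        # (here, scaling) inside the pipeline so cross-validation encompasses it,
        # and use k-fold rather than leave-one-out cross-validation.
        cv = KFold(n_splits=5, shuffle=True, random_state=0)
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        reg = make_pipeline(StandardScaler(), Ridge())

        # Classification: area under the ROC curve is an appropriate measure.
        auc = cross_val_score(clf, X_clf, y_clf, cv=cv, scoring="roc_auc").mean()

        # Regression, recommendations (4) and (5): report multiple measures,
        # prefer median absolute error over correlation, and compute the
        # coefficient of determination from sums of squares (as r2_score does)
        # rather than by squaring a correlation coefficient.
        y_pred = cross_val_predict(reg, X_reg, y_reg, cv=cv)
        medae = median_absolute_error(y_reg, y_pred)
        r2 = r2_score(y_reg, y_pred)

        print(f"AUC: {auc:.3f}, MedAE: {medae:.3f}, R^2: {r2:.3f}")

    Pooling out-of-fold predictions via cross_val_predict keeps every reported score out-of-sample, consistent with recommendation (1) that in-sample fit indices are not evidence of predictive accuracy.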