
Armin Thomas
Postdoctoral Scholar, Psychology
Bio
I am a Ram and Vijay Shriram Data Science Fellow at Stanford Data Science, where I work with Russ Poldrack. My research lies at the intersection of machine learning, neuroscience, and psychology. I am interested in using machine learning techniques to better understand neuroimaging data and human cognitive processes. In my past work, I have explored the cognitive processes underlying simple economic choices and developed computational frameworks that use deep learning methods to analyze whole-brain functional Magnetic Resonance Imaging data.
Prior to coming to Stanford, I obtained a PhD in machine learning from Technische Universität Berlin, as well as an MSc in cognitive neuroscience and a BSc in psychology from Freie Universität Berlin. I was also active as a mentor for the Max Planck School of Cognition, and as a researcher at the California Institute of Technology and the Max Planck Institute for Human Development.
Professional Education
- BSc, Freie Universität Berlin, Psychology (2012)
- MSc, Freie Universität Berlin, Cognitive Neuroscience (2015)
- PhD, Technische Universität Berlin, Machine Learning (2020)
All Publications
- Benchmarking explanation methods for mental state decoding with deep learning models.
NeuroImage, 2023: 120109
Abstract
Deep learning (DL) models find increasing application in mental state decoding, where researchers seek to understand the mapping between mental states (e.g., experiencing anger or joy) and brain activity by identifying the spatial and temporal features of brain activity that allow these states to be accurately identified (i.e., decoded). Once a DL model has been trained to accurately decode a set of mental states, neuroimaging researchers often use methods from explainable artificial intelligence research to understand the model's learned mappings between mental states and brain activity. Here, we benchmark prominent explanation methods in a mental state decoding analysis of multiple functional Magnetic Resonance Imaging (fMRI) datasets. Our findings reveal a gradient between two key characteristics of an explanation in mental state decoding, namely its faithfulness and its alignment with other empirical evidence on the mapping between brain activity and the decoded mental state: explanation methods with high faithfulness, which capture the model's decision process well, generally provide explanations that align less well with other empirical evidence than methods with lower faithfulness. Based on these findings, we provide guidance for neuroimaging researchers on how to choose an explanation method to gain insight into the mental state decoding decisions of DL models.
DOI: 10.1016/j.neuroimage.2023.120109 | PubMedID: 37059157
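
To illustrate the kind of analysis the paper benchmarks, the following is a minimal, hypothetical sketch that applies two prominent attribution methods to a toy decoding model and compares their explanations. The model, data shapes, and method choices are illustrative assumptions, not the paper's actual pipeline.

# Hypothetical sketch: comparing two attribution methods for a decoding model.
# The model, data shapes, and method choices are illustrative assumptions,
# not the benchmarking pipeline used in the paper.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, Saliency

# Toy stand-in for a whole-brain decoding model: flattened voxel input -> states.
model = nn.Sequential(nn.Linear(4096, 128), nn.ReLU(), nn.Linear(128, 4))
model.eval()

x = torch.randn(1, 4096, requires_grad=True)  # one simulated fMRI sample
target = 2                                    # index of the decoded mental state

# Two prominent explanation methods with different faithfulness profiles.
ig_attr = IntegratedGradients(model).attribute(x, target=target, n_steps=32)
sal_attr = Saliency(model).attribute(x, target=target)

# A crude similarity check between the two explanations' spatial patterns.
cos = torch.nn.functional.cosine_similarity(
    ig_attr.detach().flatten(), sal_attr.detach().flatten(), dim=0)
print(f"cosine similarity between attributions: {cos.item():.3f}")

In a real benchmark, such attributions would additionally be scored for faithfulness (e.g., by perturbing the most-attributed features) and compared against independent empirical evidence, which is the trade-off the abstract describes.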
- Interpreting mental state decoding with deep learning models.
Trends in Cognitive Sciences, 2022; 26 (11): 972-986
Abstract
In mental state decoding, researchers aim to identify the set of mental states (e.g., experiencing happiness or fear) that can be reliably identified from the activity patterns of a brain region (or network). Deep learning (DL) models are highly promising for mental state decoding because of their unmatched ability to learn versatile representations of complex data. However, their widespread application in mental state decoding is hindered by their lack of interpretability, by difficulties in applying them to small datasets, and by challenges in ensuring their reproducibility and robustness. We recommend approaching these challenges by leveraging recent advances in explainable artificial intelligence (XAI) and transfer learning, and we also provide recommendations on how to improve the reproducibility and robustness of DL models in mental state decoding.
DOI: 10.1016/j.tics.2022.07.003 | PubMedID: 36223760
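
As a rough illustration of the transfer-learning recommendation, here is a minimal sketch that freezes a placeholder pretrained encoder and trains only a small classification head on a simulated small dataset. PretrainedEncoder and all parameters are hypothetical stand-ins, not an API or model from the paper.

# Hypothetical sketch of transfer learning for a small decoding dataset:
# freeze a pretrained encoder and train only a new classification head.
# PretrainedEncoder and the data below are placeholders, not a real library API.
import torch
import torch.nn as nn

class PretrainedEncoder(nn.Module):
    """Stand-in for an encoder pretrained on large-scale fMRI data."""
    def __init__(self, in_dim=4096, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())

    def forward(self, x):
        return self.net(x)

encoder = PretrainedEncoder()      # imagine loading pretrained weights here
for p in encoder.parameters():     # freeze: the small dataset only has to fit
    p.requires_grad = False        # the new head, which reduces overfitting

head = nn.Linear(256, 4)           # new head for the target mental states
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 4096)          # one simulated mini-batch
y = torch.randint(0, 4, (32,))
logits = head(encoder(x))
loss = loss_fn(logits, y)
loss.backward()                    # gradients flow only into the head
optimizer.step()

Freezing the encoder is one common transfer-learning strategy for small datasets; partial fine-tuning of later encoder layers is another, with the trade-off decided by dataset size.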
- Gaze-dependent evidence accumulation predicts multi-alternative risky choice behaviour.
PLoS Computational Biology, 2022; 18 (7): e1010283
Abstract
Choices are influenced by gaze allocation during deliberation, such that fixating an alternative for longer increases the probability of choosing it. Gaze-dependent evidence accumulation provides a parsimonious account of choices, response times, and gaze behaviour in many simple decision scenarios. Here, we test whether this framework can also predict more complex, context-dependent patterns of choice in a three-alternative risky choice task, where choices and eye movements were subject to attraction and compromise effects. Choices were best described by a gaze-dependent evidence accumulation model in which the subjective values of alternatives are discounted while not fixated. Finally, we performed a systematic search over a large model space, allowing us to evaluate the relative contributions of different forms of gaze dependence and of additional mechanisms not previously considered by gaze-dependent accumulation models. Gaze dependence remained the most important mechanism, but participants with strong attraction effects employed an additional similarity-dependent inhibition mechanism found in other models of multi-alternative, multi-attribute choice.
DOI: 10.1371/journal.pcbi.1010283 | PubMedID: 35793388
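
The core idea of gaze-dependent accumulation can be sketched in a few lines: each alternative's value feeds an evidence accumulator, but its contribution is discounted whenever the alternative is not fixated, so longer-fixated items tend to win the race. The values, discount parameter, noise level, and fixation process below are illustrative assumptions, not the model specification or fitted parameters from the paper.

# Minimal sketch of gaze-dependent evidence accumulation, three alternatives.
# Values, the discount gamma, noise level, and the fixation process are
# illustrative assumptions, not fitted parameters from the paper.
import numpy as np

rng = np.random.default_rng(0)

values = np.array([1.0, 0.8, 0.6])   # subjective values of the alternatives
gamma = 0.3                          # discount applied while not fixated
noise_sd = 0.1                       # accumulation noise per time step
threshold = 10.0                     # evidence level that triggers a choice

evidence = np.zeros(3)
fixation = 0                         # index of the currently fixated item
for t in range(10_000):
    # Occasionally shift gaze to a random alternative.
    if rng.random() < 0.02:
        fixation = rng.integers(3)
    # The fixated item accumulates its full value; others are discounted.
    drift = np.where(np.arange(3) == fixation, values, gamma * values)
    evidence += drift * 0.01 + rng.normal(0.0, noise_sd, size=3)
    if evidence.max() >= threshold:
        break

print(f"choice: alternative {evidence.argmax()}, RT: {t} steps")

Running this repeatedly reproduces the qualitative signature the abstract describes: alternatives that are fixated more often accumulate evidence faster and are chosen more frequently, even at equal value.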