Bio


I am a PhD student in the Department of Computer Science. I am interested in how the mammalian brain is functionally and spatially organized over the course of evolution, gestation, and development, and in the mechanistic constraints, inductive biases, and environmental statistics that shape that organization. To generate hypotheses about these questions, I use deep artificial neural networks, studying how optimizing them under different constraints affects how well they match, both functionally and spatially, the biological system being modeled. Prior to starting my PhD, I earned a master's degree in computer science from Stanford University and a bachelor's degree in computer science from the University of California, San Diego. In my free time, I like to be outdoors, read and write, do art, and think about circles.

Education & Certifications


  • Master of Science, Stanford University, Computer Science (2025)
  • Bachelor of Science, University of California, San Diego, Computer Science (2023)

Research Interests


  • Brain and Learning Sciences
  • Data Sciences
  • Psychology

Current Research and Scholarly Interests


My research interests lie in developing neuroconnectionist mechanistic models of the brain that deepen our understanding of neural computation and representation. I aim to explore how physiological and anatomical constraints shape cortical topography and, in turn, scaffold development. I am particularly intrigued when behaviors emerge from mechanistic models that were never explicitly optimized to produce them.

All Publications


  • Shah, Y., Gonzalez, C., Abbasi, M. H., Zhao, Q., Pohl, K. M., Adeli, E. Confounder-Free Continual Learning via Recursive Feature Normalization. Proceedings of Machine Learning Research. 2025; 267: 54112-54142

    Abstract

    Confounders are extraneous variables that affect both the input and the target, resulting in spurious correlations and biased predictions. There are recent advances in dealing with or removing confounders in traditional models, such as metadata normalization (MDN), where the distribution of the learned features is adjusted based on the study confounders. However, in the context of continual learning, where a model learns continuously from new data over time without forgetting, learning feature representations that are invariant to confounders remains a significant challenge. To remove their influence from intermediate feature representations, we introduce the Recursive MDN (R-MDN) layer, which can be integrated into any deep learning architecture, including vision transformers, and at any model stage. R-MDN performs statistical regression via the recursive least squares algorithm to maintain and continually update an internal model state with respect to changing distributions of data and confounding variables. Our experiments demonstrate that R-MDN promotes equitable predictions across population groups, both within static learning and across different stages of continual learning, by reducing catastrophic forgetting caused by confounder effects changing over time.
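
    To illustrate the mechanism the abstract describes, below is a minimal sketch of recursive-least-squares (RLS) residualization: a running regression of features on confounders whose coefficients are updated with each new sample, so the estimated confounder effect can be subtracted even as data distributions drift. This is an illustrative assumption of how such a layer could look, not the paper's released R-MDN implementation; the class and parameter names here are hypothetical.

    ```python
    import numpy as np

    class RecursiveResidualizer:
        """Hypothetical R-MDN-style sketch: regress features on confounders
        with recursive least squares and return the residual.
        Illustrative only; not the authors' implementation."""

        def __init__(self, n_confounders, n_features, lam=1.0, delta=1e3):
            d = n_confounders + 1                 # +1 for an intercept column
            self.B = np.zeros((d, n_features))    # running regression coefficients
            self.P = np.eye(d) * delta            # inverse-covariance estimate
            self.lam = lam                        # forgetting factor; <1 tracks drift faster

        def update_and_residualize(self, c, f):
            """c: (n_confounders,) confounder values; f: (n_features,) features.
            Updates the running regression, then subtracts the confounder fit."""
            x = np.append(np.asarray(c, float), 1.0)        # confounders + intercept
            Px = self.P @ x
            k = Px / (self.lam + x @ Px)                    # RLS gain vector
            self.B += np.outer(k, f - x @ self.B)           # coefficient update
            self.P = (self.P - np.outer(k, Px)) / self.lam  # inverse-covariance update
            return f - x @ self.B                           # confounder-free residual

    # Toy usage: a feature stream contaminated by one confounder.
    rls = RecursiveResidualizer(n_confounders=2, n_features=64)
    rng = np.random.default_rng(0)
    for _ in range(1000):
        c = rng.normal(size=2)
        f = rng.normal(size=64) + 0.5 * c[0]    # confounder leaks into features
        clean = rls.update_and_residualize(c, f)
    ```

    Because the coefficients and inverse covariance are updated recursively rather than refit from scratch on each dataset, the same removal step can be applied across sequential stages of a continual-learning run, which is the setting the paper targets.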

    DOI: 10.1609/aaai.v32i1.11792 · PubMedID: 41574232 · PubMedCentralID: PMC12823023