Bryce Allen Bagley
MD Student with Scholarly Concentration in Bioengineering, expected graduation Spring 2026
Bio
I am an MD candidate with a background in physics, systems science & engineering, and computer science, which I apply to the quantitative study of complex biological systems.
My research interests are in the development of novel mathematical and machine learning methods for interpreting and understanding biomedical data. Few things make my ears perk up as much as variants of the phrase, "we have all this data, but aren't quite sure what to do with it."
I am also interested in medical ethics and in mathematics education within the medical community and the biosciences.
Honors & Awards
- Futures in Neurologic Research, American Academy of Neurology (2022-2023)
- Graduate Scholar in Residence, El Centro Chicano y Latino, Stanford University (2020-2021)
- Tau Beta Pi Member, Tau Beta Pi Honor Society (2017-)
- ADVANCE Fellow, Stanford University (2018)
- Boeing Scholar, The Boeing Company (2016-2018)
- Harold P. Brown Engineering Fellowship, Washington University in St. Louis McKelvey School of Engineering (2016-2018)
- Goldwater Scholarship Honorable Mention, Barry Goldwater Scholarship Foundation (2016)
Education & Certifications
- M.S., Stanford University, (Theoretical) Biophysics (2021)
- B.S., Washington University in St. Louis, Systems Engineering (2018)
- B.S., Washington University in St. Louis, Computer Science (2018)
- B.S., WUStL/Whitworth dual-degree program, Biophysics (2018)
Current Research and Scholarly Interests
Complex systems science, biophysics, and machine learning.
All Publications
- Bioprocessing of Surgical Pediatric Brain Tumor Specimens for Genome-Guided Personalized Drug Testing. Oxford University Press, 2023. DOI: 10.1093/neuonc/noad073.039. Web of Science ID: 001023504300040.
- Generative Editing via Convolutional Obscuring (GECO): A Generative Adversarial Network for MRI de-artifacting. Lippincott Williams & Wilkins, 2023. DOI: 10.1212/WNL.0000000000204150. Web of Science ID: 001053672106056.
- Biophysical cybernetics of directed evolution and eco-evolutionary dynamics. arXiv, 2023. DOI: 10.48550/arXiv.2305.03340.
- Generative Editing via Convolutional Obscuring (GECO): A Generative Adversarial Network for MRI de-artifacting. medRxiv, 2022. DOI: 10.1101/2022.09.21.22280206.
- Pre-Synaptic Pool Modification (PSPM): A supervised learning procedure for recurrent spiking neural networks. PLoS ONE, 2020; 15(2): e0229083.
Abstract
Learning synaptic weights of spiking neural network (SNN) models that can reproduce target spike trains from provided neural firing data is a central problem in computational neuroscience and spike-based computing. The discovery of the optimal weight values can be posed as a supervised learning task wherein the weights of the model network are chosen to maximize the similarity between the target spike trains and the model outputs. It is still largely unknown whether optimizing spike train similarity of highly recurrent SNNs produces weight matrices similar to those of the ground truth model. To this end, we propose flexible heuristic supervised learning rules, termed Pre-Synaptic Pool Modification (PSPM), which rely on stochastic weight updates to produce spikes within a short window of the desired times and eliminate spikes outside of this window. PSPM improves spike train similarity for all-to-all SNNs and makes no assumption about the post-synaptic potential of the neurons or the structure of the network, since no gradients are required. We test whether optimizing for spike train similarity entails the discovery of accurate weights and explore the relative contributions of local and homeostatic weight updates. Although PSPM improves similarity between spike trains, the learned weights often differ from the weights of the ground truth model, implying that connectome inference from spike data may require additional constraints on connectivity statistics. We also find that spike train similarity is sensitive to local updates, but other measures of network activity, such as avalanche distributions, can be learned through synaptic homeostasis.
DOI: 10.1371/journal.pone.0229083. PubMed ID: 32092107.
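The abstract above describes PSPM's core heuristic: stochastic, gradient-free weight updates that encourage spikes near desired times and suppress spikes far from them. A minimal sketch of that kind of update rule is below; this is not the published implementation, and the function name, parameters, and pool-selection details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pspm_step(W, i, raster, target, window=3, delta=0.05, p=0.5):
    """One heuristic PSPM-style pass for neuron i (illustrative only).

    W      : (N, N) weights, W[i, j] = synapse from neuron j onto neuron i
    raster : (N, T) binary spike raster produced by the model network
    target : (T,)  desired binary spike train for neuron i
    """
    T = raster.shape[1]

    # Desired spikes the model missed: potentiate a random subset of the
    # pre-synaptic pool (neurons that fired shortly before the desired time).
    for t in np.flatnonzero(target):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        if raster[i, lo:hi].sum() == 0:
            pool = np.flatnonzero(raster[:, max(0, t - window):t].any(axis=1))
            chosen = pool[rng.random(pool.size) < p]  # stochastic selection
            W[i, chosen] += delta

    # Spurious spikes far from any desired time: depress the analogous pool.
    for t in np.flatnonzero(raster[i]):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        if target[lo:hi].sum() == 0:
            pool = np.flatnonzero(raster[:, max(0, t - window):t].any(axis=1))
            chosen = pool[rng.random(pool.size) < p]
            W[i, chosen] -= delta
    return W
```

Because only spike times (not gradients) drive the updates, a rule of this shape makes no assumption about the neurons' post-synaptic potentials, which matches the gradient-free property claimed in the abstract.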
- A Heuristic Approach to Spiking Neural Networks. Washington University Office of Undergraduate Research Digest, 2018; 13(11).
- Inverse Design of Optimal Nonlinear Photonic Structures. MIRTHE Research Conference, 2016.
- A Theoretical Study of Binding Dynamics in TLX NR2E1 Ligand-Binding Domain. UCSF SRTP Research Symposium, 2015.
- Increasing the Pepsin Resistance of a Prolyl Endopeptidase: Lessons Learned in the Prediction of Mutation Sites. Spokane Intercollegiate Research Conference, 2014.