Bio


I am a PhD student focusing on machine learning, with a particular eye towards problems relevant to oncology. If anything in my research description piques your interest, please do not hesitate to reach out. I'm always eager to forge new collaborations.

Education & Certifications


  • B.S., Washington University in St. Louis, Systems Engineering (2018)
  • B.S., Washington University in St. Louis, Computer Science (2018)
  • B.S., Whitworth University, Biophysics (2018)

Current Research and Scholarly Interests


I am a PhD candidate in Biophysics at Stanford University, working with Olivier Gevaert in the Department of Biomedical Data Science and the Center for Artificial Intelligence in Medicine and Imaging.

My research interests are broadly in the development of theoretical and applied solutions to problems at the intersection of machine learning and medicine, with a particular eye towards data fusion and problems in cancer diagnosis and treatment.

Previously, I have worked in areas including theoretical biophysics (with Shamit Kachru at Stanford), algorithms for computational neuroscience (with Ralf Wessel and Benjamin Moseley at WUStL), inverse optimization (with Alejandro Rodriguez at Princeton), and computational drug design (with Matt Jacobson at UCSF).

I graduated from the dual degree program at Washington University in St. Louis in 2018 with three B.S. degrees: in physics (with a biophysics concentration), computer science, and systems engineering.

All Publications


  • Pre-Synaptic Pool Modification (PSPM): A supervised learning procedure for recurrent spiking neural networks. Bagley, B. A., Bordelon, B., Moseley, B., Wessel, R. PLoS ONE. 2020; 15(2): e0229083.

    Abstract

    Learning synaptic weights of spiking neural network (SNN) models that can reproduce target spike trains from provided neural firing data is a central problem in computational neuroscience and spike-based computing. The discovery of the optimal weight values can be posed as a supervised learning task wherein the weights of the model network are chosen to maximize the similarity between the target spike trains and the model outputs. It is still largely unknown whether optimizing spike train similarity of highly recurrent SNNs produces weight matrices similar to those of the ground truth model. To this end, we propose flexible heuristic supervised learning rules, termed Pre-Synaptic Pool Modification (PSPM), that rely on stochastic weight updates in order to produce spikes within a short window of the desired times and eliminate spikes outside of this window. PSPM improves spike train similarity for all-to-all SNNs and makes no assumption about the post-synaptic potential of the neurons or the structure of the network, since no gradients are required. We test whether optimizing for spike train similarity entails the discovery of accurate weights and explore the relative contributions of local and homeostatic weight updates. Although PSPM improves similarity between spike trains, the learned weights often differ from the weights of the ground truth model, implying that connectome inference from spike data may require additional constraints on connectivity statistics. We also find that spike train similarity is sensitive to local updates, but other measures of network activity, such as avalanche distributions, can be learned through synaptic homeostasis.
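
    The abstract leaves out the concrete update rules, so the following is only a minimal sketch of the mechanism it describes (gradient-free, stochastic updates that add spikes near desired times and suppress spikes elsewhere); the function name, acceptance-window logic, and update magnitudes are illustrative assumptions rather than the published procedure.

        import numpy as np

        rng = np.random.default_rng(0)

        def pspm_style_update(W, model_spikes, target_spikes, window=5, lr=0.01):
            """One heuristic pass over binary spike rasters (illustrative only).

            W            : (N, N) float weights, W[i, j] = synapse from neuron j to neuron i
            model_spikes : (N, T) binary raster of the model network's spikes
            target_spikes: (N, T) binary raster of the desired spikes
            window       : half-width (in time steps) of the acceptance window
            lr           : scale of the stochastic weight updates
            """
            N, T = target_spikes.shape
            for i in range(N):                  # post-synaptic neuron
                for t in range(T):
                    lo, hi = max(0, t - window), min(T, t + window + 1)
                    # Pre-synaptic pool: neurons that spiked shortly before t.
                    pool = model_spikes[:, lo:t + 1].any(axis=1)
                    if target_spikes[i, t] and not model_spikes[i, lo:hi].any():
                        # A desired spike is missing: randomly potentiate the pool.
                        W[i, pool] += lr * rng.random(pool.sum())
                    elif model_spikes[i, t] and not target_spikes[i, lo:hi].any():
                        # A spike falls outside every acceptance window: depress the pool.
                        W[i, pool] -= lr * rng.random(pool.sum())
            return W

    Per the abstract, local updates like these are complemented by homeostatic ones in the full procedure, and the network would be re-simulated between passes; neither is shown here.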

    DOI: 10.1371/journal.pone.0229083

    PubMedID: 32092107