I am a PhD student focusing on theoretical and mathematical biophysics, with a particular eye towards biological optimization problems. If anything in my research description piques your interest, please do not hesitate to reach out. I'm always eager to forge new collaborations.
Education & Certifications
B.S., Washington University in St. Louis, Systems Engineering (2018)
B.S., Washington University in St. Louis, Computer Science (2018)
B.S., Whitworth University, Biophysics (2018)
Current Research and Scholarly Interests
My research interests are in theoretical and mathematical biophysics, with a focus on biological optimization. Biological systems routinely and effectively solve massively complex optimization problems in evolution, learning, optimal control, cell signalling, and other areas vital to life's success. Though excellent work has been done on all of these topics, many of these processes still lack thorough quantitative descriptions; there remains a great deal to explore.
In my past and current research, I draw on tools from physics, mathematics, systems engineering, computer science, and optimization theory. I am particularly excited by problems in which these fields can engage in cross-talk, especially when there has been little such interaction in the past.
My current research focuses on modeling evolutionary processes in human disease, such as the emergence of antibiotic-resistant bacteria and the progression of cancer. I am especially excited by opportunities where such theoretical efforts can be readily applied to solving urgent medical problems.
Though my current research is focused on the areas described above, in the past I've worked on such diverse areas as clinical data analysis (with Dr. Wade Shrader of Phoenix Children's Hospital), computational drug design (with Dr. Matthew Jacobson of UCSF), quantum optical devices (with Dr. Alejandro Rodriguez of Princeton), and algorithms for optimizing spiking neural networks (with Dr. Benjamin Moseley of Carnegie Mellon and Dr. Ralf Wessel of Washington University St. Louis).
Pre-Synaptic Pool Modification (PSPM): A supervised learning procedure for recurrent spiking neural networks.
PLOS ONE. 2020; 15 (2): e0229083
Learning synaptic weights of spiking neural network (SNN) models that can reproduce target spike trains from provided neural firing data is a central problem in computational neuroscience and spike-based computing. The discovery of the optimal weight values can be posed as a supervised learning task wherein the weights of the model network are chosen to maximize the similarity between the target spike trains and the model outputs. It is still largely unknown whether optimizing spike train similarity of highly recurrent SNNs produces weight matrices similar to those of the ground truth model. To this end, we propose flexible heuristic supervised learning rules, termed Pre-Synaptic Pool Modification (PSPM), that rely on stochastic weight updates in order to produce spikes within a short window of the desired times and eliminate spikes outside of this window. PSPM improves spike train similarity for all-to-all SNNs and makes no assumption about the post-synaptic potential of the neurons or the structure of the network, since no gradients are required. We test whether optimizing for spike train similarity entails the discovery of accurate weights and explore the relative contributions of local and homeostatic weight updates. Although PSPM improves similarity between spike trains, the learned weights often differ from the weights of the ground truth model, implying that connectome inference from spike data may require additional constraints on connectivity statistics. We also find that spike train similarity is sensitive to local updates, but other measures of network activity, such as avalanche distributions, can be learned through synaptic homeostasis.
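The core idea of the abstract (stochastically potentiating or depressing pools of presynaptic weights depending on whether a model spike falls within a short window of a target spike) can be illustrated with a toy sketch. This is not the published PSPM algorithm: the discrete-time leaky integrate-and-fire model, the random pool-selection rule, and all names and parameters (`simulate_lif`, `pspm_like_update`, `window`, `lr`) are illustrative assumptions.

```python
import numpy as np

def simulate_lif(W, inputs, v_thresh=1.0, leak=0.9):
    """Simulate a simple discrete-time leaky integrate-and-fire network.
    W[i, j] is the synaptic weight from neuron j to neuron i;
    inputs is an (n_neurons, n_timesteps) external drive."""
    n, T = inputs.shape
    v = np.zeros(n)
    spikes = np.zeros((n, T), dtype=bool)
    for t in range(T):
        # Recurrent drive from spikes at the previous time step.
        recurrent = W @ spikes[:, t - 1] if t > 0 else 0.0
        v = leak * v + inputs[:, t] + recurrent
        fired = v >= v_thresh
        spikes[:, t] = fired
        v[fired] = 0.0  # reset membrane potential after a spike
    return spikes

def pspm_like_update(W, model_spikes, target_spikes, window=2, lr=0.05, rng=None):
    """One heuristic, gradient-free update pass in the spirit of PSPM:
    for each target spike with no model spike within +/- window steps,
    potentiate a random pool of presynaptic weights; for each model spike
    with no nearby target spike, depress a random pool."""
    rng = np.random.default_rng() if rng is None else rng
    n, T = target_spikes.shape
    W = W.copy()
    for i in range(n):
        for t in range(T):
            lo, hi = max(0, t - window), t + window + 1
            near_model = model_spikes[i, lo:hi].any()
            near_target = target_spikes[i, lo:hi].any()
            if target_spikes[i, t] and not near_model:
                # Missing spike: strengthen a random presynaptic pool.
                pool = rng.random(n) < 0.5
                W[i, pool] += lr
            elif model_spikes[i, t] and not near_target:
                # Extra spike: weaken a random presynaptic pool.
                pool = rng.random(n) < 0.5
                W[i, pool] -= lr
    return W
```

In an actual fitting loop one would alternate `simulate_lif` and `pspm_like_update`, tracking a spike train similarity measure until it plateaus; as the abstract notes, improved similarity does not by itself guarantee recovery of the ground-truth weights.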
DOI: 10.1371/journal.pone.0229083
PubMedID: 32092107