Bio


Dr. Brian Trippe is an assistant professor in the Department of Statistics at Stanford University, with an affiliation in Stanford Data Science.

In his research, Dr. Trippe develops probabilistic machine learning methods to address challenges in biotechnology and medicine. Recently, his focus has been on generative modeling and inference algorithms for protein engineering.

Before joining Stanford, Dr. Trippe was a postdoctoral fellow at Columbia University in the Department of Statistics, and a visiting researcher at the Institute for Protein Design at the University of Washington.

Honors & Awards


  • Summa cum laude, Columbia College (2016)
  • NSF Graduate Research Fellowship, National Science Foundation (2018-2022)
  • Euretta J. Kellett Fellowship, support from Columbia College for study at the University of Cambridge (2016-2017)

Professional Education


  • Ph.D., Massachusetts Institute of Technology, Computational and Systems Biology (2022)
  • M.Phil., University of Cambridge, Machine Learning (2017)
  • B.A., summa cum laude, Columbia College, Biochemistry, Computer Science (2016)

Patents


  • Brian Trippe. "Unbiased sorting and sequencing of objects via randomized gating schemes." U.S. Patent Application US20230160824A1, Microsoft Corporation, May 25, 2023
  • Michelle Therese Hoerner Dimon, Marc Berndl, Marc Adlai Coram, Brian Trippe, Patrick F. Riley, Philip Charles Nelson. "Neural network for processing aptamer data." U.S. Patent US10546650B2, Google, Jan 28, 2020

All Publications


  • Calibrating Generative Models to Distributional Constraints. Smith, H. D., Diamant, N. L., Trippe, B. L. arXiv. 2025
  • MotifBench: A standardized protein design benchmark for motif-scaffolding problems. Zheng, Z., Zhang, B., Didi, K., Yang, K. K., Yim, J., Watson, J., Chen, H., Trippe, B. L. arXiv. 2025
  • Predicting mutational effects on protein binding from folding energy. Arthur, D. L., Householder, K., Wu, F., Thrun, S., Garcia, K. C., Trippe, B. L. International Conference on Machine Learning. 2025
  • De novo design of protein structure and function with RFdiffusion. Watson, J. L., Juergens, D., Bennett, N. R., Trippe, B. L., Yim, J., Eisenach, H. E., Ahern, W., Borst, A. J., Ragotte, R. J., Milles, L. F., Wicky, B. I., Hanikel, N., Pellock, S. J., Courbet, A., Sheffler, W., Wang, J., Venkatesh, P., Sappington, I., Torres, S. V., Lauko, A., De Bortoli, V., Mathieu, E., Ovchinnikov, S., Barzilay, R., Jaakkola, T. S., DiMaio, F., Baek, M., Baker, D. Nature. 2023; 620 (7976): 1089-1100

    Abstract

    There has been considerable recent progress in designing new proteins using deep-learning methods [1-9]. Despite this progress, a general deep-learning framework for protein design that enables solution of a wide range of design challenges, including de novo binder design and design of higher-order symmetric architectures, has yet to be described. Diffusion models [10,11] have had considerable success in image and language generative modelling but limited success when applied to protein modelling, probably due to the complexity of protein backbone geometry and sequence-structure relationships. Here we show that by fine-tuning the RoseTTAFold structure prediction network on protein structure denoising tasks, we obtain a generative model of protein backbones that achieves outstanding performance on unconditional and topology-constrained protein monomer design, protein binder design, symmetric oligomer design, enzyme active site scaffolding and symmetric motif scaffolding for therapeutic and metal-binding protein design. We demonstrate the power and generality of the method, called RoseTTAFold diffusion (RFdiffusion), by experimentally characterizing the structures and functions of hundreds of designed symmetric assemblies, metal-binding proteins and protein binders. The accuracy of RFdiffusion is confirmed by the cryogenic electron microscopy structure of a designed binder in complex with influenza haemagglutinin that is nearly identical to the design model. In a manner analogous to networks that produce images from user-specified inputs, RFdiffusion enables the design of diverse functional proteins from simple molecular specifications.

    DOI: 10.1038/s41586-023-06415-8 · PubMedID: 37433327 · PMCID: PMC10468394

  • Leveraging polygenic enrichments of gene features to predict genes underlying complex traits and diseases. Weeks, E. M., Ulirsch, J. C., Cheng, N. Y., Trippe, B. L., Fine, R. S., Miao, J., Patwardhan, T. A., Kanai, M., Nasser, J., Fulco, C. P., Tashman, K. C., Aguet, F., Li, T., Ordovas-Montanes, J., Smillie, C. S., Biton, M., Shalek, A. K., Ananthakrishnan, A. N., Xavier, R. J., Regev, A., Gupta, R. M., Lage, K., Ardlie, K. G., Hirschhorn, J. N., Lander, E. S., Engreitz, J. M., Finucane, H. K. Nature Genetics. 2023

    Abstract

    Genome-wide association studies (GWASs) are a valuable tool for understanding the biology of complex human traits and diseases, but associated variants rarely point directly to causal genes. In the present study, we introduce a new method, polygenic priority score (PoPS), that learns trait-relevant gene features, such as cell-type-specific expression, to prioritize genes at GWAS loci. Using a large evaluation set of genes with fine-mapped coding variants, we show that PoPS and the closest gene individually outperform other gene prioritization methods, but observe the best overall performance by combining PoPS with orthogonal methods. Using this combined approach, we prioritize 10,642 unique gene-trait pairs across 113 complex traits and diseases with high precision, finding not only well-established gene-trait relationships but nominating new genes at unresolved loci, such as LGR4 for estimated glomerular filtration rate and CCR7 for deep vein thrombosis. Overall, we demonstrate that PoPS provides a powerful addition to the gene prioritization toolbox.

    DOI: 10.1038/s41588-023-01443-6 · PubMedID: 37443254 · PMCID: 5501872

  • Confidently Comparing Estimates with the c-value. Trippe, B. L., Deshpande, S. K., Broderick, T. Journal of the American Statistical Association. 2024; 119 (546): 983-994
  • Gaussian Processes at the Helm(holtz): A More Fluid Model for Ocean Currents. Berlinghieri, R., Trippe, B. L., Burt, D. R., Giordano, R., Srinivasan, K., Ozgokmen, T., Xia, J., Broderick, T. International Conference on Machine Learning (ICML). 2023
  • Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem. Trippe, B. L., Yim, J., Tischer, D., Broderick, T., Baker, D., Barzilay, R., Jaakkola, T. International Conference on Learning Representations. 2023
  • Practical and Asymptotically Exact Conditional Sampling in Diffusion Models. Wu, L., Trippe, B. L., Naesseth, C. A., Blei, D. M., Cunningham, J. P. Neural Information Processing Systems (NeurIPS). 2023
  • SE(3) diffusion model with application to protein backbone generation. Yim, J., Trippe, B. L., De Bortoli, V., Mathieu, E., Doucet, A., Barzilay, R., Jaakkola, T. International Conference on Machine Learning (ICML). 2023
  • Randomized gates eliminate bias in sort-seq assays. Trippe, B. L., Huang, B., DeBenedictis, E. A., Coventry, B., Bhattacharya, N., Yang, K. K., Baker, D., Crawford, L. Protein Science. 2022; 31 (9)

    DOI: 10.1002/pro.4401 · Web of Science ID: 000847526000001

  • Many Processors, Little Time: MCMC for Partitions via Optimal Transport Couplings. Nguyen, T. D., Trippe, B. L., Broderick, T. International Conference on Artificial Intelligence and Statistics (AISTATS). 2022
  • For high-dimensional hierarchical models, consider exchangeability of effects across covariates instead of across datasets. Trippe, B. L., Finucane, H. K., Broderick, T. Neural Information Processing Systems (NeurIPS). 2021
  • LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data Approximations. Trippe, B. L., Huggins, J. H., Agrawal, R., Broderick, T. International Conference on Machine Learning (ICML). 2019
  • The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions. Agrawal, R., Huggins, J. H., Trippe, B., Broderick, T. International Conference on Machine Learning (ICML). 2019
  • Inhibition of cell fate repressors secures the differentiation of the touch receptor neurons of Caenorhabditis elegans. Zheng, C., Jin, F. Q., Trippe, B. L., Wu, J., Chalfie, M. Development. 2018; 145 (22)

    Abstract

    Terminal differentiation generates the specialized features and functions that allow postmitotic cells to acquire their distinguishing characteristics. This process is thought to be controlled by transcription factors called 'terminal selectors' that directly activate a set of downstream effector genes. In Caenorhabditis elegans, the differentiation of both the mechanosensory touch receptor neurons (TRNs) and the multidendritic nociceptor FLP neurons uses the terminal selectors UNC-86 and MEC-3. The FLP neurons fail to activate TRN genes, however, because a complex of two transcriptional repressors (EGL-44/EGL-46) prevents their expression. Here, we show that the ZEB family transcriptional factor ZAG-1 promotes TRN differentiation not by activating TRN genes but by preventing the expression of EGL-44/EGL-46. As EGL-44/EGL-46 also inhibits the production of ZAG-1, these proteins form a bistable, negative-feedback loop that regulates the choice between the two neuronal fates.

    DOI: 10.1242/dev.168096 · PubMedID: 30291162 · PMCID: PMC6262790

  • Overpruning in variational Bayesian neural networks. Trippe, B. L., Turner, R. E. NeurIPS Workshop on Advances in Approximate Bayesian Inference. 2017