Professional Education


  • Doctor of Philosophy, Columbia University (2018)
  • Master of Philosophy, Columbia University (2016)
  • Master of Science, Columbia University (2012)
  • Bachelor of Arts, Harvard University (2009)

All Publications


  • On the Downstream Performance of Compressed Word Embeddings. Advances in Neural Information Processing Systems May, A., Zhang, J., Dao, T., Ré, C. 2019; 32: 11782–93

    Abstract

    Compressing word embeddings is important for deploying NLP models in memory-constrained settings. However, understanding what makes compressed embeddings perform well on downstream tasks is challenging: existing measures of compression quality often fail to distinguish between embeddings that perform well and those that do not. We thus propose the eigenspace overlap score as a new measure. We relate the eigenspace overlap score to downstream performance by developing generalization bounds for the compressed embeddings in terms of this score, in the context of linear and logistic regression. We then show that we can lower bound the eigenspace overlap score for a simple uniform quantization compression method, helping to explain the strong empirical performance of this method. Finally, we show that by using the eigenspace overlap score as a selection criterion between embeddings drawn from a representative set we compressed, we can efficiently identify the better performing embedding with up to 2x lower selection error rates than the next best measure of compression quality, and avoid the cost of training a model for each task of interest.

    PubMedID: 31885428
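
    A minimal Python sketch of the eigenspace overlap score appears under Code Sketches after the publication list below.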

  • Low-Precision Random Fourier Features for Memory-Constrained Kernel Approximation. Proceedings of Machine Learning Research Zhang, J., May, A., Dao, T., Ré, C. 2019; 89: 1264–74

    Abstract

    We investigate how to train kernel approximation methods that generalize well under a memory budget. Building on recent theoretical work, we define a measure of kernel approximation error which we find to be more predictive of the empirical generalization performance of kernel approximation methods than conventional metrics. An important consequence of this definition is that a kernel approximation matrix must be high rank to attain close approximation. Because storing a high-rank approximation is memory intensive, we propose using a low-precision quantization of random Fourier features (LP-RFFs) to build a high-rank approximation under a memory budget. Theoretically, we show quantization has a negligible effect on generalization performance in important settings. Empirically, we demonstrate across four benchmark datasets that LP-RFFs can match the performance of full-precision RFFs and the Nyström method, with 3x-10x and 50x-460x less memory, respectively.

    PubMedID: 31777846
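
    A minimal Python sketch of low-precision random Fourier features appears under Code Sketches after the publication list below.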

  • Kernel Approximation Methods for Speech Recognition. Journal of Machine Learning Research May, A., Garakani, A., Lu, Z., Guo, D., Liu, K., Bellet, A., Fan, L., Collins, M., Hsu, D., Kingsbury, B., Picheny, M., Sha, F. 2019; 20
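
Code Sketches


  • Eigenspace overlap score. A minimal Python sketch of an eigenspace-overlap-style score between an embedding matrix and its compressed counterpart, in the spirit of the abstract of "On the Downstream Performance of Compressed Word Embeddings" above. The normalization by the larger of the two dimensions and the toy quantizer in the usage example are illustrative assumptions, not the paper's exact definitions.

        import numpy as np

        def eigenspace_overlap_score(X, X_compressed):
            # Orthonormal bases for the column spans (left singular vectors).
            U, _, _ = np.linalg.svd(X, full_matrices=False)
            U_c, _, _ = np.linalg.svd(X_compressed, full_matrices=False)
            # The squared Frobenius norm of U^T U_c measures how well the two
            # subspaces align; dividing by the larger dimension keeps the score
            # in [0, 1] (normalization chosen here for illustration).
            overlap = np.linalg.norm(U.T @ U_c, ord="fro") ** 2
            return overlap / max(U.shape[1], U_c.shape[1])

        # Toy usage: compare full-precision embeddings with a crudely quantized copy.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((1000, 50))
        X_q = np.round(X * 4) / 4  # uniform quantization to a 0.25-wide grid
        print(eigenspace_overlap_score(X, X_q))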
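
  • Low-precision random Fourier features. A minimal Python sketch of random Fourier features for an RBF kernel with a simple uniform quantization of the features, in the spirit of the LP-RFF abstract above. The deterministic rounding and the choice of quantization range are illustrative assumptions and may differ from the paper's exact scheme.

        import numpy as np

        def lp_rff(X, n_features=1024, gamma=1.0, n_bits=8, seed=0):
            # Standard random Fourier features for k(x, y) = exp(-gamma * ||x - y||^2):
            # z(x) = sqrt(2 / D) * cos(x W + b), with W ~ N(0, 2 * gamma * I) and
            # b ~ Uniform[0, 2 * pi].
            rng = np.random.default_rng(seed)
            W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_features))
            b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
            Z = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

            # Quantize each feature to 2**n_bits uniformly spaced levels over the
            # range the features can take (deterministic rounding, for illustration).
            lo, hi = -np.sqrt(2.0 / n_features), np.sqrt(2.0 / n_features)
            levels = 2 ** n_bits - 1
            Z_q = np.round((Z - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo
            return Z_q

        # Usage: Z_q @ Z_q.T approximates the exact RBF kernel matrix on X.
        X = np.random.default_rng(1).standard_normal((5, 10))
        Z_q = lp_rff(X, n_features=4096, gamma=0.5)
        K_approx = Z_q @ Z_q.T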