I am a second-year Master's student in the ICME data science program. Prior to joining Stanford, I studied mathematics and computer science at École Centrale Paris. My research interests include computer vision, natural language processing, and, more specifically, multimodal analysis. My previous research focused on cross-modal information retrieval (image annotation and automated text illustration). I am currently working on information extraction from semi-structured data (PDF tables) within the Hazy Research group led by Prof. Ré at Stanford.

All Publications

  • Hyperbolic Graph Convolutional Neural Networks. Chami, I., Ying, R., Ré, C., Leskovec, J. Advances in Neural Information Processing Systems (NeurIPS). 2019; 32: 4869–80


    Graph convolutional neural networks (GCNs) embed nodes in a graph into Euclidean space, which has been shown to incur a large distortion when embedding real-world graphs with scale-free or hierarchical structure. Hyperbolic geometry offers an exciting alternative, as it enables embeddings with much smaller distortion. However, extending GCNs to hyperbolic geometry presents several unique challenges because it is not clear how to define neural network operations, such as feature transformation and aggregation, in hyperbolic space. Furthermore, since input features are often Euclidean, it is unclear how to transform the features into hyperbolic embeddings with the right amount of curvature. Here we propose Hyperbolic Graph Convolutional Neural Network (HGCN), the first inductive hyperbolic GCN that leverages both the expressiveness of GCNs and hyperbolic geometry to learn inductive node representations for hierarchical and scale-free graphs. We derive GCN operations in the hyperboloid model of hyperbolic space and map Euclidean input features to embeddings in hyperbolic spaces with a different trainable curvature at each layer. Experiments demonstrate that HGCN learns embeddings that preserve hierarchical structure and leads to improved performance when compared to Euclidean analogs, even with very low-dimensional embeddings: compared to state-of-the-art GCNs, HGCN achieves an error reduction of up to 63.1% in ROC AUC for link prediction and of up to 47.5% in F1 score for node classification, also improving the state of the art on the Pubmed dataset.

    PubMedID: 32256024
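The abstract above mentions mapping Euclidean input features onto the hyperboloid model of hyperbolic space. A minimal sketch of that lift, using the exponential map at the hyperboloid's origin for curvature -c, is shown below; the function names and API are illustrative assumptions, not the authors' code.

```python
import numpy as np

def expmap_origin(v_euclidean, c=1.0):
    """Lift a Euclidean feature vector onto the hyperboloid of curvature -c
    via the exponential map at the origin (illustrative sketch only)."""
    sqrt_c = np.sqrt(c)
    # A tangent vector at the origin has a zero time-like (first) coordinate.
    v = np.concatenate([[0.0], v_euclidean])
    norm = np.linalg.norm(v_euclidean)
    origin = np.zeros_like(v)
    origin[0] = 1.0 / sqrt_c  # origin o = (1/sqrt(c), 0, ..., 0)
    if norm == 0:
        return origin
    return (np.cosh(sqrt_c * norm) * origin
            + np.sinh(sqrt_c * norm) * v / (sqrt_c * norm))

def minkowski_inner(x, y):
    """Minkowski (Lorentzian) inner product <x, y>_L."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

x = expmap_origin(np.array([0.3, -0.5, 0.2]), c=1.0)
# Every lifted point satisfies <x, x>_L = -1/c, i.e. it lies on the hyperboloid.
print(np.isclose(minkowski_inner(x, x), -1.0))  # True
```

Because the curvature c enters the map directly, making it a trainable parameter per layer (as the paper does) only requires treating `c` as a learnable scalar.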

  • Referring Relationships. Krishna, R., Chami, I., Bernstein, M., Fei-Fei, L. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018: 6867–76