Bio


I'm a first-year CS Ph.D. student at Stanford, advised by Prof. Jiajun Wu and affiliated with the Stanford Vision and Learning Lab. My research lies at the intersection of Graphics, 3D Vision, and Machine Learning. Specifically, I'm currently interested in physical scene understanding by inverting graphics engines.

Previously, I received my bachelor's degree in Computer Science from Zhejiang University in 2023, with an honors degree from Chu Kochen Honors College. During my undergraduate studies, I was fortunate to work closely with Prof. Xiaowei Zhou, Prof. Sida Peng, and Prof. Jiajun Wu on several research projects.

You can find more information on my homepage: https://chen-geng.com

Education & Certifications


  • B.Eng. (Honours), Zhejiang University, Computer Science (2023)

All Publications


  • Implicit Neural Representations With Structured Latent Codes for Human Body Modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence. Peng, S., Geng, C., Zhang, Y., Xu, Y., Wang, Q., Shuai, Q., Zhou, X., Bao, H. 2023; 45(8): 9895-9907

    Abstract

    This paper addresses the challenge of novel view synthesis for a human performer from a very sparse set of camera views. Some recent works have shown that learning implicit neural representations of 3D scenes achieves remarkable view synthesis quality given dense input views. However, the representation learning will be ill-posed if the views are highly sparse. To solve this ill-posed problem, our key idea is to integrate observations over video frames. To this end, we propose Neural Body, a new human body representation which assumes that the learned neural representations at different frames share the same set of latent codes anchored to a deformable mesh, so that the observations across frames can be naturally integrated. The deformable mesh also provides geometric guidance for the network to learn 3D representations more efficiently. Furthermore, we combine Neural Body with implicit surface models to improve the learned geometry. To evaluate our approach, we perform experiments on both synthetic and real-world data, which show that our approach outperforms prior works by a large margin on novel view synthesis and 3D reconstruction. We also demonstrate the capability of our approach to reconstruct a moving person from a monocular video on the People-Snapshot dataset.

    DOI: 10.1109/TPAMI.2023.3245815

    PubMedID: 37027766

  • Tree-Structured Shading Decomposition. Geng, C., Yu, H., Zhang, S., Agrawala, M., Wu, J. IEEE Computer Society. 2023: 488-498
  • Learning Neural Volumetric Representations of Dynamic Humans in Minutes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Geng, C., Peng, S., Xu, Z., Bao, H., Zhou, X. 2023: 8759-8770
  • Novel View Synthesis of Human Interactions from Sparse Multi-view Videos. SIGGRAPH '22: ACM SIGGRAPH 2022 Conference Proceedings. Shuai, Q., Geng, C., Fang, Q., Peng, S., Shen, W., Zhou, X., Bao, H. 2022

    DOI: 10.1145/3528233.3530704