Professional Education


  • Ph.D., Harvard University (2022)
  • DMD, MS, Sichuan University (2014)

All Publications


  • Deep learning-enabled 3D multimodal fusion of cone-beam CT and intraoral mesh scans for clinically applicable tooth-bone reconstruction. Liu, J., Hao, J., Lin, H., Pan, W., Yang, J., Feng, Y., Wang, G., Li, J., Jin, Z., Zhao, Z., Liu, Z. Patterns. 2023; 4(9): 100825

    Abstract

    High-fidelity three-dimensional (3D) models of tooth-bone structures are valuable for virtual dental treatment planning; however, they require integrating data from cone-beam computed tomography (CBCT) and intraoral scans (IOS) using methods that are either error-prone or time-consuming. Hence, this study presents Deep Dental Multimodal Fusion (DDMF), an automatic multimodal framework that reconstructs 3D tooth-bone structures from CBCT and IOS. Specifically, the DDMF framework comprises CBCT and IOS segmentation modules as well as a multimodal reconstruction module with novel pixel representation learning architectures, prior knowledge-guided losses, and geometry-based 3D fusion techniques. Experiments on large-scale real-world datasets revealed that DDMF achieves superior segmentation performance on both CBCT and IOS data and a 0.17 mm average symmetric surface distance (ASSD) for 3D fusion, with a substantial reduction in processing time. Additionally, clinical applicability studies have demonstrated DDMF's potential for accurately simulating tooth-bone structures throughout the orthodontic treatment process. (A minimal sketch of the ASSD metric follows this entry.)

    DOI: 10.1016/j.patter.2023.100825

    PubMed ID: 37720330
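
    A minimal sketch (not the authors' released code) of the average symmetric surface distance (ASSD) metric reported above, assuming the fused and reference surfaces are available as 3D point sets sampled from the meshes:

      import numpy as np
      from scipy.spatial import cKDTree

      def assd(points_a: np.ndarray, points_b: np.ndarray) -> float:
          """Average symmetric surface distance between (N, 3) and (M, 3) point sets."""
          d_ab, _ = cKDTree(points_b).query(points_a)  # nearest-point distances, A -> B
          d_ba, _ = cKDTree(points_a).query(points_b)  # nearest-point distances, B -> A
          return (d_ab.sum() + d_ba.sum()) / (len(points_a) + len(points_b))

    Lower is better: the 0.17 mm figure above means that sampled points on the reconstructed surface lie, on average, well under a millimeter from the reference surface.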

  • Hierarchical Self-Supervised Learning for 3D Tooth Segmentation in Intra-Oral Mesh Scans. Liu, Z., He, X., Wang, H., Xiong, H., Zhang, Y., Wang, G., Hao, J., Feng, Y., Zhu, F., Hu, H. IEEE Transactions on Medical Imaging. 2023; 42(2): 467-480

    Abstract

    Accurately delineating individual teeth and the gingiva in three-dimensional (3D) intraoral scan (IOS) mesh data plays a pivotal role in many digital dental applications, e.g., orthodontics. Recent research shows that deep learning-based methods can achieve promising results for 3D tooth segmentation; however, most of them rely on high-quality labeled datasets, which are usually small in scale because annotating IOS meshes requires intensive human effort. In this paper, we propose a novel self-supervised learning framework, named STSNet, that boosts 3D tooth segmentation performance by leveraging large-scale unlabeled IOS data. The framework follows a two-stage training scheme: pre-training and fine-tuning. During pre-training, contrastive losses at three hierarchical levels (point-level, region-level, and cross-level) are proposed for unsupervised representation learning on a set of predefined matched points from different augmented views. The pre-trained segmentation backbone is then fine-tuned in a supervised manner with a small number of labeled IOS meshes. With the same amount of annotated samples, our method achieves an mIoU of 89.88%, significantly outperforming the supervised counterparts, and the performance gain becomes more remarkable when only a small number of labeled samples is available. Furthermore, STSNet achieves better performance with only 40% of the annotated samples compared to the fully supervised baselines. To the best of our knowledge, this is the first attempt at unsupervised pre-training for 3D tooth segmentation, demonstrating its strong potential to reduce human effort in annotation and verification. (A minimal sketch of a point-level contrastive loss follows this entry.)

    DOI: 10.1109/TMI.2022.3222388

    Web of Science ID: 000934156000013

    PubMed ID: 36378797
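
    A minimal sketch of a point-level InfoNCE-style contrastive loss of the kind the abstract describes, assuming matched point embeddings from two augmented views of the same mesh; the actual STSNet losses (point-, region-, and cross-level) are defined in the paper and may differ in detail:

      import torch
      import torch.nn.functional as F

      def point_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
          """z1, z2: (N, D) embeddings of N matched points from two views."""
          z1 = F.normalize(z1, dim=1)
          z2 = F.normalize(z2, dim=1)
          logits = z1 @ z2.t() / temperature  # (N, N) cosine-similarity matrix
          targets = torch.arange(z1.size(0), device=z1.device)
          # each point's match in the other view is its positive (the diagonal);
          # all other points in the batch act as negatives
          return F.cross_entropy(logits, targets)

    Pre-training minimizes such losses on unlabeled scans; the backbone is then fine-tuned with the small labeled set.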

  • A Multimodal and Multifunctional CMOS Cellular Interfacing Array for Digital Physiology and Pathology Featuring an Ultra Dense Pixel Array and Reconfigurable Sampling Rate. Wang, A. Y., Sheng, Y., Li, W., Jung, D., Junek, G. V., Liu, H., Park, J., Lee, D., Wang, M., Maharjan, S., Kumashi, S., Hao, J., Zhang, Y. S., Eggan, K., Wang, H. IEEE Transactions on Biomedical Circuits and Systems. 2022: 1057-1074

    Abstract

    The article presents a fully integrated multimodal and multifunctional CMOS biosensing/actuating array chip and system for multi-dimensional cellular/tissue characterization. The CMOS chip supports up to 1,568 simultaneous parallel readout channels across 21,952 individually addressable multimodal pixels with a 13 µm × 13 µm 2-D pixel pitch, along with 1,568 Pt reference electrodes. These features allow the CMOS array chip to perform multimodal physiological measurements on living cell/tissue samples with both high throughput and single-cell resolution. Each pixel supports three sensing modalities and one actuation modality, each reconfigurable for different functionalities: full-array (FA) or fast-scan (FS) voltage recording schemes, bright/dim optical detection, 2-/4-point impedance sensing (ZS), and biphasic current stimulation (BCS) with an adjustable stimulation area for single-cell or tissue-level stimulation. Each multimodal pixel contains an 8.84 µm × 11 µm Pt electrode, a 4.16 µm × 7.2 µm photodiode (PD), and in-pixel circuits for PD measurements and pixel selection. The chip is fabricated in a standard 130 nm BiCMOS process as a proof of concept. The on-chip electrodes are constructed through a unique design and in-house post-CMOS fabrication process, including a critical step of shorting all pixels with Al during fabrication and etching the Al away afterward, which ensures a high-yield planar electrode array on CMOS with high biocompatibility and long-term measurement reliability. For demonstration, extensive biological testing is performed with human and mouse progenitor cells, in which multidimensional biophysiological data are acquired for comprehensive cellular characterization. (A back-of-envelope sketch of the readout multiplexing follows this entry.)

    DOI: 10.1109/TBCAS.2022.3224064

    Web of Science ID: 000935332200007

    PubMed ID: 36417722
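
    A back-of-envelope sketch of the readout multiplexing implied by the pixel and channel counts above; the per-channel sampling rate used here is an assumed placeholder, not a figure from the paper:

      PIXELS = 21_952           # individually addressable multimodal pixels
      CHANNELS = 1_568          # simultaneous parallel readout channels
      MUX = PIXELS // CHANNELS  # pixels time-shared per channel -> 14

      f_channel_hz = 10_000.0             # assumed per-channel sample rate (placeholder)
      full_array_hz = f_channel_hz / MUX  # effective per-pixel rate in FA mode
      fast_scan_hz = f_channel_hz         # per-pixel rate on a CHANNELS-sized subset in FS mode

      print(f"{MUX} pixels/channel; FA ~{full_array_hz:.0f} Hz/pixel, "
            f"FS ~{fast_scan_hz:.0f} Hz/pixel")

    This is the tradeoff the reconfigurable sampling rate exposes: full-array mode trades per-pixel bandwidth for coverage of all 21,952 pixels, while fast-scan mode concentrates the channel bandwidth on a smaller subset.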

  • Molecularly cleavable bioinks facilitate high-performance digital light processing-based bioprinting of functional volumetric soft tissues. Wang, M., Li, W., Hao, J., Gonzales, A., Zhao, Z., Flores, R., Kuang, X., Mu, X., Ching, T., Tang, G., Luo, Z., Garciamendez-Mijares, C., Sahoo, J., Wells, M. F., Niu, G., Agrawal, P., Quinones-Hinojosa, A., Eggan, K., Zhang, Y. Nature Communications. 2022; 13(1): 3317

    Abstract

    Digital light processing-based bioprinting enables biofabrication of tissues with improved structural complexity. However, soft-tissue fabrication with this method remains challenging: the bioink must balance the physical performance required for high-fidelity bioprinting against a microenvironment suitable for the encapsulated cells to thrive. Here, we propose a molecular cleavage approach, in which hyaluronic acid methacrylate (HAMA) is mixed with gelatin methacryloyl to achieve high-performance bioprinting, followed by selective enzymatic digestion of the HAMA, resulting in tissue-matching mechanical properties without loss of structural complexity or fidelity. Our method allows cellular morphological and functional improvements across multiple bioprinted tissue types featuring a wide range of mechanical stiffness, from the muscles to the brain, the softest organ of the human body. This platform enables us to biofabricate constructs with precisely tunable mechanical properties that meet the biological function requirements of target tissues, potentially paving the way for broad applications in tissue and tissue model engineering.

    DOI: 10.1038/s41467-022-31002-2

    Web of Science ID: 000809423400019

    PubMed ID: 35680907

    PubMed Central ID: PMC9184597