Bio


I have broad interests in AI research and applications, as well as computer programming in general. I am working towards explainable AI and the responsible, ethical use of automation, especially in the biomedical field.

I graduated from Nanyang Technological University with a BSc in physics and mathematics, followed by a PhD under the Interdisciplinary Graduate School (IGS) and the computer science department as part of the Alibaba-NTU Talent Programme. I was also a CN Yang Scholar.

Other things about me: (1) I have experience in web development, (2) I am always looking for opportunities to start a business around novel products, and (3) I love learning different languages.

Stanford Advisors


Current Research and Scholarly Interests


I work on explainable artificial intelligence (explainable AI, XAI), healthcare analytics, and machine learning in general.

I design and study deep learning models built around humanly understandable concepts. Large models are powerful, enabling large-scale transformations across many aspects of society; in particular, we aim to improve the efficiency and availability of healthcare for the public. However, large models can be difficult to understand, and we are trying to improve their transparency one part at a time.
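
As a rough illustration of what a model built around humanly understandable concepts can look like, below is a minimal PyTorch sketch in the style of a concept-bottleneck classifier. The concept names, dimensions, and data are purely illustrative, not taken from any specific project of mine.

```python
import torch
import torch.nn as nn

class ConceptBottleneckNet(nn.Module):
    """Toy classifier that routes every prediction through a small
    layer of named, human-readable concepts (illustrative only)."""

    # Each bottleneck unit is assigned a fixed human-readable meaning.
    CONCEPTS = ["has_lesion", "irregular_border", "high_contrast"]

    def __init__(self, in_features=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_features, 32), nn.ReLU(),
            nn.Linear(32, len(self.CONCEPTS)),
        )
        self.head = nn.Linear(len(self.CONCEPTS), n_classes)

    def forward(self, x):
        concepts = torch.sigmoid(self.encoder(x))  # each unit ~ one concept score in [0, 1]
        logits = self.head(concepts)               # final decision uses only the concepts
        return logits, concepts

model = ConceptBottleneckNet()
x = torch.randn(4, 64)                        # a batch of 4 feature vectors
logits, concepts = model(x)
for name, score in zip(model.CONCEPTS, concepts[0].tolist()):
    print(f"{name}: {score:.2f}")             # inspect which concepts drove the prediction
```

Because the final decision depends only on the concept layer, each prediction can be traced back to a handful of named scores rather than thousands of opaque activations.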

I also conduct research on understanding these complex models. With various post-hoc methods and probes, we seek to understand the inner workings of an AI model. With better understanding, we can better weigh our priorities when considering the decisions made by an AI system.
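
One simple post-hoc probe is a linear classifier fit on a model's hidden activations: if the probe can recover a property of interest from a layer, that layer plausibly encodes it. A minimal sketch with scikit-learn, using random stand-in data in place of real extracted activations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical setup: hidden activations extracted from some layer of a
# trained model (e.g. via a forward hook), plus labels for a property we
# want to probe for. Here both are random stand-ins.
rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 256))   # 1000 examples, 256-dim hidden layer
labels = rng.integers(0, 2, size=1000)       # binary property of interest

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probe accuracy well above chance would suggest the layer linearly
# encodes the property; with random data it should stay near 0.5.
print("probe accuracy:", probe.score(X_test, y_test))
```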

Objective: to achieve transparency and the responsible use of automated systems.

Lab Affiliations


All Publications


  • Self reward design with fine-grained interpretability. SCIENTIFIC REPORTS. Tjoa, E., Guan, C. 2023; 13 (1): 1638

    Abstract

    The black-box nature of deep neural networks (DNN) has drawn attention to issues of transparency and fairness. Deep Reinforcement Learning (Deep RL or DRL), which uses DNNs to learn its policy, value functions, etc., is thus also subject to similar concerns. This paper proposes a way to circumvent these issues through the bottom-up design of neural networks with detailed interpretability, where each neuron or layer has its own meaning and utility corresponding to a humanly understandable concept. The framework introduced in this paper, called Self Reward Design (SRD), is inspired by Inverse Reward Design; this interpretable design can (1) solve the problem by pure design (although imperfectly) and (2) be optimized like a standard DNN. With deliberate human designs, we show that some RL problems such as lavaland and MuJoCo can be solved using a model constructed from standard NN components with few parameters. Furthermore, with our fish sale auction example, we demonstrate how SRD can address situations that would not make sense with black-box models, where humanly understandable, semantics-based decisions are required.

    DOI: 10.1038/s41598-023-28804-9
    Web of Science ID: 000984271700048
    PubMed ID: 36717641
    PubMed Central ID: PMC9886969
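
As a rough illustration of the bottom-up design described in the abstract above, here is a toy, hand-designed "avoid lava" policy in PyTorch in which every neuron and weight carries an explicit meaning, yet the module remains an ordinary network that could be fine-tuned with gradient descent. This sketch is illustrative only and is not the architecture from the paper.

```python
import torch
import torch.nn as nn

class AvoidLavaPolicy(nn.Module):
    """Hand-designed policy: each component has a designated meaning."""

    def __init__(self):
        super().__init__()
        # One neuron whose activation means "lava detected in the next cell".
        self.lava_detector = nn.Linear(1, 1, bias=False)
        self.lava_detector.weight.data.fill_(1.0)       # passes the lava flag through
        # Baseline preference for moving forward; lava will suppress it.
        self.forward_bias = nn.Parameter(torch.tensor(1.0))

    def forward(self, next_cell_is_lava):
        # Sharpened sigmoid: ~1 when lava is ahead, ~0 when the cell is safe.
        lava = torch.sigmoid(self.lava_detector(next_cell_is_lava) * 8 - 4)
        move_forward = self.forward_bias - 2.0 * lava    # lava inhibits "forward"
        stay = torch.zeros_like(move_forward)            # neutral score for "stay"
        return torch.cat([move_forward, stay], dim=-1)   # interpretable action scores

policy = AvoidLavaPolicy()
safe = torch.tensor([[0.0]])
lava = torch.tensor([[1.0]])
print(policy(safe).argmax(dim=-1))   # tensor([0]) -> move forward
print(policy(lava).argmax(dim=-1))   # tensor([1]) -> stay
```

The point of the exercise is that the behavior is readable off the design itself: one can state in plain language why the policy stays put, because the single neuron whose meaning is "lava ahead" suppresses the "move forward" score.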