All Publications


  • Reinforced Causal Explainer for Graph Neural Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence. Wang, X., Wu, Y., Zhang, A., Feng, F., He, X., Chua, T. 2022; PP

    Abstract

    Explainability is crucial for probing graph neural networks (GNNs), answering questions like "Why does the GNN model make a certain prediction?" Feature attribution is a prevalent technique for highlighting the explanatory subgraph in the input graph that plausibly leads the GNN model to make its prediction. However, existing attribution methods largely make the untenable assumption that the selected edges are linearly independent, ignoring their dependencies, especially their coalition effect. We demonstrate the unambiguous drawbacks of this assumption, which make the explanatory subgraph unfaithful and verbose. To address this challenge, we propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer). It frames the explanation task as a sequential decision process: an explanatory subgraph is successively constructed by adding a salient edge to connect to the previously selected subgraph. Technically, its policy network predicts the action of edge addition and receives a reward that quantifies the action's causal effect on the prediction. Such a reward accounts for the dependency between the newly added edge and the previously added edges, thus reflecting whether they collaborate and form a coalition to pursue better explanations. As such, RC-Explainer can generate faithful and concise explanations, and generalizes better to unseen graphs.
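    The abstract describes a sequential edge-selection loop in which a policy network proposes the next edge and the reward is the causal effect of that addition on the model's prediction. The sketch below illustrates that idea only; it is not the authors' implementation. The classifier `gnn` (a black box mapping an edge mask to class probabilities), `edge_feats`, `target_class`, `budget`, and `EdgePolicy` are all assumed, illustrative names.

    ```python
    import torch
    import torch.nn as nn

    class EdgePolicy(nn.Module):
        """Scores each candidate edge from its features (hypothetical policy network)."""
        def __init__(self, feat_dim, hidden_dim=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
            )

        def forward(self, edge_feats):               # (num_edges, feat_dim)
            return self.mlp(edge_feats).squeeze(-1)  # one logit per candidate edge

    def explain(gnn, edge_feats, target_class, budget=5):
        """Greedy roll-out: add one edge per step; the reward is the change in the
        target-class probability caused by adding that edge to the current subgraph."""
        num_edges = edge_feats.size(0)
        policy = EdgePolicy(edge_feats.size(1))
        mask = torch.zeros(num_edges)                # current explanatory subgraph
        rewards, log_probs = [], []

        for _ in range(budget):
            logits = policy(edge_feats)
            logits = logits.masked_fill(mask.bool(), float('-inf'))  # no re-selection
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()                   # which edge to add next

            prob_before = gnn(mask)[target_class]
            mask = mask.clone()
            mask[action] = 1.0
            prob_after = gnn(mask)[target_class]

            # Reward conditions on the edges already chosen, so the causal effect
            # of a coalition of edges is credited jointly rather than independently.
            rewards.append(prob_after - prob_before)
            log_probs.append(dist.log_prob(action))

        # REINFORCE-style policy-gradient loss over the episode.
        loss = -(torch.stack(log_probs) * torch.stack(rewards).detach()).sum()
        return mask, loss
    ```

    In this sketch the returned mask is the explanatory subgraph and `loss` would be backpropagated through the policy network across many training graphs; the conditioning of each step's reward on the previously selected edges is what captures the coalition effect the abstract emphasizes.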

    DOI: 10.1109/TPAMI.2022.3170302

    PubMedID: 35471869