On GNN Explainability with Activation Rules

Abstract

GNNs are powerful models based on node representation learning that perform particularly well in many machine learning problems related to graphs. The major obstacle to the deployment of GNNs is mostly a problem of societal acceptability and trustworthiness, properties which require making explicit the internal functioning of such models. Here, we propose to mine activation rules in the hidden layers to understand how the GNNs perceive the world. The problem is not to discover activation rules that are individually highly discriminating for an output of the model. Instead, the challenge is to provide a small set of rules that cover all input graphs. To this end, we introduce the subjective activation pattern domain. We define an effective and principled algorithm to enumerate activation rules in each hidden layer. The proposed approach for quantifying the interest of these rules is rooted in information theory and is able to account for background knowledge on the input graph data. The activation rules can then be redescribed using pattern languages involving interpretable features. We show that the activation rules provide insights into the characteristics used by the GNN to classify the graphs. In particular, this makes it possible to identify the hidden features built by the GNN through its different layers. These rules can subsequently be used for explaining GNN decisions. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to a 200% improvement in fidelity on explaining graph classification over state-of-the-art methods.
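To make the mining step concrete, below is a deliberately simplified sketch in Python. It assumes the hidden-layer activations of one GNN layer have already been extracted as a (graphs × components) matrix, binarizes them with a fixed threshold, enumerates small co-activation rules, and greedily selects a set of rules covering all input graphs, mirroring the goal stated in the abstract. The binarization threshold, the discrimination score, and all function names here are illustrative assumptions; the paper's actual algorithm scores rules with a subjective-interestingness model rooted in information theory and accounts for background knowledge, which this sketch does not.

# Simplified sketch of activation-rule mining (not the paper's algorithm).
import numpy as np
from itertools import combinations

def binarize(activations, threshold=0.0):
    """Mark a hidden component as activated when it exceeds the threshold."""
    return activations > threshold

def support(binary, rule):
    """Indices of graphs whose embeddings activate every component of the rule."""
    return np.flatnonzero(binary[:, list(rule)].all(axis=1))

def discrimination(binary, labels, rule, target):
    """Fraction of the rule's supporting graphs predicted as `target`.

    A stand-in for the paper's information-theoretic interestingness."""
    sup = support(binary, rule)
    if len(sup) == 0:
        return 0.0
    return float(np.mean(labels[sup] == target))

def mine_activation_rules(activations, labels, max_size=2, min_support=3):
    """Greedily select a small set of rules that covers all input graphs."""
    binary = binarize(activations)
    n_graphs, n_components = binary.shape
    # Enumerate candidate rules of up to max_size co-activated components.
    candidates = []
    for size in range(1, max_size + 1):
        for rule in combinations(range(n_components), size):
            sup = support(binary, rule)
            if len(sup) >= min_support:
                best = max(discrimination(binary, labels, rule, c)
                           for c in np.unique(labels))
                candidates.append((best, rule, set(sup)))
    # Greedy cover: repeatedly pick the rule covering the most
    # not-yet-covered graphs, breaking ties by discrimination score.
    uncovered, selected = set(range(n_graphs)), []
    while uncovered and candidates:
        score, rule, sup = max(candidates,
                               key=lambda c: (len(c[2] & uncovered), c[0]))
        gain = sup & uncovered
        if not gain:
            break  # remaining graphs are not covered by any candidate
        selected.append((rule, score))
        uncovered -= gain
    return selected

# Toy usage with random activations and labels.
rng = np.random.default_rng(0)
acts = rng.normal(size=(50, 8))       # e.g. 50 graphs, 8 hidden components
labels = rng.integers(0, 2, size=50)  # classes predicted by the GNN
for rule, score in mine_activation_rules(acts, labels):
    print(f"components {rule} -> discrimination {score:.2f}")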


Bibtex (lrde.bib)

@Article{	  veyrin-forrer.22.dami,
  author	= {Veyrin-Forrer, Luca and Kamal, Ataollah and Duffner,
		  Stefan and Plantevit, Marc and Robardet, C{\'e}line},
  journal	= {Data Mining and Knowledge Discovery},
  title		= {On {GNN} Explainability with Activation Rules},
  year		= {2022},
  month		= oct,
  pages		= {1--35},
  optvolume	= {???},
  optnumber	= {xxx},
  publisher	= {Springer},
  doi		= {10.1007/s10618-022-00870-z},
  abstract	= {GNNs are powerful models based on node representation
		  learning that perform particularly well in many machine
		  learning problems related to graphs. The major obstacle to
		  the deployment of GNNs is mostly a problem of societal
		  acceptability and trustworthiness, properties which require
		  making explicit the internal functioning of such models.
		  Here, we propose to mine activation rules in the hidden
		  layers to understand how the GNNs perceive the world. The
		  problem is not to discover activation rules that are
		  individually highly discriminating for an output of the
		  model. Instead, the challenge is to provide a small set of
		  rules that cover all input graphs. To this end, we
		  introduce the subjective activation pattern domain. We
		  define an effective and principled algorithm to enumerate
		  activation rules in each hidden layer. The proposed
		  approach for quantifying the interest of these rules is
		  rooted in information theory and is able to account for
		  background knowledge on the input graph data. The
		  activation rules can then be redescribed using pattern
		  languages involving interpretable features. We show that
		  the activation rules provide insights into the
		  characteristics used by the GNN to classify the graphs. In
		  particular, this makes it possible to identify the hidden
		  features built by the GNN through its different layers.
		  These rules can subsequently be used for explaining GNN
		  decisions. Experiments on both synthetic and real-life
		  datasets show highly competitive performance, with up to a
		  200\% improvement in fidelity on explaining graph
		  classification over state-of-the-art methods.}
}