What Does my GNN Really Capture? On Exploring Internal GNN Representations


Abstract

GNNs are efficient for classifying graphs, but their internal workings are opaque, which limits their field of application. Existing methods for explaining GNNs focus on disclosing the relationships between input graphs and the model's decision. In contrast, the method we propose isolates internal features, hidden in the network layers, which are automatically identified by the GNN to classify graphs. We show that this method makes it possible to identify the parts of the input graphs used by the GNN with much less bias than state-of-the-art methods, and therefore to provide confidence in the decision process.


BibTeX (lrde.bib)

@InProceedings{	  veyrin-forrer.22.ijcai,
  title		= {What Does my {GNN} Really Capture? {O}n Exploring Internal
		  GNN Representations},
  author	= {Luca Veyrin-Forrer and Ataollah Kamal and Stefan Duffner
		  and Marc Plantevit and C\'eline Robardet},
  booktitle	= {Proceedings of the 31st International Joint Conference
		  on Artificial Intelligence ({IJCAI} 2022)},
  year		= {2022},
  month		= jul,
  hal_id	= {hal-03700710},
  pages		= {747--752},
  abstract	= {GNNs are efficient for classifying graphs, but their
		  internal workings are opaque, which limits their field
		  of application. Existing methods for explaining GNNs
		  focus on disclosing the relationships between input
		  graphs and the model's decision. In contrast, the
		  method we propose isolates internal features, hidden in
		  the network layers, which are automatically identified
		  by the GNN to classify graphs. We show that this method
		  makes it possible to identify the parts of the input
		  graphs used by the GNN with much less bias than
		  state-of-the-art methods, and therefore to provide
		  confidence in the decision process.},
  publisher	= {ijcai.org},
  doi		= {10.24963/ijcai.2022/105}
}