# Methods for Explaining Top-N Recommendations Through Subgroup Discovery

### From LRDE


## Abstract

Explainable Artificial Intelligence (XAI) has received a lot of attention over the past decade, with the proposal of many methods explaining black box classifiers such as neural networks. Despite the ubiquity of recommender systems in the digital world, only few researchers have attempted to explain their functioning, whereas one major obstacle to their use is the problem of societal acceptability and trustworthiness. Indeed, recommender systems direct user choices to a large extent and their impact is important as they give access to only a small part of the range of items (e.g., products and/or services), as the submerged part of the iceberg. Consequently, they limit access to other resources. The potentially negative effects of these systems have been pointed out as phenomena like echo chambers and winner-take-all effects, because the internal logic of these systems is to likely enclose the consumer in a *deja vu* loop. Therefore, it is crucial to provide explanations of such recommender systems and to identify the user data that led the respective system to make the individual recommendations. This then makes it possible to evaluate recommender systems not only regarding their effectiveness (i.e., their capability to recommend an item that was actually chosen by the user), but also with respect to the diversity, relevance and timeliness of the active data used for the recommendation. In this paper, we propose a deep analysis of two state-of-the-art models learnt on four datasets based on the identification of the items or the sequences of items actively used by the models. Our proposed methods are based on subgroup discovery with different pattern languages (i.e., itemsets and sequences). Specifically, we provide interpretable explanations of the recommendations of the Top-N items, which are useful to compare different models. Ultimately, these can then be used to present simple and understandable patterns to explain the reasons behind a generated recommendation to the user.
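To give an idea of the approach, the sketch below is not code from the paper but a minimal, hypothetical illustration of subgroup discovery with an itemset pattern language: patterns of items in user histories are scored by how strongly they characterize the users for whom the model produced a given recommendation, here with the standard Weighted Relative Accuracy (WRAcc) quality measure on toy data. All names and the data are made up for illustration.

```python
from itertools import combinations

def wracc(histories, pattern):
    """Weighted Relative Accuracy of an itemset `pattern`.

    histories: list of (itemset, bool) pairs -- a user's past items and
    whether the model produced the target recommendation for that user.
    WRAcc = coverage * (precision within subgroup - overall positive rate).
    """
    n = len(histories)
    covered = [t for items, t in histories if pattern <= items]
    if not covered:
        return 0.0
    p_overall = sum(t for _, t in histories) / n
    p_subgroup = sum(covered) / len(covered)
    return (len(covered) / n) * (p_subgroup - p_overall)

# Toy user histories and whether a target item was recommended to each user.
data = [
    ({"a", "b"}, True),
    ({"a", "b", "c"}, True),
    ({"b", "c"}, False),
    ({"c"}, False),
]

# Exhaustive search over small itemsets; real subgroup discovery tools
# typically use beam search or branch-and-bound over a larger language.
universe = sorted({i for items, _ in data for i in items})
best = max(
    (frozenset(p) for r in (1, 2) for p in combinations(universe, r)),
    key=lambda p: wracc(data, p),
)
```

On this toy data the best pattern is the itemset {"a"}: every user whose history contains "a" received the recommendation, so the pattern serves as a simple, human-readable explanation of the model's behavior.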

## Bibtex (lrde.bib)

@Article{	  iferroudjene.22.dami,
author	= {Iferroudjene, Mouloud and Lonjarret, Corentin and
Robardet, C{\'e}line and Plantevit, Marc and Atzmueller,
Martin},
title		= {Methods for Explaining Top-{N} Recommendations Through
Subgroup Discovery},
journal	= {Data Mining and Knowledge Discovery},
publisher	= {Springer},
volume	= {313},
number	= {118752},
year		= {2022},
month		= nov,
doi		= {10.1007/s10618-022-00897-2},
keywords	= {Recommender Systems, Explainable AI (XAI), Subgroup
Discovery},
abstract	= {Explainable Artificial Intelligence (XAI) has received a
lot of attention over the past decade, with the proposal of
many methods explaining black box classifiers such as
neural networks. Despite the ubiquity of recommender
systems in the digital world, only few researchers have
attempted to explain their functioning, whereas one major
obstacle to their use is the problem of societal
acceptability and trustworthiness. Indeed, recommender
systems direct user choices to a large extent and their
impact is important as they give access to only a small
part of the range of items (e.g., products and/or
services), as the submerged part of the iceberg.
Consequently, they limit access to other resources. The
potentially negative effects of these systems have been
pointed out as phenomena like echo chambers and
winner-take-all effects, because the internal logic of
these systems is to likely enclose the consumer in a {\em
deja vu} loop. Therefore, it is crucial to provide
explanations of such recommender systems and to identify
the user data that led the respective system to make the
individual recommendations. This then makes it possible to
evaluate recommender systems not only regarding their
effectiveness (i.e., their capability to recommend an item
that was actually chosen by the user), but also with
respect to the diversity, relevance and timeliness of the
active data used for the recommendation. In this paper, we
propose a deep analysis of two state-of-the-art models
learnt on four datasets based on the identification of the
items or the sequences of items actively used by the
models. Our proposed methods are based on subgroup
discovery with different pattern languages (i.e., itemsets
and sequences). Specifically, we provide interpretable
explanations of the recommendations of the Top-N items,
which are useful to compare different models. Ultimately,
these can then be used to present simple and understandable
patterns to explain the reasons behind a generated
recommendation to the user.}
}