Publications/bouarour.22.ieeebigdata


Abstract

Diversity in recommendation has been studied extensively. It has been shown that maximizing diversity subject to constrained relevance yields high user engagement over time. Existing work largely relies on setting some attributes that are used to craft an item similarity function and diversify results. In this paper, we examine the question of learning diversity attributes. That is particularly important when users receive recommendations over multiple sessions. We devise two main approaches to look for the best diversity attribute in each session: the first is a generalization of traditional diversity algorithms and the second is based on reinforcement learning. We implement both approaches and run extensive experiments on a semi-synthetic dataset. Our results demonstrate that learning diversity attributes yields a higher overall diversity than traditional diversity algorithms. We also find that training policies using reinforcement learning is more efficient in terms of response time, in particular for high dimensional data.
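The abstract contrasts a generalization of classical diversification with a reinforcement-learning approach that learns which attribute to diversify on in each session. The sketch below is only an illustration of that idea, not the authors' algorithms: it pairs a greedy, relevance-constrained diversifier that covers distinct values of one attribute with an epsilon-greedy bandit that learns a per-session diversity attribute from feedback. All names (Item, AttributeBandit, user_feedback, the attribute list) are hypothetical.

import random
from dataclasses import dataclass, field

@dataclass
class Item:
    relevance: float
    attrs: dict          # attribute name -> value, e.g. {"genre": "jazz"}

@dataclass
class AttributeBandit:
    attributes: list                      # candidate diversity attributes
    epsilon: float = 0.1
    counts: dict = field(default_factory=dict)
    totals: dict = field(default_factory=dict)

    def pick_attribute(self):
        # Explore with probability epsilon (or before any feedback),
        # otherwise exploit the attribute with the best mean reward so far.
        if not self.counts or random.random() < self.epsilon:
            return random.choice(self.attributes)
        return max(self.counts, key=lambda a: self.totals[a] / self.counts[a])

    def update(self, attribute, reward):
        self.counts[attribute] = self.counts.get(attribute, 0) + 1
        self.totals[attribute] = self.totals.get(attribute, 0.0) + reward

def diversify(items, attribute, k, min_relevance=0.0):
    # Greedy top-k under a relevance constraint: first cover as many distinct
    # values of `attribute` as possible, then fill remaining slots by relevance.
    pool = sorted((i for i in items if i.relevance >= min_relevance),
                  key=lambda i: i.relevance, reverse=True)
    chosen, seen = [], set()
    for item in pool:
        if len(chosen) == k:
            break
        value = item.attrs.get(attribute)
        if value not in seen:
            chosen.append(item)
            seen.add(value)
    for item in pool:
        if len(chosen) == k:
            break
        if item not in chosen:
            chosen.append(item)
    return chosen

# Hypothetical per-session loop (user_feedback stands in for an observed
# diversity/engagement reward):
#   bandit = AttributeBandit(attributes=["genre", "artist", "decade"])
#   for candidates in sessions:
#       attr = bandit.pick_attribute()
#       slate = diversify(candidates, attr, k=10, min_relevance=0.5)
#       bandit.update(attr, user_feedback(slate))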


Bibtex (lrde.bib)

@InProceedings{	  bouarour.22.ieeebigdata,
  author	= {Bouarour, Nassim and Benouaret, Idir and Amer-Yahia,
		  Sihem},
  booktitle	= {2022 IEEE International Conference on Big Data (Big
		  Data)},
  title		= {Learning Diversity Attributes in Multi-Session
		  Recommendations},
  year		= {2022},
  address	= {Osaka, Japan},
  month		= dec,
  abstract	= {Diversity in recommendation has been studied extensively.
		  It has been shown that maximizing diversity subject to
		  constrained relevance yields high user engagement over
		  time. Existing work largely relies on setting some
		  attributes that are used to craft an item similarity
		  function and diversify results. In this paper, we examine
		  the question of learning diversity attributes. That is
		  particularly important when users receive recommendations
		  over multiple sessions. We devise two main approaches to
		  look for the best diversity attribute in each session: the
		  first is a generalization of traditional diversity
		  algorithms and the second is based on reinforcement
		  learning. We implement both approaches and run extensive
		  experiments on a semi-synthetic dataset. Our results
		  demonstrate that learning diversity attributes yields a
		  higher overall diversity than traditional diversity
		  algorithms. We also find that training policies using
		  reinforcement learning is more efficient in terms of
		  response time, in particular for high dimensional data.},
  pages		= {1--10},
  doi		= {10.1109/BigDataXXXX},
  publisher	= {IEEE},
  note		= {accepted}
}