Time-dependent Evaluation of Recommender Systems


Journal article


Teresa Scheidt, Joeran Beel
Perspectives@RecSys, 2021

APA
Scheidt, T., & Beel, J. (2021). Time-dependent Evaluation of Recommender Systems. Perspectives@RecSys.


Chicago/Turabian
Scheidt, Teresa, and Joeran Beel. “Time-Dependent Evaluation of Recommender Systems.” Perspectives@RecSys (2021).


MLA
Scheidt, Teresa, and Joeran Beel. “Time-Dependent Evaluation of Recommender Systems.” Perspectives@RecSys, 2021.


BibTeX

@article{teresa2021a,
  title = {Time-dependent Evaluation of Recommender Systems},
  year = {2021},
  journal = {Perspectives@RecSys},
  author = {Scheidt, Teresa and Beel, Joeran}
}

Abstract

Evaluation of recommender systems is an actively discussed topic in the recommender-system community. However, some aspects of evaluation have received little to no attention, one of them being whether evaluating recommender-system algorithms with single-number metrics is sufficient. When results are presented as a single number, the implicit assumption is that performance is stable over time regardless of changes in the dataset, while it intuitively seems more likely that performance changes over time. We suggest presenting results over time, making it possible to identify trends and changes in performance as the dataset grows and changes. In this paper, we analyze 6 algorithms on 10 datasets over time to identify the need for a time-dependent evaluation. To enable this evaluation over time, we split the datasets into smaller subsets based on the provided timestamps. At every tested time point we use all available data up to that point, simulating a growing dataset as encountered in the real world. Our results show that for 90% of the datasets the performance changes over time, and for 60% even the ranking of algorithms changes over time.
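The growing-dataset splitting procedure described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code; the function and variable names are assumptions.

```python
def growing_splits(interactions, timepoints):
    """Simulate a growing dataset for time-dependent evaluation.

    interactions: list of (timestamp, record) tuples, in any order.
    timepoints:   evaluation timestamps, in ascending order.

    For each evaluation timepoint, yields (timepoint, subset) where
    subset contains every interaction observed up to and including
    that timepoint -- i.e. the dataset as it would have existed then.
    """
    ordered = sorted(interactions, key=lambda rec: rec[0])
    for t in timepoints:
        subset = [rec for rec in ordered if rec[0] <= t]
        yield t, subset


# Usage: evaluate an algorithm at each timepoint on the data
# available so far, then plot the metric over time instead of
# reporting a single number.
data = [(3, "c"), (1, "a"), (2, "b"), (5, "d")]
for t, subset in growing_splits(data, timepoints=[2, 5]):
    print(t, len(subset))  # the subset grows as t advances
```

Each subset would then be split further into train and test portions before computing the chosen metric, yielding one metric value per timepoint rather than a single aggregate number.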

Keywords: Recommender Systems, Evaluation, Time-dependent Evaluation
