Computational Versus Perceived Popularity Miscalibration in Recommender Systems
Johannes Kepler University Linz, Austria; Linz Institute of Technology, Linz, Austria.
Johannes Kepler University Linz, Linz, Austria.
Polytechnic University of Bari, Bari, Italy.
Jönköping University, School of Engineering, JTH, Department of Computer Science and Informatics. ORCID iD: 0000-0003-4344-9986
2023 (English). In: SIGIR '23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, Association for Computing Machinery (ACM), 2023, p. 1889-1893. Conference paper, Published paper (Refereed).
Abstract [en]

Popularity bias in recommendation lists refers to over-representation of popular content and is a challenge for many recommendation algorithms. Previous research has suggested several offline metrics to quantify popularity bias, which commonly relate the popularity of items in users’ recommendation lists to the popularity of items in their interaction history. Discrepancies between these two factors are referred to as popularity miscalibration. While popularity metrics provide a straightforward and well-defined means to measure popularity bias, it is unknown whether they actually reflect users’ perception of popularity bias.

To address this research gap, we conduct a crowd-sourced user study on Prolific, involving 56 participants, to (1) investigate whether the level of perceived popularity miscalibration differs between common recommendation algorithms, and (2) assess the correlation between perceived popularity miscalibration and its corresponding quantification according to a common offline metric. We conduct our study in a well-defined and important domain, namely music recommendation using the standardized LFM-2b dataset, and quantify popularity miscalibration of five recommendation algorithms by utilizing Jensen-Shannon distance (JSD). Challenging the findings of previous studies, we observe that users generally do perceive significant differences in terms of popularity bias between algorithms if this bias is framed as popularity miscalibration. In addition, JSD correlates moderately with users' perception of popularity, but not with their perception of unpopularity.
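The abstract describes quantifying popularity miscalibration as the Jensen-Shannon distance between the popularity distribution of a user's interaction history and that of their recommendation list. A minimal sketch of that idea follows; the toy data, the binning of items into popularity tiers, and the bin edges are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: popularity miscalibration as the Jensen-Shannon distance (JSD)
# between two popularity distributions: one over a user's interaction
# history, one over their recommendation list.
import numpy as np
from scipy.spatial.distance import jensenshannon

def popularity_distribution(items, item_popularity, bins):
    """Histogram of item popularities, normalized to a probability vector."""
    pops = [item_popularity[i] for i in items]
    hist, _ = np.histogram(pops, bins=bins)
    total = hist.sum()
    return hist / total if total > 0 else np.full(len(hist), 1 / len(hist))

# Toy data: global play counts per item serve as a popularity proxy.
item_popularity = {"a": 900, "b": 850, "c": 40, "d": 30, "e": 5}
history = ["a", "c", "d", "e"]           # user listened mostly to the long tail
recommended = ["a", "b", "a", "b", "c"]  # recommender favors popular items

bins = [0, 50, 500, 1000]  # long-tail / mid / head popularity tiers (assumed)
p = popularity_distribution(history, item_popularity, bins)
q = popularity_distribution(recommended, item_popularity, bins)

# scipy's jensenshannon returns the JS *distance* (square root of the JS
# divergence), which lies in [0, 1] for base-2 logarithms; a higher value
# indicates stronger popularity miscalibration.
jsd = jensenshannon(p, q, base=2)
print(f"popularity miscalibration (JSD): {jsd:.3f}")
```

A perfectly calibrated recommender would reproduce the user's historical popularity distribution, giving a JSD near zero; the offline metric alone, however, says nothing about whether users perceive the discrepancy, which is precisely the gap the study investigates.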

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2023. p. 1889-1893
Keywords [en]
popularity bias, popularity calibration, music recommendation, miscalibration, ecological validity, metrics, recommender systems
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:hj:diva-62227
DOI: 10.1145/3539618.3591964
Scopus ID: 2-s2.0-85168679400
ISBN: 978-1-4503-9408-6 (electronic)
OAI: oai:DiVA.org:hj-62227
DiVA, id: diva2:1789931
Conference
46th International ACM SIGIR Conference on Research and Development in Information Retrieval, July 23–27, 2023, Taipei, Taiwan
Available from: 2023-08-21 Created: 2023-08-21 Last updated: 2023-11-20. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (Scopus)

Authority records

Ferwerda, Bruce
