FairUMAP 2021: The 4th Workshop on Fairness in User Modeling, Adaptation and Personalization

ACM Reference Format: Bamshad Mobasher, Styliani Kleanthous, Bettina Berendt, Jahna Otterbacher, Tsvi Kuflik, and Avital Shulner Tal. 2021. FairUMAP 2021: The 4th Workshop on Fairness in User Modeling, Adaptation and Personalization. In Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’21 Adjunct), June 21–25, 2021, Utrecht, Netherlands. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3450614.3461454


INTRODUCTION
User modeling and personalized recommendations, often enabled by data-rich machine learning, are key enabling technologies that allow intelligent systems to learn from users, adapting their output to users' needs and preferences. These techniques have become an essential part of systems that help users find relevant content in today's highly complex, information-rich environments. However, there has been a growing recognition that they raise novel ethical, policy, and legal challenges. It has become apparent that a single-minded focus on user preferences has obscured other important and beneficial outcomes such systems must be able to deliver. System properties such as fairness, transparency, balance, openness to diversity, and other social welfare considerations are not captured by the typical metrics on which data-driven personalized models are optimized.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). UMAP '21 Adjunct, June 21–25, 2021, Utrecht, Netherlands. © 2021 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-8367-7/21/06. https://doi.org/10.1145/3450614.3461454
In this half-day workshop we invited papers on a range of topics including: bias and discrimination in user modeling, personalization and recommendation; computational techniques and algorithms for fairness-aware personalization; definitions, metrics and criteria for optimizing and evaluating fairness-related aspects of personalized systems; data preprocessing and transformation methods to address bias in training data; user modeling approaches that take fairness and bias into account; user studies to evaluate the impact of personalization on fairness, balance, diversity, and other social welfare criteria; and balancing the needs of multiple stakeholders in recommender systems and other personalized systems.

WORKSHOP PROGRAM AND ORGANIZATION
FairUMAP 2021 followed a peer review process for paper acceptance. At least two program committee members reviewed each submission. Five papers were accepted for presentation at the online event.
The contributed papers cover a wide range of the FairUMAP scope. Ahnert et al. proposed the FairCeptron framework (available on GitHub), an approach for studying human perceptions of fairness in algorithmic decision making, such as ranking or classification, that supports (i) eliciting human perceptions of fairness and (ii) comparing these perceptions with measures of algorithmic fairness. Giunchiglia et al. presented exploratory work putting forward the notion of transparency paths, a process by which people document their position, choices, and perceptions when developing and/or using algorithmic platforms. Simko et al. presented a position paper on the need for continuous independent auditing to curb the spread of misinformation, especially given the dynamic and evolving nature of communication in social networks. Hu et al. presented a survey of recent XAI studies, examining the purpose of explanation (how and why explanations are provided), interpretation methods, the context of explanations, their format, their domain, and the stakeholders involved. Finally, Schelenz argued for attention to user diversity in recommender systems: her paper connects fairness to the diversity literature in the recommender systems field, specifies the tension between item-side and user-side fairness by revealing a bias in the treatment of user diversity, and proposes solutions to mitigate that bias by drawing on Black feminist and critical race theory.

Program Committee
Tsvi Kuflik is a professor and former head of the Information Systems Department at the University of Haifa, Israel. Avital Shulner Tal is a junior researcher and PhD student in the Information Systems Department at the University of Haifa, Israel.