FairUMAP 2020: The 3rd Workshop on Fairness in User Modeling, Adaptation and Personalization

The 3rd FairUMAP workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness and transparency in algorithmic systems on the other hand.


INTRODUCTION
User modeling and personalized recommendations, often enabled by data-rich machine learning, are key enabling technologies that allow intelligent systems to learn from users and adapt their output to users' needs and preferences. These techniques have become an essential part of systems that help users find relevant content in today's highly complex, information-rich environments. However, there has been growing recognition that they raise novel ethical, policy, and legal challenges. It has become apparent that a single-minded focus on user preferences has obscured other important and beneficial outcomes such systems must be able to deliver. System properties such as fairness, transparency, balance, openness to diversity, and other social welfare considerations are not captured by the typical metrics against which data-driven personalized models are optimized. Indeed, widely used personalization methods on popular sites such as Facebook, Google News, and YouTube have been heavily criticized for personalizing information delivery too heavily at the cost of other objectives. This workshop brings together researchers working at the intersection of user modeling, adaptation, and personalization on the one hand, and bias, fairness, and transparency in algorithmic systems on the other hand.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). UMAP '20, July 14-17, 2020, Genoa, Italy. © 2020 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-6861-2/20/07. https://doi.org/10.1145/3340631.3398671

Objectives and Topics of Interest
We invited papers on a range of topics, including: bias and discrimination in user modeling, personalization, and recommendation; algorithms for fairness-aware personalization; definitions, metrics, and criteria for optimizing and evaluating fairness-related aspects of personalized systems; data preprocessing and transformation methods to address bias in training data; user modeling approaches that take fairness and bias into account; user studies to evaluate the impact of personalization on fairness, balance, diversity, and other social welfare criteria; balancing the needs of multiple stakeholders in recommender systems and other personalized systems; and 'filter bubble' or 'balkanization' effects of personalization.

WORKSHOP PROGRAM AND ORGANIZATION
FairUMAP 2020 followed a peer review process for paper acceptance: each submission was reviewed by at least two program committee members. Five papers were accepted for presentation.

Accepted Papers
Spinelli and Crovella [5] present an empirical exploration of the nature of YouTube recommendations, concentrating on socially impactful dimensions. Their results provide initial evidence that YouTube's recommendations expose users to extreme and unscientific viewpoints, and that there is a fundamental tension between user privacy and extreme recommendations. Following a similar approach, Kyriakou, Kleanthous, Otterbacher, and Papadopoulos [2] analyzed popular image tagging Cognitive Services that infer emotion from a person's face, considering whether they perpetuate racial and gender stereotypes. The authors examined the descriptions of a set of controlled images, produced by both Cognitive Services and crowdworkers, finding initial evidence that Cognitive Services can perpetuate stereotypes.
Rastegarpanah, Crovella, and Gummadi [3] examined fairness notions for algorithmic decision-making systems. The authors argue that these notions should be expanded to incorporate the inputs used by a system. Deshpande, Foulds, and Pan [1] studied the effects of sociolinguistic bias on resume-to-job-description matching algorithms and developed a fairness-aware matching algorithm. Smets, Walravens, and Ballon [4] considered the challenges of developing personalized public services. The authors explain how optimizing for the common good requires moving away from a purely personalization-oriented approach and toward two best practices in the design of digital public services: participatory design and open data.

Organization Committee
Bamshad Mobasher is a professor of Computer Science and the director of the Center for Web Intelligence at DePaul University in Chicago. His research areas include Web mining, personalization, and recommender systems. Mobasher serves on the steering committees of the ACM RecSys and ACM UMAP conferences, on the editorial board of User Modeling and User-Adapted Interaction, and as an associate editor for ACM Transactions on the Web, ACM Transactions on Interactive Intelligent Systems, and ACM Transactions on Internet Technology.
Styliani Kleanthous is a senior researcher at the Cyprus Center for Algorithmic Transparency (CyCAT), Open University of Cyprus, and the RISE Research Centre, Cyprus. Styliani's main research interests are in community and group modeling. She specializes in exploiting psychological and social theories to model user preferences, design intelligent interaction, and provide adaptive user support. Styliani works on modeling social interaction within 3D virtual worlds in several contexts (e.g., learning, collaboration, socialization), with an emphasis on the psychological and social factors that underpin these interactions.
Bettina Berendt is a professor in the Faculty of Electrical Engineering and Computer Science at Technische Universität Berlin, director of the Weizenbaum Institute for the Networked Society, Berlin, and a guest professor in the Department of Computer Science at KU Leuven. Her research interests are data and text mining, in particular how people make decisions when faced with artificial and human intelligence, and the privacy- and fairness-related as well as other ethical aspects of these situations. She is an inaugural member of the steering committee of the Conference on Fairness, Accountability, and Transparency (FAccT) and has co-organized previous FairUMAP workshops as well as Deconstructing FAT at ACM FAT* 2020.
Michael Ekstrand is an assistant professor of Computer Science and co-director of the People and Information Research Team at Boise State University. His research examines the human dimensions of modern information access systems, particularly recommender systems. His publications include papers at FAccT (formerly FAT*) and RecSys, and a tutorial on fairness in information access presented at SIGIR 2019 and RecSys 2019. He co-organized the 2017 and 2018 FATREC Workshops on Responsible Recommendation at RecSys, the Fairness Track at TREC 2019-2020, and the FACTS-IR workshop at SIGIR 2019. He also serves on the FAccT Executive Committee and co-chairs the FAccT Network.
Jahna Otterbacher is an assistant professor in the School of Pure and Applied Sciences at the Open University of Cyprus. Jahna also holds an appointment as team leader of the Transparency in Algorithms Group at RISE (Research centre on Interactive media, Smart systems and Emerging technologies). She is the Coordinator of the H2020 project "Cyprus Center for Algorithmic Transparency," which investigates social and cultural biases in information access systems as well as their origins in data, development processes, and user behaviors.