Jahna Otterbacher
2018-09-14
<p>Journalists and researchers alike have claimed that IR systems are socially biased, returning results to users that perpetuate gender and racial stereotypes. In this position paper, I argue that IR researchers, and in particular evaluation communities such as CLEF, can and should address such concerns. Using as a guide the Principles for Algorithmic Transparency and Accountability recently put forward by the Association for Computing Machinery, I provide examples of techniques for examining social biases in IR systems, and in particular, search engines.</p>
This work has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 739578 and the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.
This is a pre-print of an article published in Experimental IR Meets Multilinguality, Multimodality, and Interaction: 9th International Conference of the CLEF Association, CLEF 2018, Avignon, France, September 10-14, 2018, Proceedings. The final authenticated version is available online at https://www.springer.com/la/book/9783319989310. © Springer Nature Switzerland AG 2018.
https://doi.org/10.1007/978-3-319-98932-7_11
oai:zenodo.org:2671635
eng
Springer Nature Switzerland AG
https://zenodo.org/communities/rise-teaming-cyprus
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial No Derivatives 4.0 International
https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode
CLEF 2018, 9th International Conference of the CLEF Association, Avignon, France, 10-14 September 2018
Social biases
Ranking algorithms
Crowdsourcing
Addressing Social Bias in Information Retrieval
info:eu-repo/semantics/conferencePaper