Published March 15, 2024 | Version v1
Conference paper Restricted

Deconstructing Controversial Predictive Technologies for Children in Law Enforcement to identify, understand, and address ethical issues

Description

AI technologies are increasingly employed in the civil security sector on the promise of improved efficiency, mainly in resource allocation and automated data analysis. However, the widespread and intrusive use of AI introduces new challenges, posing threats to fundamental rights and democratic principles (European Parliament, 2021). AI systems may escalate surveillance practices, amplify discrimination, and exacerbate pre-existing societal inequalities (e.g., O'Neil, 2017; Zuboff, 2019). Vulnerable populations, particularly children, require special attention in this context (Charisi, 2022; Rahman & Keseru, 2021). To raise awareness of how AI can uphold or undermine children's lives and rights, UNICEF released a policy guidance on AI for children in 2021, pinpointing how predictive analytics applied to children can limit their identities and their experience of the world. As more decisions regarding children are taken with the aid of predictive systems (Hall et al., 2023), it becomes important to understand how these technologies are developed and used, and how they might impact children's rights and lives.

Files

The record is publicly accessible, but files are restricted to users with access.

Additional details

Funding

popAI – A European Positive Sum Approach towards AI tools in support of Law Enforcement and safeguarding privacy and fundamental rights (Grant No. 101022001), European Commission