Deconstructing Controversial Predictive Technologies for Children in Law Enforcement to identify, understand, and address ethical issues
Creators
- 1. Open University
- 2. Trilateral Research
Description
AI technologies are increasingly employed in the civil security sector on the promise of improved efficiency, mainly with regard to resource allocation and automated data analysis. However, the widespread and intrusive use of AI introduces new challenges, posing threats to fundamental rights and democratic principles (European Parliament, 2021). AI systems may escalate surveillance practices, amplify discriminatory practices, and exacerbate pre-existing societal inequalities (e.g., O'Neil, 2017; Zuboff, 2019). Vulnerable populations, particularly children, require special attention in this context (Charisi, 2022; Rahman & Keseru, 2021). To raise awareness of how AI can uphold or undermine children's lives and rights, UNICEF released policy guidance on AI for children in 2021, pinpointing how predictive analytics applied to children can limit their identities and experience of the world. As more decisions about children are taken with the aid of predictive systems (Hall et al., 2023), it becomes important to understand how these technologies are developed and used, and how they might impact children's rights and lives.