Published October 7, 2022
Journal article (Open Access)

What's wrong with "AI ethics" narratives

  • Daniela Tafani (Università di Pisa)

Description

Machine Learning (ML) systems are widely used to make decisions that affect people’s lives. Voices, faces, and emotions are classified, lives are depicted by automated statistical models and, on this basis, decisions are made: whether someone should be freed from or detained in prison, hired for or fired from a job, admitted to or rejected from a college, or granted or denied a loan.

Certainly, basing such decisions on ML systems, which trace correlations of any kind and have no access to meaning or context, exposes people to all sorts of discrimination, abuse, and harm, since ML systems cannot identify a person’s character or predict his or her future actions any better than astrology can. Large technology corporations have responded to the vast evidence of the harm and injustice generated by algorithmic decision-making with a strategy similar to that already employed by Big Tobacco: funding research and academic study in order to legitimize their practices and to ensure that the results, the theoretical framing, and even the tone of the research are consistent with their business model.

The family of narratives deliberately spread by tech giants, called “AI ethics”, suppresses the idea that labeling people as things and treating them as such is tantamount to denying them the recognition of any rights, inevitably harms weaker individuals, and should therefore be banned.

Instead of simply rejecting automated statistical decisions, they present AI ethics as a matter of algorithmic fairness and value alignment, as though the only problem were isolated, correctable biases; as though algorithms could be equipped with the human skills required to make moral judgments; and as though the moral values embedded in ML systems could simply be chosen by engineers and translated into computational terms.

Thus, “AI ethics” narratives are based on imposture and mystification: on a false narrative, exploiting three fundamental features of magical thinking, about what machine learning systems are and are not actually capable of doing, and on a misconception of ethics.

Taken seriously, AI ethics would require artificial general intelligence (AGI).

In the absence of AGI, algorithmic fairness and value alignment cannot be anything more than cargo-cult ethics and ethics washing, i.e., a tool of distraction used to avoid legal regulation.

The distortion of ethics, which frames AI ethics in the deterministic logic of the fait accompli, has an anti-democratic nature much like any other pretense designed for the sake of power.

It is a mystification whereby public issues of structural injustice, whose solution would be very costly for tech giants, are substituted with science fiction, and law is replaced with industry self-regulation. By turning concrete issues into abstract and empty statements, collective issues into individual duties, and political issues into technical ones, tech giants succeed in evading democratic control and legal regulation. This mystification leads one to believe that moral questions about the deployment of AI amount to an esoteric doctrine, a matter of trolley dilemmas and advanced mathematics to be delegated to specialists and engineers and solved by technical adjustments, rather than a matter of monopolistic power that should be addressed by legal tools.

Thus, “AI ethics” narratives achieve the goal, cherished by public and private oligarchies, of neutralizing social conflict by replacing political struggle with the promise of technology.

Once the mystification of “AI ethics” narratives is unveiled, a Pandora’s box of the moral questions posed by intellectual monopoly capitalism will be opened, from the overcollection of personal data to exploitation, expropriation, and de-humanization. It will then be clear that legal intervention is required and that, in order to achieve it, social conflict probably is as well.

Notes

Reference: Daniela Tafani, What's wrong with "AI ethics" narratives, in «Bollettino telematico di filosofia politica», 2022, pp. 1-22, https://commentbfp.sp.unipi.it/daniela-tafani-what-s-wrong-with-ai-ethics-narratives

Files

Daniela Tafani - What's wrong with AI ethics narratives - Bollettino telematico di filosofia politica - 2022.pdf