Ostmann, Florian; Dorobantu, Cosmina
Artificial intelligence (AI) plays a central role in current processes of technological change in financial services. Its prominent place on innovation agendas speaks to the significant benefits that AI technologies can enable for firms, consumers, and markets. At the same time, AI systems have the potential to cause significant harm. In light of this fact, recent years have seen a growing recognition of the importance of AI adoption being guided by considerations of responsible innovation.
The aim of this report is to inform and advance the debate about responsible AI in the context of financial services. It provides an introduction to relevant technological concepts, discusses general challenges and guiding principles for the adoption of AI, maps out potential benefits and harms associated with the use of AI in financial services, and examines the fundamental
role of AI transparency in pursuing responsible innovation.
Introduction to AI
The field of AI has a decades-long history and substantial links to statistical methods with
long-standing applications in financial services. The adoption of AI in financial services is underpinned by three distinct elements of innovation: machine learning (ML), non-traditional data, and automation. AI systems can combine all three elements or a subset of them. When considering a particular use of AI, it is helpful to distinguish between these three elements of innovation and examine their respective roles. Doing so is crucial for an adequate understanding of AI-related risk, as each element can give rise to distinct challenges.
General challenges and guiding principles for the responsible adoption of AI
ML, non-traditional data, and automation give rise to various challenges for responsible innovation. These challenges, which provide the foundation for understanding the causes of AI-related risks, are often related to four background considerations examined in the report.
Against the background of these considerations, AI can give rise to specific concerns. These include concerns about (i) AI systems’ performance, (ii) legal and regulatory compliance, (iii) competent use and adequate human oversight, (iv) firms’ ability to explain decisions made with AI systems to the individuals affected by them, (v) firms’ ability to be responsive to customer requests for information, assistance, or rectification, and (vi) social and economic impacts.
In light of these concerns, recent years have seen a rapidly growing literature on AI ethics principles to guide the responsible adoption of AI. The principle of transparency, in particular, plays a fundamental role. It acts as an enabler for other principles and is a logical first step for considering responsible AI innovation.
Potential AI-related benefits and harms in financial services
The use of AI in financial services can have concrete impacts on consumers and markets that may be relevant from a regulatory and ethical perspective. Areas of impact include consumer protection, financial crime, competition, the stability of firms and markets, and cybersecurity. In each area, the use of AI can lead to benefits as well as harms.
AI transparency and its importance for responsible innovation
The general challenges that AI poses for responsible innovation, combined with the concrete harms that its use in financial services can cause, make it necessary to ensure and to demonstrate that AI systems are trustworthy and used responsibly. AI transparency – the availability of information about AI systems to relevant stakeholders – is crucial in relation to both of these needs.
Information about AI systems can take different forms and serve different purposes. A holistic approach to AI transparency involves giving due consideration to different types of information, different types of stakeholders, and different reasons for stakeholders’ interest in information.
Relevant transparency needs include access to information about an AI system’s logic (system transparency) and information about the processes surrounding the system’s design, development, and deployment (process transparency). For both categories, stakeholders that need access to information can include occupants of different roles within the firm using the AI system (internal transparency) as well as external stakeholders such as customers and regulators (external transparency).
For system and process transparency alike, there are important questions about how information can be obtained, managed, and communicated in ways that are intelligible and meaningful to different types of stakeholders. Both types of transparency – in their internal as well as their external form – can be equally relevant when it comes to ensuring and demonstrating that applicable concerns are addressed effectively.
In covering these topics, the report provides a comprehensive conceptual framework for examining AI’s ethical implications and defining expectations about AI transparency in the financial services sector. By doing so, it hopes to advance the debate on responsible AI innovation in this crucial domain.