Published January 4, 2026 | Version v1
Journal article (Open Access)

BIAS AND FAIRNESS IN ARTIFICIAL INTELLIGENCE APPLICATIONS WITHIN THE FINANCIAL SECTOR

Description

AI-powered automation of credit evaluation, fraud detection, and investment decision-making is revolutionizing
the financial sector. Nevertheless, the widespread use of AI in finance is also raising considerable concerns
about bias and fairness. Bias arises when algorithms trained on skewed or unbalanced data behave in unfair
or discriminatory ways. In finance, this can mean restricted access to credit for some applicants, unjustified
loan rejections, or transactions from a particular demographic group being disproportionately flagged as
suspicious.
These biases are often invisible and can stem from biased datasets, poorly chosen features, or implicit human
decisions made during model design. Once such a system is deployed, it can create feedback loops that deepen
the original inequalities, undermine financial inclusion, and erode public trust in the system.
To tackle these issues, institutions are adopting fairness-aware data practices, performing algorithmic audits,
and applying explainable AI techniques such as LIME and SHAP. Ethics-based governance frameworks and
regulations such as the EU AI Act, which make transparency and accountability prerequisites, are becoming
the norm.
Ultimately, the responsible use of AI calls for a careful balance of ethics and innovation. Through a
commitment to fairness, continuous oversight, and a diversity of voices in AI design, financial institutions can
deploy technology as a tool for equity, trust, and sustainable long-term growth.

Files

JAN09.pdf (205.3 kB)
md5:c2126c70eaffdfceefdf14415201480c