Published December 2, 2025 | Version v1
Journal article Open

FRAUD AND DISINFORMATION PREVENTION – FINANCIAL INSTITUTIONS AND NEWS OUTLETS USING AI TO VERIFY THE AUTHENTICITY OF MEDIA (SPOTTING AI-DOCTORED VOICES OR VIDEOS IN SCAMS AND ELECTIONS), THEREBY PRESERVING TRUST AND SAFETY


Description

The emergence of artificial intelligence (AI) has created significant challenges for fraud and disinformation
detection and deterrence, particularly for financial institutions and news outlets. As AI technologies
such as deepfakes, voice synthesis, and manipulated video content become increasingly sophisticated, the need
for robust media-verification systems has never been more acute. This paper examines the role of
AI in verifying the authenticity of media, with particular reference to its use in preventing financial fraud and
countering electoral disinformation. It reviews the AI technologies currently employed by financial institutions
to detect fraud, such as AI-based voice and face recognition, as well as the deepfake-detection tools used by news
outlets to maintain media integrity. The findings suggest that although AI plays an important role in protecting trust,
concerns remain about its accuracy, scalability, and ethics. The paper concludes by highlighting AI's
potential to support authenticity and transparency, while calling for continued innovation and regulation to
counter emerging threats in a rapidly changing digital environment.

Files

DEC06.pdf (281.6 kB)
md5:25fe05c6971d5ddc8fd274b25a86c0ca
