Published June 1, 2025 | Version v1
Journal article | Open Access

Comprehensive Review of AI Hallucinations: Impacts and Mitigation Strategies for Financial and Business Applications

  • 1. Touro College

Description

This paper examines the phenomenon of AI hallucinations in generative AI systems, with a focus on large language models (LLMs), analyzing their root causes and evaluating mitigation strategies. Synthesizing insights from recent academic research and industry findings, we explain how hallucinations arise from data quality issues in training corpora, limitations in model architecture, lack of grounding, and the nature of the generative process itself. Through a systematic review of the current literature, we identify key patterns in how hallucinations emerge and examine the growing concern over their impact as AI becomes more embedded in decision-making systems. We then assess the risks across legal, business, and user-facing domains, highlighting consequences such as misinformation, erosion of trust, and productivity loss. To address these challenges, we survey mitigation techniques including data curation, retrieval-augmented generation (RAG), prompt engineering, fine-tuning, multi-model systems, and human-in-the-loop oversight. Our analysis draws on a wide range of academic and industry sources, offering both theoretical understanding and practical insights for AI practitioners. This is a review paper; all results are drawn from the cited references.
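
To make one of the surveyed mitigations concrete, the minimal Python sketch below illustrates the retrieval-augmented generation (RAG) pattern: retrieve supporting passages from a vetted corpus, then constrain the model's prompt to that retrieved context so answers stay grounded in source text. The toy corpus, the overlap-based retrieve function, and the echo_llm stub are illustrative assumptions for this page, not code from the paper.

# Minimal sketch of retrieval-augmented generation (RAG), one of the
# mitigation strategies the paper surveys. The corpus, scoring function,
# and LLM stub below are illustrative assumptions, not the paper's code.

from typing import Callable

# Toy document store standing in for a vetted knowledge base.
CORPUS = [
    "The Basel III accord sets minimum capital requirements for banks.",
    "Retrieval-augmented generation grounds model output in source text.",
    "Hallucinations are fluent but factually unsupported model outputs.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say so instead of guessing.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def rag_answer(query: str, llm: Callable[[str], str]) -> str:
    """Ground the generation step in retrieved evidence."""
    passages = retrieve(query, CORPUS)
    return llm(build_prompt(query, passages))

if __name__ == "__main__":
    # Stub LLM so the sketch runs without any API; swap in a real client.
    echo_llm = lambda prompt: f"[model response to]\n{prompt}"
    print(rag_answer("What is a hallucination in an LLM?", echo_llm))

In practice the word-overlap retriever would be replaced by an embedding-based vector search, but the grounding step, restricting the prompt to retrieved evidence and permitting an explicit "insufficient context" response, is what reduces hallucination.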

Files (331.6 kB)

ijcatr14061003.pdf (331.6 kB)
md5:3d68a3cc093c66aab1731ce1e054f888

Additional details

Dates

Available: 2025-06-01