Legal and Criminological Challenges of AI-Driven Cybercrime: Regulating Deepfakes, Voice Cloning, and Automated Fraud in the Digital Age
Abstract:
Rapid developments in Artificial Intelligence (AI) in recent years have transformed how digital content is created, shared, and consumed. While AI offers substantial benefits for innovation, it has also enabled new forms of cybercrime, including deepfakes, voice cloning, and automated fraud. Deepfakes, AI-generated synthetic media that replicate a person's likeness or voice, have become a serious threat to privacy, reputation, and public trust. They are being used for misinformation, financial fraud, and online harassment, raising major legal and ethical concerns.
These emerging crimes challenge traditional criminal justice systems, as existing cyber laws are often inadequate to address such technologically advanced offences. Law enforcement agencies also face difficulties in detecting, proving, and prosecuting deepfake-related crimes due to limited technical capacity and the lack of clear legal frameworks.
This study explores these challenges from a legal and criminological perspective, focusing on how victims are affected, how law enforcement agencies respond, and how AI itself can be used as a tool to detect and prevent such crimes.
References
- Abdul Rahman Karim. (2026). Legal and Criminological Challenges of AI-Driven Cybercrime (Version 1). https://doi.org/10.5281/zenodo.18796965