Published April 16, 2026 | Version v1
Journal article (Open Access)

AI-Based Live Translation and Dubbing Systems for Multilingual Video Content

Description

India's linguistic diversity poses a significant challenge to accessing digital video content across geographical regions. News, educational materials, and other multimedia are often available in only a single language, which restricts their reach to a wider audience. Recent advances in Artificial Intelligence have enabled automatic speech recognition, machine translation, text-to-speech synthesis, and lip synchronization, making multilingual video translation feasible. This review paper surveys the AI-based systems and techniques currently used for live translation and dubbing of video content. It evaluates speech-to-text solutions, neural machine translation systems, voice synthesis systems, subtitle generation systems, and lip-sync systems. A comparative analysis of existing solutions reveals their advantages and constraints, particularly with regard to Indian languages. The review identifies the major research gaps: the absence of integrated end-to-end systems, offline limitations, and difficulties in real-time processing. Based on these observations, the paper argues for an interdisciplinary AI-based framework to improve the accessibility, efficiency, and scalability of multilingual video translation systems.
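The pipeline described above chains automatic speech recognition, machine translation, and text-to-speech synthesis, carrying timing metadata through each stage so that subtitles and lip-sync can be aligned. The following minimal sketch illustrates that staged architecture; all function names and data structures here are hypothetical placeholders (a real system would invoke trained ASR, NMT, and TTS models at each step), not code from the reviewed systems.

```python
# Sketch of a staged ASR -> NMT -> TTS dubbing pipeline.
# Every stage below is a stub standing in for a real model call.

from dataclasses import dataclass


@dataclass
class Segment:
    start: float  # segment start time in seconds
    end: float    # segment end time in seconds
    text: str     # transcribed or translated text


def recognize_speech(audio: bytes) -> list:
    """Hypothetical ASR stage: audio -> timed source-language segments."""
    return [Segment(0.0, 2.5, "Welcome to the news bulletin.")]


def translate(segments: list, target_lang: str) -> list:
    """Hypothetical NMT stage: preserves timing, replaces the text."""
    return [Segment(s.start, s.end, f"[{target_lang}] {s.text}")
            for s in segments]


def synthesize(segments: list) -> list:
    """Hypothetical TTS stage: each segment becomes timed audio,
    which a lip-sync module could then align with the video frames."""
    return [(s.start, s.end, s.text.encode("utf-8")) for s in segments]


def dub(audio: bytes, target_lang: str) -> list:
    """Chain the three stages; the (start, end) metadata flowing through
    them is what makes subtitle generation and lip-sync possible."""
    return synthesize(translate(recognize_speech(audio), target_lang))
```

The key design point the review's gap analysis highlights is integration: in practice these stages are often separate tools, so an end-to-end framework must propagate timing information across all of them, as `dub` does here.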

Files

CSE17-ICETIS-2026 AI-Based Live Translation.pdf (513.3 kB)
