AI Sign Language Translation System
Description
This preprint presents a real-time sign language translation system designed to bridge communication gaps between sign language users and non-signers. The system integrates two complementary deep learning approaches: a convolutional neural network (CNN) for image-based gesture classification and a multilayer perceptron (MLP) operating on MediaPipe hand landmark features. The application is deployed as a web-based platform using Flask, enabling real-time video streaming and live gesture recognition via standard webcams. Experimental results demonstrate that the CNN model achieves higher accuracy on complex gestures, while the MLP model offers faster inference and lower computational cost. The modular design supports expansion to additional sign vocabularies and further research in assistive and accessibility-focused technologies.
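The landmark-based MLP pipeline described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes MediaPipe Hands' 21 landmarks (x, y, z each) are flattened into a 63-dimensional feature vector and passed through a small feed-forward network; the layer sizes, class count, and function names here are hypothetical, and the weights are random stand-ins for trained parameters.

```python
import numpy as np

NUM_LANDMARKS = 21            # MediaPipe Hands returns 21 landmarks per hand
FEATURES = NUM_LANDMARKS * 3  # (x, y, z) per landmark -> 63 input features
NUM_CLASSES = 26              # e.g. one class per fingerspelled letter (assumed)

rng = np.random.default_rng(0)

# Hypothetical weights; in the real system these would be learned during training.
W1 = rng.standard_normal((FEATURES, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, NUM_CLASSES)) * 0.1
b2 = np.zeros(NUM_CLASSES)

def classify_landmarks(landmarks: np.ndarray) -> int:
    """Classify one hand pose from a (21, 3) landmark array."""
    x = landmarks.reshape(-1)         # flatten to a 63-dim feature vector
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    logits = h @ W2 + b2              # class scores
    return int(np.argmax(logits))

# A fake frame of landmarks stands in for live MediaPipe output.
frame_landmarks = rng.random((NUM_LANDMARKS, 3))
pred = classify_landmarks(frame_landmarks)
```

Because this model consumes only 63 numbers per frame rather than full image pixels, each prediction is a handful of small matrix multiplications, which is consistent with the faster inference the preprint reports for the MLP relative to the CNN.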
Files

| Name | Size | Checksum |
|---|---|---|
| AI_Sign_Language_Translation_System (1).pdf | 125.3 kB | md5:7f9c24db0e45bbfc067d9a9cdf5e7d8a |
Additional details
Software
- Programming language
- Python