INDIAN SIGN LANGUAGE RECOGNITION USING YOLOV5
Abstract
In today's rapidly advancing technological era, marked by ubiquitous home automation and a demand for streamlined solutions, this project addresses communication challenges faced by individuals with hearing and speech impairments. Sign language, a vital mode of expression for the deaf and mute, is the focal point of the work. Using deep learning algorithms, including YOLOv5, the system analyzes and interprets sign language gestures from input images. The goal is to translate these gestures into text and, subsequently, into audio, providing an end-to-end communication solution. A diverse dataset covering English letters, numbers, and words strengthens the system's proficiency. Beyond its technical contribution, the project champions inclusivity by breaking down communication barriers for those who have long struggled to express themselves effectively.
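The pipeline the abstract describes, detecting a gesture, mapping it to a label, assembling text, and finally synthesizing speech, can be sketched as below. This is an illustrative assumption, not the paper's actual implementation: the class labels, confidence threshold, and helper function are hypothetical, and the YOLOv5 inference step (e.g. via `torch.hub`) and the text-to-speech step are stubbed out in comments.

```python
# Hypothetical sketch of the gesture-to-speech pipeline described in the
# abstract. Real detection would come from a YOLOv5 model, e.g.:
#   model = torch.hub.load('ultralytics/yolov5', 'custom', path='isl_weights.pt')
# and the final text would be passed to a text-to-speech engine.
# Both steps are stubbed here; only the label-to-text assembly is shown.

def detections_to_text(detections, conf_threshold=0.5):
    """Turn per-frame detections (label, confidence) into a text string.

    Keeps only confident detections and collapses consecutive repeats,
    since the same sign is typically detected across many frames.
    """
    tokens = []
    for label, conf in detections:
        if conf < conf_threshold:
            continue  # discard low-confidence detections
        if not tokens or tokens[-1] != label:
            tokens.append(label)  # skip consecutive duplicates
    return " ".join(tokens)

# Simulated detections over a few video frames (illustrative labels).
frames = [("HELLO", 0.91), ("HELLO", 0.88), ("A", 0.42), ("THANK_YOU", 0.79)]
print(detections_to_text(frames))  # HELLO THANK_YOU
```

The duplicate-collapsing step matters in practice because a signer holds each gesture across many consecutive frames; without it, the output text would repeat every word dozens of times.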
Files

| Name | Size |
|---|---|
| Nacore 24 P207.pdf (md5:9ea7f58c9ac0684b993127c9f41c3ab8) | 1.1 MB |