Combined Convolutional Neural Network (CNN) and Keypoint-based Method for Recognizing Finger Interaction Intent States
Description
This work addresses the challenge of developing intuitive and accessible control systems for upper-limb prostheses, with particular emphasis on pediatric applications. Conventional approaches, such as myoelectric and mechanical control, often suffer from limitations related to signal stability, usability, or reliability. The proposed hybrid method introduces computer vision as an alternative command channel, enabling vision-based prosthetic control driven by the interpretation of user intention, without relying on wearable bioelectrical or tactile sensors. Intention states are inferred from observed hand interactions using contact-based labeling logic and a combined CNN and keypoint-based framework. A hardware-software prototype was implemented on a microcontroller platform with a 3D-printed robotic finger, in which a vision-based AI model performs real-time intention-state classification and generates the corresponding actuation commands. A preliminary cost and feasibility assessment indicates that the approach is potentially suitable for cost-sensitive assistive devices. Given the increasing number of individuals injured as a result of Russian military aggression, the solution offers a highly relevant and socially significant assistive mechanism for both civilians and military personnel. The system also holds significant potential for users with congenital limb malformations, enabling a more active lifestyle through responsive robotic prosthetic fingers. Consequently, the proposed system represents a practical and scalable technology that should be made accessible to every individual in need.
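The abstract does not include implementation details, but the pipeline it describes (keypoint extraction, contact-based labeling, a hybrid CNN-plus-keypoint classifier, and actuation commands to a microcontroller) can be illustrated with a minimal sketch. The code below assumes MediaPipe Hands for keypoint extraction, a small PyTorch fusion network, a thumb-index fingertip distance heuristic for the contact label, and a pyserial link to the controller; the class names, threshold, number of intention states, serial port, and one-byte command protocol are all illustrative assumptions, not the paper's actual design.

```python
# Hedged sketch of a vision-based intention-state pipeline. Everything marked
# "assumed" is an illustrative placeholder, not taken from the paper.
import cv2
import mediapipe as mp
import numpy as np
import serial
import torch
import torch.nn as nn

NUM_STATES = 3            # assumed intention states, e.g. idle / reach / grasp
CONTACT_THRESHOLD = 0.05  # assumed normalized thumb-index distance for "contact"


class HybridIntentNet(nn.Module):
    """Assumed fusion model: CNN over the hand crop + MLP over 21 (x, y, z)
    keypoints, concatenated into a single intention-state classifier."""

    def __init__(self, num_states: int = NUM_STATES):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mlp = nn.Sequential(nn.Linear(21 * 3, 64), nn.ReLU())
        self.head = nn.Linear(32 + 64, num_states)

    def forward(self, crop, keypoints):
        return self.head(torch.cat([self.cnn(crop), self.mlp(keypoints)], dim=1))


def contact_label(landmarks) -> bool:
    """Contact-based labeling heuristic: thumb tip (index 4) close to the
    index fingertip (index 8) in normalized MediaPipe coordinates."""
    t, i = landmarks[4], landmarks[8]
    dist = np.linalg.norm([t.x - i.x, t.y - i.y, t.z - i.z])
    return dist < CONTACT_THRESHOLD


def main():
    model = HybridIntentNet().eval()             # trained weights would be loaded here
    mcu = serial.Serial("/dev/ttyUSB0", 115200)  # assumed MCU serial port
    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_hand_landmarks:
            continue
        lm = result.multi_hand_landmarks[0].landmark
        # Flatten the 21 keypoints into a (1, 63) feature vector.
        kp = torch.tensor([[c for p in lm for c in (p.x, p.y, p.z)]],
                          dtype=torch.float32)
        # Whole-frame resize stands in for a proper hand-region crop.
        crop = torch.from_numpy(
            cv2.resize(frame, (64, 64)).transpose(2, 0, 1)[None] / 255.0
        ).float()
        with torch.no_grad():
            state = int(model(crop, kp).argmax(dim=1))
        print(f"state={state} contact={contact_label(lm)}")
        # Map the predicted intention state to an actuation command;
        # the single-byte protocol here is an assumed placeholder.
        mcu.write(bytes([state]))
    cap.release()


if __name__ == "__main__":
    main()
```

In a setup like this, the contact heuristic would plausibly serve offline to auto-label training frames for the hybrid classifier, while at run time the trained model alone drives the actuation commands sent to the robotic finger.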
Files

| Name | Size | MD5 |
|---|---|---|
| 140-148.pdf | 838.3 kB | md5:6ae45525ed7714f0164d50662a79020a |