Conference paper Open Access
Bujalance Martín, Jesús; Moutarde, Fabien
With the rise of collaborative robots, human-robot interaction needs to be as natural as possible. In this work, we present a framework for real-time continuous motion control of a real collaborative robot (cobot) from gestures captured by an RGB camera. Using existing deep-learning techniques, we obtain human skeletal pose information in both 2D and 3D. We use it to design a controller that makes the robot mirror the movements of a human arm or hand in real time.
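As a rough illustration of the mirroring idea described above, one can map a 3D wrist keypoint returned by a pose estimator into a target position for the robot's end effector. The sketch below is a minimal assumption on our part, not the paper's actual controller: the function name, the shoulder-frame origin, and the 1:1 scaling are all hypothetical simplifications.

```python
import numpy as np

def wrist_to_effector_target(wrist_xyz, human_origin, robot_origin, scale=1.0):
    """Hypothetical mapping: express the wrist keypoint relative to a
    human reference point (e.g. the shoulder), scale it, and re-express
    it in the robot's base frame as an end-effector target."""
    wrist = np.asarray(wrist_xyz, dtype=float)
    offset = wrist - np.asarray(human_origin, dtype=float)  # human frame
    return np.asarray(robot_origin, dtype=float) + scale * offset

# Wrist 30 cm in front of the shoulder, mirrored 1:1 at the robot base.
target = wrist_to_effector_target([0.3, 0.0, 1.4],   # wrist (camera frame, m)
                                  [0.0, 0.0, 1.4],   # shoulder (camera frame, m)
                                  [0.5, 0.0, 0.8])   # robot end-effector home (m)
```

A real controller would additionally smooth the keypoint stream and enforce the robot's velocity and workspace limits before sending each target.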
Name | Size
---|---
article_icvs2019.pdf (md5:b600fd314152abe2f3df6c603ad1b26a) | 2.0 MB
 | All versions | This version
---|---|---
Views | 39 | 39
Downloads | 60 | 60
Data volume | 118.4 MB | 118.4 MB
Unique views | 36 | 36
Unique downloads | 57 | 57