
Conference paper Open Access

A Visuo-Haptic Guidance Interface for Mobile Collaborative Robotic Assistant (MOCA)

Lamon, Edoardo; Fusaro, Fabio; Balatti, Pietro; Kim, Wansoo; Ajoudani, Arash

In this work, we propose a novel visuo-haptic guidance interface that enables mobile collaborative robots to follow human instructions in a way that is understandable by non-experts. The interface is composed of a haptic admittance module and a human visual tracking module. The haptic guidance enables an individual to guide the robot end-effector in the workspace to reach and grasp arbitrary items. The visual interface, on the other hand, uses a real-time human tracking system and enables autonomous and continuous navigation of the mobile robot towards the human, with the ability to avoid static and dynamic obstacles along its path. To ensure safer human-robot interaction, the visual tracking goal is set outside a certain area around the human body; entering this area switches the robot's behaviour to the haptic mode. The two modes are executed by two different controllers: a mobile-base admittance controller for the haptic guidance, and the robot's whole-body impedance controller, which enables physically coupled and controllable locomotion and manipulation. The proposed interface is validated experimentally in a scenario where a human-guided robot performs the loading and transportation of a heavy object in a cluttered workspace, illustrating the potential of the proposed Follow-Me interface to remove the external loading from the human body in this type of repetitive industrial task.
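As a rough illustration of the behaviour described in the abstract (not the authors' implementation), the sketch below shows one possible way to switch between the visual Follow-Me mode and the haptic guidance mode based on the human-robot distance, together with a simple admittance law for the mobile base. The radius, virtual mass and damping values, and control period are illustrative assumptions.

```python
import numpy as np

HAPTIC_RADIUS = 1.0   # [m] assumed size of the area around the human body
DT = 0.01             # [s] assumed control period

# Assumed virtual inertia and damping of the mobile-base admittance law
M_v = np.diag([60.0, 60.0])    # virtual mass [kg]
D_v = np.diag([120.0, 120.0])  # virtual damping [Ns/m]


def select_mode(robot_pos, human_pos):
    """Switch to haptic guidance once the robot enters the area around the human."""
    dist = np.linalg.norm(robot_pos - human_pos)
    return "haptic" if dist < HAPTIC_RADIUS else "visual"


def admittance_step(f_ext, vel):
    """One Euler step of the base admittance law M_v * dv/dt + D_v * v = f_ext."""
    acc = np.linalg.solve(M_v, f_ext - D_v @ vel)
    return vel + acc * DT


def follow_me_goal(robot_pos, human_pos):
    """Place the navigation goal on the boundary of the area, not on the human."""
    direction = robot_pos - human_pos
    direction /= np.linalg.norm(direction)
    return human_pos + HAPTIC_RADIUS * direction
```

In this hypothetical scheme, the planner tracks the goal returned by `follow_me_goal` while in the visual mode, and the guiding forces measured at the end-effector drive `admittance_step` once the haptic mode is active.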

Files (28.2 MB)

A Visuo-Haptic Guidance Interface for Mobile Collaborative Robotic Assistant (MOCA).pdf (28.2 MB)
md5:99d03d297fe4c8c3b5dabea0429e0db1
                   All versions   This version
Views              78             78
Downloads          232            232
Data volume        6.5 GB         6.5 GB
Unique views       65             65
Unique downloads   228            228