Conference paper Open Access

A Visuo-Haptic Guidance Interface for Mobile Collaborative Robotic Assistant (MOCA)

Lamon, Edoardo; Fusaro, Fabio; Balatti, Pietro; Kim, Wansoo; Ajoudani, Arash


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Lamon, Edoardo</dc:creator>
  <dc:creator>Fusaro, Fabio</dc:creator>
  <dc:creator>Balatti, Pietro</dc:creator>
  <dc:creator>Kim, Wansoo</dc:creator>
  <dc:creator>Ajoudani, Arash</dc:creator>
  <dc:date>2020-10-31</dc:date>
  <dc:description>In this work, we propose a novel visuo-haptic guidance interface that enables mobile collaborative robots to follow human instructions in a way understandable by non-experts. The interface is composed of a haptic admittance module and a human visual tracking module. The haptic guidance enables an individual to guide the robot end-effector in the workspace to reach and grasp arbitrary items. The visual interface, on the other hand, uses a real-time human tracking system and enables autonomous and continuous navigation of the mobile robot towards the human, with the ability to avoid static and dynamic obstacles along its path. To ensure a safer human-robot interaction, the visual tracking goal is set outside of a certain area around the human body; entering this area switches the robot's behaviour to the haptic mode. The execution of the two modes is achieved by two different controllers: the mobile base admittance controller for the haptic guidance, and the robot's whole-body impedance controller, which enables physically coupled and controllable locomotion and manipulation. The proposed interface is validated experimentally, where a human-guided robot performs the loading and transportation of a heavy object in a cluttered workspace, illustrating the potential of the proposed Follow-Me interface in removing the external loading from the human body in this type of repetitive industrial task.</dc:description>
  <dc:identifier>https://zenodo.org/record/4020934</dc:identifier>
  <dc:identifier>10.5281/zenodo.4020934</dc:identifier>
  <dc:identifier>oai:zenodo.org:4020934</dc:identifier>
  <dc:relation>info:eu-repo/grantAgreement/EC/H2020/871237/</dc:relation>
  <dc:relation>doi:10.5281/zenodo.4020933</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/h2020-sophia</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:title>A Visuo-Haptic Guidance Interface for Mobile Collaborative Robotic Assistant (MOCA)</dc:title>
  <dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
  <dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
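The abstract describes an admittance controller that lets a human guide the mobile base by hand. The core idea of any admittance law is to map a measured interaction force to a commanded motion through a virtual mass-damper model, M·v̇ + D·v = f_ext. The sketch below is a minimal, hypothetical 1-D illustration of that principle; the parameter values and function name are assumptions for illustration, not the controller implemented in the paper.

```python
# Illustrative sketch of a 1-D admittance law: M * dv/dt + D * v = f_ext.
# M (virtual mass) and D (virtual damping) are made-up example values,
# not parameters from the MOCA controller.

def admittance_step(v, f_ext, M=10.0, D=25.0, dt=0.01):
    """Integrate the admittance dynamics one step with explicit Euler.

    v     : current commanded velocity (m/s)
    f_ext : measured interaction force (N)
    Returns the next commanded velocity.
    """
    a = (f_ext - D * v) / M  # virtual acceleration from the mass-damper model
    return v + a * dt


# A constant 5 N push drives the velocity toward the steady state
# f_ext / D = 5 / 25 = 0.2 m/s, so the robot "yields" to the human's force.
v = 0.0
for _ in range(1000):  # 10 s of simulated guidance
    v = admittance_step(v, f_ext=5.0)
```

Releasing the force (f_ext = 0) makes the velocity decay back to zero, which is why admittance control feels like pushing a well-damped cart: the robot moves only while the human applies force.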