Conference paper Open Access
Towards a Professional Gesture Recognition with RGB-D from Smartphone

Monivar, Pablo Vicente; Manitsaris, Sotiris; Glushkova, Alina (Centre for Robotics, MINES ParisTech, PSL Université)

Published by Zenodo, 30 March 2020. DOI: 10.1007/978-3-030-34995-0_22. Record: https://zenodo.org/record/5658483. Communities: collaborate_project, mingei-h2020.

Abstract. The goal of this work is to build the basis for a smartphone application that records human motion data, trains machine learning algorithms, and recognizes professional gestures. First, we take advantage of the new mobile-phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose-estimation algorithm based on Deep Learning extracts the 2D human skeleton, and the third dimension is recovered from the depth channel. Finally, we use a gesture recognition engine based on K-means and Hidden Markov Models (HMMs). The performance of the machine learning pipeline has been evaluated on professional gestures using silk-weaving and TV-assembly datasets.

License: Creative Commons Attribution 4.0 International (Open Access).

Funding: European Commission, H2020 grant 822336 (Representation and Preservation of Heritage Crafts) and H2020 grant 820767 (Co-production CeLL performing Human-Robot Collaborative AssEmbly).
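The abstract describes recovering the third dimension of each skeleton joint from the depth channel. Below is a minimal sketch of that step, assuming a pinhole camera model and a depth map aligned with the RGB frame; the function name, the intrinsics fx, fy, cx, cy, and the array shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lift_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project 2D pixel keypoints to 3D camera coordinates.

    keypoints_2d: (J, 2) array of (u, v) pixel coordinates, one row per joint.
    depth_map:    (H, W) depth image aligned with the RGB frame, in metres.
    fx, fy, cx, cy: pinhole intrinsics of the RGB-D sensor (assumed known).
    """
    joints_3d = np.zeros((len(keypoints_2d), 3))
    for j, (u, v) in enumerate(keypoints_2d):
        # Depth at the joint's pixel location (rows are v, columns are u).
        z = depth_map[int(round(v)), int(round(u))]
        # Standard pinhole back-projection: X = (u - cx) * Z / fx, etc.
        joints_3d[j] = [(u - cx) * z / fx, (v - cy) * z / fy, z]
    return joints_3d
```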
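For the recognition engine, the abstract names K-means and HMMs but does not give the exact design. One common way to combine them, sketched below under assumed libraries (scikit-learn for K-means, hmmlearn >= 0.2.8 for CategoricalHMM), is to quantize per-frame skeleton features into a discrete codebook and train one HMM per gesture class; a new sequence is then labelled by the model with the highest log-likelihood. Treat this as an illustrative assumption, not the paper's engine.

```python
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn import hmm  # CategoricalHMM requires hmmlearn >= 0.2.8

N_SYMBOLS = 16  # size of the K-means codebook (illustrative value)
N_STATES = 5    # hidden states per gesture HMM (illustrative value)

def train(train_sequences):
    """train_sequences: dict mapping gesture label -> list of (T, D) arrays
    of per-frame skeleton features."""
    # Fit one shared codebook over all frames of all training sequences.
    all_frames = np.vstack([s for seqs in train_sequences.values() for s in seqs])
    codebook = KMeans(n_clusters=N_SYMBOLS, n_init=10).fit(all_frames)

    models = {}
    for label, seqs in train_sequences.items():
        # Quantize each sequence to symbols; hmmlearn uses `lengths` to keep
        # the concatenated training sequences separate.
        symbols = [codebook.predict(s).reshape(-1, 1) for s in seqs]
        lengths = [len(s) for s in symbols]
        model = hmm.CategoricalHMM(n_components=N_STATES, n_iter=50)
        model.fit(np.vstack(symbols), lengths)
        models[label] = model
    return codebook, models

def recognize(sequence, codebook, models):
    """Return the gesture label whose HMM gives the highest log-likelihood."""
    symbols = codebook.predict(sequence).reshape(-1, 1)
    return max(models, key=lambda label: models[label].score(symbols))
```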
Views: 57
Downloads: 57
Data volume: 144.5 MB
Unique views: 45
Unique downloads: 55