Journal article Open Access

Deep Motifs and Motion Signatures

Aristidou Andreas; Cohen-Or Daniel; Hodgins Jessica K; Chrysanthou Yiorgos; Shamir Ariel


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="URL">https://zenodo.org/record/2658798</identifier>
  <creators>
    <creator>
      <creatorName>Aristidou Andreas</creatorName>
      <affiliation>The Interdisciplinary Centre</affiliation>
    </creator>
    <creator>
      <creatorName>Cohen-Or Daniel</creatorName>
      <affiliation>Tel-Aviv University</affiliation>
    </creator>
    <creator>
      <creatorName>Hodgins Jessica K</creatorName>
      <affiliation>Carnegie Mellon University</affiliation>
    </creator>
    <creator>
      <creatorName>Chrysanthou Yiorgos</creatorName>
      <affiliation>Research Centre on Interactive Media Smart Systems and Emerging Technologies</affiliation>
    </creator>
    <creator>
      <creatorName>Shamir Ariel</creatorName>
      <affiliation>The Interdisciplinary Centre</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Deep Motifs and Motion Signatures</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2018</publicationYear>
  <subjects>
    <subject>Motion capture</subject>
    <subject>Motion processing</subject>
    <subject>Animation</subject>
    <subject>Motion Word</subject>
    <subject>Motif</subject>
    <subject>Motion Signature</subject>
    <subject>Convolutional Network</subject>
    <subject>Triplet Loss</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2018-11-01</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="Text">Journal article</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/2658798</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1145/3272127.3275038</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/rise-teaming-cyprus</relatedIdentifier>
  </relatedIdentifiers>
  <version>Published</version>
  <rightsList>
    <rights rightsURI="http://creativecommons.org/licenses/by-nc-nd/4.0/legalcode">Creative Commons Attribution Non Commercial No Derivatives 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Many analysis tasks for human motion rely on high-level similarity between sequences of motions, that are not an exact matches in joint angles, timing, or ordering of actions. Even the same movements performed by the same person can vary in duration and speed. Similar motions are characterized by similar sets of actions that appear frequently. In this paper we introduce motion motifs and motion signatures that are a succinct but descriptive representation of motion sequences. We first break the motion sequences to short-term movements called motion words, and then cluster the words in a high-dimensional feature space to find motifs. Hence, motifs are words that are both common and descriptive, and their distribution represents the motion sequence. To cluster words and find motifs, the challenge is to define an effective feature space, where the distances among motion words are semantically meaningful, and where variations in speed and duration are handled. To this end, we use a deep neural network to embed the motion&amp;nbsp;words into feature space using a triplet loss function. To define a signature, we choose a finite set of motion-motifs, creating a bag-of-motifs representation for the sequence. Motion signatures are agnostic to movement order, speed or duration variations, and can distinguish fine-grained differences between motions of the same class. We illustrate examples of characterizing motion sequences by motifs, and for the use of motion signatures in anumber of applications.&lt;/p&gt;</description>
    <description descriptionType="Other">This work has been partly supported by the project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 739578 (RISE – Call: H2020-WIDESPREAD-01-2016-2017-TeamingPhase2)  and the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development. 

©2018 ACM, under the CC BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Citation: Andreas Aristidou, Daniel Cohen-Or, Jessica K. Hodgins, Yiorgos Chrysanthou, and Ariel Shamir. 2018. Deep motifs and motion signatures. ACM Trans. Graph. 37, 6, Article 187 (December 2018), 13 pages. DOI: https://doi.org/10.1145/3272127.3275038</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/739578/">739578</awardNumber>
      <awardTitle>Research Center on Interactive Media, Smart Systems and Emerging Technologies</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
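
The abstract above outlines a concrete pipeline: segment motion capture into short motion words, embed them with a network trained under a triplet loss, cluster the embeddings into motifs, and summarize a sequence as an order-agnostic bag-of-motifs histogram. The following is a minimal sketch of that flow in PyTorch and scikit-learn; the encoder architecture, window length WORD_LEN, joint count, and motif count N_MOTIFS are placeholder assumptions for illustration, not the paper's actual design.

# Sketch of the motion-word -> triplet embedding -> motif clustering ->
# bag-of-motifs signature pipeline described in the abstract. All sizes
# and the encoder shape are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

WORD_LEN, N_JOINTS, EMBED_DIM, N_MOTIFS = 16, 21, 32, 64  # assumed sizes

class MotionWordEncoder(nn.Module):
    """Embed a (joints*3, WORD_LEN) motion word into a feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_JOINTS * 3, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # temporal pooling -> robustness to speed/duration
            nn.Flatten(),
            nn.Linear(64, EMBED_DIM),
        )
    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)  # unit-length embeddings

encoder = MotionWordEncoder()
triplet = nn.TripletMarginLoss(margin=0.2)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# One training step on a hypothetical batch of anchor/positive/negative words
# (random tensors stand in for real motion-capture data).
anchor = torch.randn(8, N_JOINTS * 3, WORD_LEN)
positive = torch.randn(8, N_JOINTS * 3, WORD_LEN)  # e.g. time-warped anchors
negative = torch.randn(8, N_JOINTS * 3, WORD_LEN)  # words from other motions
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
opt.zero_grad(); loss.backward(); opt.step()

def motion_signature(words: torch.Tensor, kmeans: KMeans) -> np.ndarray:
    """Bag-of-motifs: normalized histogram of motif assignments, order-agnostic."""
    with torch.no_grad():
        emb = encoder(words).numpy()
    hist = np.bincount(kmeans.predict(emb), minlength=N_MOTIFS)
    return hist / max(hist.sum(), 1)

# Fit motifs by clustering embedded words from a corpus, then describe a sequence.
corpus = torch.randn(512, N_JOINTS * 3, WORD_LEN)
with torch.no_grad():
    kmeans = KMeans(n_clusters=N_MOTIFS, n_init=10).fit(encoder(corpus).numpy())
sig = motion_signature(torch.randn(40, N_JOINTS * 3, WORD_LEN), kmeans)

Two signatures built this way can then be compared with any histogram distance, which is what makes the representation insensitive to the ordering and speed of the underlying actions.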
Views 70
Downloads 40
Data volume 1.0 GB
Unique views 56
Unique downloads 39
