{ "access": { "embargo": { "active": false, "reason": null }, "files": "public", "record": "public", "status": "open" }, "created": "2017-12-04T16:28:55.784750+00:00", "custom_fields": { "journal:journal": { "title": "Multimedia Signal Processing (MMSP)" }, "meeting:meeting": { "acronym": "MMSP", "dates": "17-18 October 2017", "title": "2017 IEEE 19th International Workshop on Multimedia Signal Processing" } }, "deletion_status": { "is_deleted": false, "status": "P" }, "files": { "count": 1, "enabled": true, "entries": { "MAIN.pdf": { "checksum": "md5:a856dd1418247c1e4407f4c52e0da262", "ext": "pdf", "id": "abde1577-5219-4feb-90d6-8027293425f5", "key": "MAIN.pdf", "metadata": null, "mimetype": "application/pdf", "size": 1340136 } }, "order": [], "total_bytes": 1340136 }, "id": "1078509", "is_draft": false, "is_published": true, "links": { "access": "https://zenodo.org/api/records/1078509/access", "access_links": "https://zenodo.org/api/records/1078509/access/links", "access_request": "https://zenodo.org/api/records/1078509/access/request", "access_users": "https://zenodo.org/api/records/1078509/access/users", "archive": "https://zenodo.org/api/records/1078509/files-archive", "archive_media": "https://zenodo.org/api/records/1078509/media-files-archive", "communities": "https://zenodo.org/api/records/1078509/communities", "communities-suggestions": "https://zenodo.org/api/records/1078509/communities-suggestions", "doi": "https://doi.org/10.1109/MMSP.2017.8122222", "draft": "https://zenodo.org/api/records/1078509/draft", "files": "https://zenodo.org/api/records/1078509/files", "latest": "https://zenodo.org/api/records/1078509/versions/latest", "latest_html": "https://zenodo.org/records/1078509/latest", "media_files": "https://zenodo.org/api/records/1078509/media-files", "parent": "https://zenodo.org/api/records/1078508", "parent_doi": "https://zenodo.org/doi/", "parent_html": "https://zenodo.org/records/1078508", "requests": "https://zenodo.org/api/records/1078509/requests", "reserve_doi": "https://zenodo.org/api/records/1078509/draft/pids/doi", "self": "https://zenodo.org/api/records/1078509", "self_doi": "https://zenodo.org/doi/10.1109/MMSP.2017.8122222", "self_html": "https://zenodo.org/records/1078509", "self_iiif_manifest": "https://zenodo.org/api/iiif/record:1078509/manifest", "self_iiif_sequence": "https://zenodo.org/api/iiif/record:1078509/sequence/default", "versions": "https://zenodo.org/api/records/1078509/versions" }, "media_files": { "count": 0, "enabled": false, "entries": {}, "order": [], "total_bytes": 0 }, "metadata": { "creators": [ { "affiliations": [ { "name": "Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32 - 20133 Milano, Italy" } ], "person_or_org": { "family_name": "Buccoli", "given_name": "Michele", "name": "Buccoli, Michele", "type": "personal" } }, { "affiliations": [ { "name": "Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32 - 20133 Milano, Italy" } ], "person_or_org": { "family_name": "Di Giorgi", "given_name": "Bruno", "name": "Di Giorgi, Bruno", "type": "personal" } }, { "affiliations": [ { "name": "Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32 - 20133 Milano, Italy" } ], "person_or_org": { "family_name": "Zanoni", "given_name": "Massimiliano", "name": "Zanoni, Massimiliano", "type": "personal" } }, { "affiliations": [ { "name": "Dipartimento di Elettronica, Informazione e 
Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32 - 20133 Milano, Italy" } ], "person_or_org": { "family_name": "Antonacci", "given_name": "Fabio", "name": "Antonacci, Fabio", "type": "personal" } }, { "affiliations": [ { "name": "Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32 - 20133 Milano, Italy" } ], "person_or_org": { "family_name": "Sarti", "given_name": "Augusto", "name": "Sarti, Augusto", "type": "personal" } } ], "description": "
Motion analysis and tracking often rely on multimodal signals, e.g., video, depth maps, motion capture (MoCap), due to the completeness of the information they jointly provide. The joint analysis of multimodal signals requires knowing the correct timing, i.e., the signals must be aligned. In this paper we propose an approach to automatically estimate the correct matching and alignment between a video and a MoCap recording acquired during the same session, based on the multi-dimensional correlation of velocity-based features extracted from the two recordings. We validate our approach on a dataset of dance recordings of four genres, and achieve promising results for both the alignment and the matching scenarios.
", "publication_date": "2017-10-18", "publisher": "Zenodo", "resource_type": { "id": "publication-conferencepaper", "title": { "de": "Konferenzbeitrag", "en": "Conference paper" } }, "rights": [ { "description": { "en": "The Creative Commons Attribution license allows re-distribution and re-use of a licensed work on the condition that the creator is appropriately credited." }, "icon": "cc-by-icon", "id": "cc-by-4.0", "props": { "scheme": "spdx", "url": "https://creativecommons.org/licenses/by/4.0/legalcode" }, "title": { "en": "Creative Commons Attribution 4.0 International" } } ], "subjects": [ { "subject": "Correlation" }, { "subject": "Feature extraction" }, { "subject": "Reliability" }, { "subject": "Streaming media" }, { "subject": "Cameras" }, { "subject": "Three-dimensional displays" } ], "title": "Using multi-dimensional correlation for matching and alignment of MoCap and video signals" }, "parent": { "access": { "owned_by": { "user": 29325 } }, "communities": { "default": "724ef045-76b6-4122-81ab-140a5b6b4da5", "entries": [ { "access": { "member_policy": "open", "members_visibility": "public", "record_policy": "open", "review_policy": "open", "visibility": "public" }, "children": { "allow": false }, "created": "2017-09-11T16:35:55.143671+00:00", "custom_fields": {}, "deletion_status": { "is_deleted": false, "status": "P" }, "id": "724ef045-76b6-4122-81ab-140a5b6b4da5", "links": {}, "metadata": { "curation_policy": "The community is meant to serve as an archive of all the scientific material produced throughout the project, including journal articles, conference proceedings, presentations, posters, public deliverables and reports.
\r\n\r\nAll of the above-mentioned materials are accepted provided that they are strictly relevant to the project and open-access publishing is permitted by the publishing agreement.
\r\n", "page": "WhoLoDancE is an EU-funded innovation project. By applying multimodal technologies such as motion capture, similarity search, computational models, automated analysis of non-verbal expressive features, movement content analysis, and complex data analytics, WhoLoDancE aims to innovate dance teaching methods and choreographic composition, while also preserving the European dance cultural heritage. - www.wholodance.eu
\r\n\r\nWhoLoDancE is a Research and Innovation Action funded under the European Union’s Horizon 2020 Programme. The project aims to develop and apply breakthrough technologies to dance learning, achieving results with relevant impact on numerous audiences including, but not limited to, dance practitioners ranging from researchers and professionals to dance students and the interested public. WhoLoDancE focuses on five main objectives, described below.
\r\n\r\n1. Investigate bodily knowledge by applying similarity search tools, computational models, emotional content analysis and techniques for the automated analysis of non-verbal expressive movement to dance data, in order to study movement and learning principles, vocabularies, mental imagery and simulation connected to dance practices.
\r\n\r\n2. Preserve the cultural heritage by creating a proof-of-concept motion capture repository of dance motions with built-in methods allowing interpolation, extrapolation and synthesis through similarity search among different compositions, documenting diverse and specialized dance movement practices and learning approaches.
\r\n\r\n3. Innovate the teaching of dance by developing, among other tools, a life-size volumetric display that will enable a dance student to literally step inside the dance master’s body and that, through the use of immersive and responsive motion capture data, will identify and respond to collisions between the physical and virtual bodies.
\r\n\r\n4. Revolutionize choreography by building and structuring an interactive repository of motion capture dance libraries. A custom dance data blending engine will give choreographers and dance teachers a powerful tool to blend and assemble an infinite number of dance compositions.
\r\n\r\n5. Widen access to and the practice of dance by making the created dance database available through commercially available consumer-grade motion capture devices such as the Microsoft Kinect, Intel RealSense and others.
", "title": "WhoLoDancE" }, "revision_id": 0, "slug": "wholodance_eu", "updated": "2018-08-24T12:36:47.865439+00:00" } ], "ids": [ "724ef045-76b6-4122-81ab-140a5b6b4da5" ] }, "id": "1078508", "pids": {} }, "pids": { "doi": { "identifier": "10.1109/MMSP.2017.8122222", "provider": "external" }, "oai": { "identifier": "oai:zenodo.org:1078509", "provider": "oai" } }, "revision_id": 9, "stats": { "all_versions": { "data_volume": 226482984.0, "downloads": 169, "unique_downloads": 167, "unique_views": 164, "views": 168 }, "this_version": { "data_volume": 226482984.0, "downloads": 169, "unique_downloads": 167, "unique_views": 163, "views": 167 } }, "status": "published", "updated": "2020-01-20T17:20:16.528880+00:00", "versions": { "index": 1, "is_latest": true } }