Working paper · Open Access
Batziou Elissavet; Michail Emmanouil; Avgerinakis Konstantinos; Vrochidis Stefanos; Patras Ioannis; Kompatsiaris Ioannis
Visual and audio analysis of movies video for emotion detection @ Emotional Impact of Movies task, MediaEval 2018

DOI: 10.5281/zenodo.3491644 (this version; part of 10.5281/zenodo.3491643)
Published: 2018-10-17 | Publisher: Zenodo | Resource type: Working paper
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Derived from: arXiv:1909.01763
Record URL: https://zenodo.org/record/3491644
Keywords: MediaEval, emotion detection, movies video, visual and audio analysis
Affiliations: Information Technologies Institute, Centre for Research and Technology Hellas (Batziou, Michail, Avgerinakis, Vrochidis, Kompatsiaris); Queen Mary University of London (Patras)

Abstract: This work reports the methodology that the CERTH-ITI team developed to recognize the emotional impact that movies have on their viewers in terms of valence/arousal and fear. More specifically, deep convolutional neural networks and several machine learning techniques are used to extract visual features and classify them with the trained models, while audio features are also taken into account in the fear scenario, leading to highly accurate recognition rates.
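The abstract describes a two-stage visual pipeline: a deep CNN extracts features from video frames, and a classical machine-learning model maps them to valence/arousal scores. The sketch below illustrates that idea; the ResNet-50 backbone, temporal average pooling, and SVR regressor are illustrative assumptions, not the paper's confirmed choices.

```python
# Minimal sketch of CNN feature extraction + regression for valence/arousal.
# Backbone, pooling, and regressor are assumptions for illustration only.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVR

# Pretrained CNN with its classification head removed, used as a fixed
# feature extractor (2048-d globally pooled features per frame).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def clip_feature(frames: np.ndarray) -> np.ndarray:
    """Average frame-level CNN features over a clip.

    frames: uint8 array of shape (num_frames, H, W, 3).
    """
    batch = torch.stack([preprocess(f) for f in frames])
    feats = backbone(batch)           # (num_frames, 2048)
    return feats.mean(dim=0).numpy()  # temporal average pooling

# Fit one regressor per affective dimension on labelled training clips.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(8, 2048))       # placeholder clip features
y_valence = rng.uniform(-1, 1, size=8)     # placeholder valence labels
valence_model = SVR(kernel="rbf").fit(X_train, y_valence)

test_clip = rng.integers(0, 255, size=(4, 240, 320, 3), dtype=np.uint8)
print(valence_model.predict(clip_feature(test_clip)[None, :]))
```

In a real setup the placeholder arrays would be replaced by features and annotations from the Emotional Impact of Movies training data, with a separate regressor per dimension.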
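For the fear scenario, the abstract states that audio features are also taken into account. A minimal late-fusion sketch follows, assuming MFCC statistics for the audio track and a random-forest classifier; neither choice is specified by the abstract.

```python
# Minimal sketch of audio/visual feature fusion for binary fear detection.
# MFCC statistics and the random forest are illustrative assumptions.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def audio_feature(path: str) -> np.ndarray:
    """Mean and std of 13 MFCCs over the clip's audio track (26-d vector)."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, num_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Concatenate per-clip visual features (e.g. from clip_feature above) with
# the audio vector, then train a binary fear/no-fear classifier.
rng = np.random.default_rng(0)
visual = rng.normal(size=(8, 2048))   # placeholder visual features
audio = rng.normal(size=(8, 26))      # placeholder audio features
X = np.hstack([visual, audio])
y_fear = rng.integers(0, 2, size=8)   # 1 = fear-inducing clip
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_fear)
print(clf.predict(X[:2]))
```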
| | All versions | This version |
|---|---|---|
| Views | 42 | 42 |
| Downloads | 19 | 19 |
| Data volume | 15.6 MB | 15.6 MB |
| Unique views | 39 | 39 |
| Unique downloads | 16 | 16 |