Conference paper · Open Access

VideoAnalysis4ALL: An On-line Tool for the Automatic Fragmentation and Concept-based Annotation, and the Interactive Exploration of Videos

Collyda, Chrysa; Apostolidis, Evlampios; Pournaras, Alexandros; Markatopoulou, Foteini; Mezaris, Vasileios; Patras, Ioannis


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Web-based on-line video analysis</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Video segmentation</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Video annotation</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Video content exploration</subfield>
  </datafield>
  <controlfield tag="005">20200120170456.0</controlfield>
  <controlfield tag="001">809700</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">6-9 June 2017</subfield>
    <subfield code="g">ICMR 2017</subfield>
    <subfield code="a">ACM International Conference on Multimedia Retrieval 2017</subfield>
    <subfield code="c">Bucharest, Romania</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute (ITI) - Centre for Research and Technology Hellas (CERTH)</subfield>
    <subfield code="a">Apostolidis, Evlampios</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute (ITI) - Centre for Research and Technology Hellas (CERTH)</subfield>
    <subfield code="a">Pournaras, Alexandros</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute (ITI) - Centre for Research and Technology Hellas (CERTH)</subfield>
    <subfield code="a">Markatopoulou, Foteini</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute (ITI) - Centre for Research and Technology Hellas (CERTH)</subfield>
    <subfield code="a">Mezaris, Vasileios</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Queen Mary University of London</subfield>
    <subfield code="a">Patras, Ioannis</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">10029539</subfield>
    <subfield code="z">md5:5638a84fa9a04de243782102fffa21b6</subfield>
    <subfield code="u">https://zenodo.org/record/809700/files/icmr17_3_preprint.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="y">Conference website</subfield>
    <subfield code="u">http://icmr2017.ro/index.php</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2017-06-08</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-emma-h2020</subfield>
    <subfield code="p">user-invid-h2020</subfield>
    <subfield code="p">user-moving-h2020</subfield>
    <subfield code="o">oai:zenodo.org:809700</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute (ITI) - Centre for Research and Technology Hellas (CERTH)</subfield>
    <subfield code="a">Collyda, Chrysa</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">VideoAnalysis4ALL: An On-line Tool for the Automatic Fragmentation and Concept-based Annotation, and the Interactive Exploration of Videos</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-emma-h2020</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-invid-h2020</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-moving-h2020</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">693092</subfield>
    <subfield code="a">Training towards a society of data-savvy information professionals to enable open leadership innovation</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">687786</subfield>
    <subfield code="a">In Video Veritas – Verification of Social Media Video Content for the News Industry</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">732665</subfield>
    <subfield code="a">Enriching Market solutions for content Management and publishing with state of the art multimedia Analysis techniques</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">http://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;This paper presents the VideoAnalysis4ALL tool that supports the automatic fragmentation and concept-based annotation of videos, and the exploration of the annotated video fragments through an interactive user interface. The developed web application decomposes the video into two different granularities, namely shots and scenes, and annotates each fragment by evaluating the existence of a number (several hundreds) of high-level visual concepts in the keyframes extracted from these fragments. Through the analysis the tool enables the identification and labeling of semantically coherent video fragments, while its user interfaces allow the discovery of these fragments with the help of human-interpretable concepts. The integrated state-of-the-art video analysis technologies perform very well and, by exploiting the processing capabilities of multi-thread / multi-core architectures, reduce the time required for analysis to approximately one third of the video’s duration, thus making the analysis three times faster than real-time processing.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1145/3078971.3079015</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
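The exported record above can be read with any MARC-aware or plain XML toolkit. As an illustration only, the following Python sketch uses the standard library's xml.etree.ElementTree to pull the title, author list, DOI, keywords, and the preprint's MD5 checksum out of the datafields shown above; the local file name record.xml is an assumption, not something provided by the record.

import xml.etree.ElementTree as ET

# MARC21-slim namespace used by the exported record.
NS = {"m": "http://www.loc.gov/MARC21/slim"}

def subfields(record, tag, code):
    """Return the text of every <subfield code=...> under datafields with the given tag."""
    return [
        sf.text
        for df in record.findall(f"m:datafield[@tag='{tag}']", NS)
        for sf in df.findall(f"m:subfield[@code='{code}']", NS)
    ]

# Assumed: the XML export has been saved locally as "record.xml".
record = ET.parse("record.xml").getroot()

title     = subfields(record, "245", "a")[0]   # main title
creator   = subfields(record, "100", "a")[0]   # first author (Collyda, Chrysa)
coauthors = subfields(record, "700", "a")      # remaining authors
doi       = subfields(record, "024", "a")[0]   # 10.1145/3078971.3079015
keywords  = subfields(record, "653", "a")      # uncontrolled keywords
pdf_md5   = subfields(record, "856", "z")[0]   # "md5:..." checksum of the preprint PDF

print(title)
print(", ".join([creator] + coauthors))
print("doi:", doi)
print("keywords:", "; ".join(keywords))
print("preprint checksum:", pdf_md5)

Note that field 856 occurs twice (once for the preprint file, once for the conference website), so code-specific lookups such as subfield "z" are used here to avoid mixing the two.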
Views 70
Downloads 40
Data volume 401.2 MB
Unique views 70
Unique downloads 38
