Journal article Open Access

FIVR: Fine-grained Incident Video Retrieval

Kordopatis-Zilos, Giorgos; Papadopoulos, Symeon; Patras, Ioannis; Kompatsiaris, Yiannis


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Fine-grained Incident Video Retrieval</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">near-duplicate videos</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">video retrieval</subfield>
  </datafield>
  <controlfield tag="005">20190802063204.0</controlfield>
  <controlfield tag="001">3238223</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">CERTH-ITI, Thessaloniki, Greece</subfield>
    <subfield code="a">Papadopoulos, Symeon</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Queen Mary University of London, UK</subfield>
    <subfield code="a">Patras, Ioannis</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">CERTH-ITI, Thessaloniki, Greece</subfield>
    <subfield code="a">Kompatsiaris, Yiannis</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1820005</subfield>
    <subfield code="z">md5:7a581d33dec64a5b52734ea0197bc061</subfield>
    <subfield code="u">https://zenodo.org/record/3238223/files/fivr_authors_copy.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2019-03-18</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-invid-h2020</subfield>
    <subfield code="o">oai:zenodo.org:3238223</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">CERTH-ITI, Thessaloniki, Greece / Queen Mary University of London, UK</subfield>
    <subfield code="a">Kordopatis-Zilos, Giorgos</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">FIVR: Fine-grained Incident Video Retrieval</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-invid-h2020</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">687786</subfield>
    <subfield code="a">In Video Veritas – Verification of Social Media Video Content for the News Industry</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">http://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;This paper introduces the problem of Fine-grained Incident Video Retrieval (FIVR). Given a query video, the objective is to retrieve all associated videos, considering several types of associations that range from duplicate videos to videos from the same incident. FIVR offers a single framework that contains several retrieval tasks as special cases. To address the benchmarking needs of all such tasks, we construct and present a large-scale annotated video dataset, which we call FIVR-200K, comprising 225,960 videos. To create the dataset, we devise a process for collecting YouTube videos based on major news events from recent years crawled from Wikipedia, and we deploy a retrieval pipeline for the automatic selection of query videos based on their estimated suitability as benchmarks. We also devise a protocol for annotating the dataset with respect to the four types of video associations defined by FIVR. Finally, we report the results of an experimental study on the dataset comparing five state-of-the-art methods based on a variety of visual descriptors, highlighting the challenges of the problem.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isSupplementedBy</subfield>
    <subfield code="a">10.5281/zenodo.2564864</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1109/TMM.2019.2905741</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="q">alternateidentifier</subfield>
    <subfield code="a">https://arxiv.org/abs/1809.04094</subfield>
    <subfield code="2">url</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">article</subfield>
  </datafield>
</record>
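The record above follows MARC21 conventions: each `datafield` is addressed by a numeric tag (e.g. 245 = title, 024 = identifier, 700 = added author), and values live in coded `subfield` elements. A minimal sketch of reading such fields with the Python standard library, using a small excerpt of the record embedded as a string for illustration:

```python
# Sketch: extracting values from a MARC21 XML export using only the
# standard library. The embedded XML reproduces a few datafields from
# the record above; the helper function is hypothetical, not part of
# any Zenodo or MARC tooling.
import xml.etree.ElementTree as ET

# MARC21-slim XML uses a default namespace that must be given to findall().
NS = {"marc": "http://www.loc.gov/MARC21/slim"}

MARC_XML = """<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">FIVR: Fine-grained Incident Video Retrieval</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1109/TMM.2019.2905741</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="a">Papadopoulos, Symeon</subfield>
  </datafield>
</record>"""

def subfields(record, tag, code):
    """Return all subfield values for a given datafield tag and subfield code."""
    path = f"marc:datafield[@tag='{tag}']/marc:subfield[@code='{code}']"
    return [el.text for el in record.findall(path, NS)]

record = ET.fromstring(MARC_XML)
title = subfields(record, "245", "a")[0]
doi = subfields(record, "024", "a")[0]
print(title)  # FIVR: Fine-grained Incident Video Retrieval
print(doi)    # 10.1109/TMM.2019.2905741
```

The same pattern extends to any other tag in the full record, e.g. tag 856 subfield `u` for the file URL or tag 540 subfield `a` for the license name.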