Conference paper Open Access

Detecting tampered videos with multimedia forensics and deep learning

Zampoglou, Markos; Markatopoulou, Foteini; Mercier, Gregoire; Touska, Despoina; Apostolidis, Evlampios; Papadopoulos, Symeon; Cozien, Roger; Patras, Ioannis; Mezaris, Vasileios; Kompatsiaris, Ioannis


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Video forensics</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Video tampering detection</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Video verification</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Video manipulation detection</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">User-generated video</subfield>
  </datafield>
  <controlfield tag="005">20190410032304.0</controlfield>
  <controlfield tag="001">2539137</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">8-11 January 2019</subfield>
    <subfield code="g">MMM 2019</subfield>
    <subfield code="a">25th International Conference on Multimedia Modeling</subfield>
    <subfield code="c">Thessaloniki, Greece</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute / Centre for Research &amp; Technology Hellas</subfield>
    <subfield code="a">Markatopoulou, Foteini</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">eXo maKina, Paris, France</subfield>
    <subfield code="a">Mercier, Gregoire</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute / Centre for Research &amp; Technology Hellas</subfield>
    <subfield code="a">Touska, Despoina</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">School of EECS, Queen Mary University of London, UK</subfield>
    <subfield code="a">Apostolidis, Evlampios</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute / Centre for Research &amp; Technology Hellas</subfield>
    <subfield code="a">Papadopoulos, Symeon</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">eXo maKina, Paris, France</subfield>
    <subfield code="a">Cozien, Roger</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">School of EECS, Queen Mary University of London, UK</subfield>
    <subfield code="a">Patras, Ioannis</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute / Centre for Research &amp; Technology Hellas</subfield>
    <subfield code="a">Mezaris, Vasileios</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute / Centre for Research &amp; Technology Hellas</subfield>
    <subfield code="a">Kompatsiaris, Ioannis</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1507165</subfield>
    <subfield code="z">md5:ac4366cefbb63a0f4b28fed320b558a2</subfield>
    <subfield code="u">https://zenodo.org/record/2539137/files/mmm19_lncs11295_2_preprint.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="y">Conference website</subfield>
    <subfield code="u">http://mmm2019.iti.gr/</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2019-01-10</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="p">user-invid-h2020</subfield>
    <subfield code="o">oai:zenodo.org:2539137</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Information Technologies Institute / Centre for Research &amp; Technology Hellas</subfield>
    <subfield code="a">Zampoglou, Markos</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Detecting tampered videos with multimedia forensics and deep learning</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-invid-h2020</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">687786</subfield>
    <subfield code="a">In Video Veritas – Verification of Social Media Video Content for the News Industry</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">http://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;User-Generated Content (UGC) has become an integral part of the news reporting cycle. As a result, the need to verify videos collected from social media and Web sources is becoming increasingly important for news organisations. While video verication is attracting a lot of attention, there has been limited effort so far in applying video forensics to real-world data. In this work we present an approach for automatic video manipulation detection inspired by manual verication approaches. In a typical manual verication setting, video filter outputs are visually interpreted by human experts. We use two such forensics filters designed for manual verication, one based on Discrete Cosine Transform (DCT) coefficients and a second based on video requantization errors, and combine them with Deep Convolutional Neural Networks (CNN) designed for image classication. We compare the performance of the proposed approach to other works from the state of the art, and discover that, while competing approaches perform better when trained with videos from the same dataset, one of the proposed filters demonstrates superior performance in cross-dataset settings. We discuss the implications of our work and the limitations of the current experimental setup, and propose directions for future research in this area.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1007/978-3-030-05710-7_31</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
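
As a rough illustration of the first filter type named in the abstract (a DCT-coefficient forensics filter whose output map is fed to an image-classification CNN), the sketch below computes a per-block DCT energy map for one video frame. This is a minimal sketch only: the block size, coefficient selection, and normalisation are illustrative assumptions, not the authors' actual parameters, and the file name "input.mp4" is hypothetical. It assumes OpenCV and NumPy.

# Minimal sketch of a DCT-based forensics filter map, loosely following
# the pipeline described in the abstract. Block size, coefficient
# selection, and normalisation are illustrative guesses, not the
# paper's actual parameters.
import cv2
import numpy as np

def dct_filter_map(frame: np.ndarray, block: int = 8) -> np.ndarray:
    """Return a per-block DCT energy map for one BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = gray.shape
    h, w = h - h % block, w - w % block  # crop to a multiple of the block size
    out = np.zeros((h // block, w // block), dtype=np.float32)
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = cv2.dct(gray[i:i + block, j:j + block])
            coeffs[0, 0] = 0.0  # drop the DC term, keep AC coefficients
            # Tampered regions often exhibit inconsistent AC-coefficient
            # statistics, which this energy map is meant to surface.
            out[i // block, j // block] = np.log1p(np.abs(coeffs).sum())
    return out / (out.max() + 1e-8)  # normalise to [0, 1] for CNN input

# Usage: read one frame and compute its filter map.
cap = cv2.VideoCapture("input.mp4")  # hypothetical input file
ok, frame = cap.read()
if ok:
    fmap = dct_filter_map(frame)
    print(fmap.shape, fmap.min(), fmap.max())
cap.release()

In the approach the paper describes, maps like this (and the requantization-error maps of the second filter) would be interpreted visually by experts in a manual setting, or passed to a CNN classifier in the automatic setting.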
Views 156
Downloads 88
Data volume 132.6 MB
Unique views 145
Unique downloads 77
