Conference paper Open Access

Unsupervised Video Summarization via Attention-Driven Adversarial Learning

Apostolidis, Evlampios; Adamantidou, Eleni; Metsai, Alexandros; Mezaris, Vasileios; Patras, Ioannis


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Apostolidis, Evlampios</dc:creator>
  <dc:creator>Adamantidou, Eleni</dc:creator>
  <dc:creator>Metsai, Alexandros</dc:creator>
  <dc:creator>Mezaris, Vasileios</dc:creator>
  <dc:creator>Patras, Ioannis</dc:creator>
  <dc:date>2020-01-06</dc:date>
  <dc:description>This paper presents a new video summarization approach that integrates an attention mechanism to identify the significant parts of the video, and is trained in an unsupervised manner via generative adversarial learning. Starting from the SUM-GAN model, we first develop an improved version of it (called SUM-GAN-sl) that has a significantly reduced number of learned parameters, performs incremental training of the model's components, and applies a stepwise label-based strategy for updating the adversarial part. Subsequently, we introduce an attention mechanism to SUM-GAN-sl in two ways: i) by integrating an attention layer within the variational auto-encoder (VAE) of the architecture (SUM-GAN-VAAE), and ii) by replacing the VAE with a deterministic attention auto-encoder (SUM-GAN-AAE). Experimental evaluation on two datasets (SumMe and TVSum) documents the contribution of the attention auto-encoder to faster and more stable training of the model, resulting in a significant performance improvement with respect to the original model and demonstrating the competitiveness of the proposed SUM-GAN-AAE against the state of the art. Software is publicly available at: https://github.com/e-apostolidis/SUM-GAN-AAE</dc:description>
  <dc:identifier>https://zenodo.org/record/3605501</dc:identifier>
  <dc:identifier>10.1007/978-3-030-37731-1_40</dc:identifier>
  <dc:identifier>oai:zenodo.org:3605501</dc:identifier>
  <dc:relation>info:eu-repo/grantAgreement/EC/H2020/780656/</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/retv-h2020</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:subject>Video summarization</dc:subject>
  <dc:subject>Unsupervised learning</dc:subject>
  <dc:subject>Attention mechanism</dc:subject>
  <dc:subject>Adversarial learning</dc:subject>
  <dc:title>Unsupervised Video Summarization via Attention-Driven Adversarial Learning</dc:title>
  <dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
  <dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
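
The abstract describes SUM-GAN-AAE as replacing the VAE of the summarizer with a deterministic attention auto-encoder that reconstructs the sequence of frame features. The following is a minimal, hypothetical PyTorch sketch of such an attention auto-encoder; the layer sizes, the use of LSTMs, and the additive attention formulation are assumptions for illustration, not the authors' exact configuration (see the linked repository for the actual implementation).

# Hypothetical sketch of a deterministic attention auto-encoder over frame-level
# video features, loosely following the SUM-GAN-AAE description in the abstract.
# Hidden sizes, LSTM choice, and additive attention are illustrative assumptions.
import torch
import torch.nn as nn


class AttentionAutoEncoder(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=512):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTMCell(feat_dim, hidden_dim)
        # Additive (Bahdanau-style) attention over the encoder states.
        self.attn_w = nn.Linear(hidden_dim * 2, hidden_dim)
        self.attn_v = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim * 2, feat_dim)

    def forward(self, x):
        # x: (batch, seq_len, feat_dim) frame features, e.g. already weighted
        # by the frame selector's importance scores.
        enc_out, (h, c) = self.encoder(x)            # (batch, seq_len, hidden_dim)
        batch, seq_len, feat_dim = x.shape
        h, c = h.squeeze(0), c.squeeze(0)
        prev = torch.zeros(batch, feat_dim, device=x.device)
        recon = []
        for t in range(seq_len):
            # Score every encoder state against the current decoder state.
            query = h.unsqueeze(1).expand(-1, seq_len, -1)
            scores = self.attn_v(torch.tanh(self.attn_w(torch.cat([enc_out, query], dim=-1))))
            alpha = torch.softmax(scores, dim=1)      # attention weights over time
            context = (alpha * enc_out).sum(dim=1)    # (batch, hidden_dim)
            h, c = self.decoder(prev, (h, c))
            prev = self.out(torch.cat([h, context], dim=-1))
            recon.append(prev)
        return torch.stack(recon, dim=1)              # reconstructed feature sequence


# Usage: reconstruct a sequence of 1024-d frame features (e.g., CNN pool features).
if __name__ == "__main__":
    model = AttentionAutoEncoder()
    feats = torch.randn(1, 60, 1024)                  # 60 sampled frames
    print(model(feats).shape)                         # torch.Size([1, 60, 1024])

In the adversarial setup sketched by the abstract, the reconstructed sequence produced by such an auto-encoder would be scored by a discriminator against the original feature sequence, driving the frame selector without ground-truth summaries.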