Conference paper Open Access

Performance over Random: A Robust Evaluation Protocol for Video Summarization Methods

Apostolidis, Evlampios; Adamantidou, Eleni; Metsai, Alexandros; Mezaris, Vasileios; Patras, Ioannis

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Apostolidis, Evlampios</dc:creator>
  <dc:creator>Adamantidou, Eleni</dc:creator>
  <dc:creator>Metsai, Alexandros</dc:creator>
  <dc:creator>Mezaris, Vasileios</dc:creator>
  <dc:creator>Patras, Ioannis</dc:creator>
  <dc:description>This paper proposes a new evaluation approach for video summarization algorithms. We start by studying the currently established evaluation protocol; this protocol, defined over the ground-truth annotations of the SumMe and TVSum datasets, quantifies the agreement between the user-defined and the automatically-created summaries with F-Score, and reports the average performance on a few different training/testing splits of the used dataset. We evaluate five publicly-available summarization algorithms under a large-scale experimental setting with 50 randomly-created data splits. We show that the results reported in the papers are not always congruent with their performance on the large-scale experiment, and that the F-Score cannot be used for comparing algorithms evaluated on different splits. We also show that the above shortcomings of the established evaluation protocol are due to the significantly varying levels of difficulty among the utilized splits, which affect the outcomes of the evaluations. Further analysis of these findings indicates a noticeable performance correlation among all algorithms and a random summarizer. To mitigate these shortcomings we propose an evaluation protocol that makes estimates about the difficulty of each used data split and utilizes this information during the evaluation process. Experiments involving different evaluation settings demonstrate the increased representativeness of performance results when using the proposed evaluation approach, and the increased reliability of comparisons when the examined methods have been evaluated on different data splits.</dc:description>
  <dc:subject>Video summarization</dc:subject>
  <dc:subject>Performance over Random</dc:subject>
  <dc:subject>Evaluation protocol</dc:subject>
  <dc:subject>Random performance</dc:subject>
  <dc:subject>Human performance</dc:subject>
  <dc:subject>Pearson correlation coefficient</dc:subject>
  <dc:title>Performance over Random: A Robust Evaluation Protocol for Video Summarization Methods</dc:title>
</oai_dc:dc>
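The abstract describes the established evaluation pipeline (F-Score agreement between the automatic summary and user-defined summaries, averaged over data splits) and proposes normalizing a method's score by what a random summarizer achieves on the same split. A minimal sketch, assuming binary frame-selection vectors and a simple ratio-based "performance over random" normalization; the exact formulation used in the paper may differ:

```python
import numpy as np

def fscore(pred, gt):
    """F-Score (harmonic mean of precision and recall) between two
    binary frame-selection vectors, where 1 marks a frame kept in
    the summary."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    overlap = np.logical_and(pred, gt).sum()
    if overlap == 0:
        return 0.0
    precision = overlap / pred.sum()
    recall = overlap / gt.sum()
    return 2 * precision * recall / (precision + recall)

def performance_over_random(f_method, f_random):
    """Hypothetical normalization: the method's F-Score on a split
    relative to a random summarizer's F-Score on that same split,
    so that 'easy' splits do not inflate the reported score."""
    return f_method / f_random

# Toy example: a 6-frame video, 2-frame summary budget.
gt = [1, 0, 1, 0, 0, 0]            # user-defined summary
method = [1, 0, 0, 1, 0, 0]        # automatic summary
rng = np.random.default_rng(0)
# Random summarizer: average F-Score over many random 2-frame picks.
f_rand = np.mean([
    fscore(np.isin(np.arange(6), rng.choice(6, 2, replace=False)), gt)
    for _ in range(1000)
])
por = performance_over_random(fscore(method, gt), f_rand)
```

The normalization makes scores from different splits comparable: a method scoring 0.5 on a split where random scores 0.4 is doing less than one scoring 0.5 where random scores 0.2.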

