Dataset Open Access

# Artificially-generated Lecture Video Fragmentation Dataset and Ground Truth

D. Galanopoulos; V. Mezaris

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>D. Galanopoulos</dc:creator>
<dc:creator>V. Mezaris</dc:creator>
<dc:date>2018-10-15</dc:date>
<dc:description>We provide a large-scale lecture video dataset consisting of artificially-generated lectures, and the corresponding ground-truth fragmentation, for the purpose of evaluating lecture video fragmentation techniques.

For creating this dataset, 1498 speech transcript files (generated automatically by ASR software) were used from VideoLectures.NET, the world's largest academic online video repository. These transcripts correspond to lectures from various fields of science, such as computer science, mathematics, medicine, and politics. To create the synthetic video lectures, all transcripts were randomly split into fragments, the duration of which ranges between 4 and 8 minutes. Each synthetic lecture was then assembled by stitching together exactly 20 randomly selected fragments. 300 such artificially-generated lectures are included in the released dataset. Each lecture has a mean duration of about 120 minutes, so the dataset contains about 600 hours of artificially-generated lectures in total. Every pair of consecutive fragments in these lectures originally comes from different videos; consequently, the point in time where two such fragments are joined is a known ground-truth fragment boundary. All these boundaries together form the dataset's ground truth. We should stress that we do not generate the corresponding video files for the artificially-generated lectures (only the transcripts), and one should not try to reverse-engineer the dataset creation process in order to use the visual modality in some way for detecting the fragments in this dataset.

File format

After you download the provided .zip and unpack it, the extracted folder will contain two sub-folders:

1. ALV_srt
2. ALV_srt_GT

Each of them contains 300 files.

The ALV_srt folder contains the transcripts of every artificially-generated lecture, in the standard SRT format:

1. A numeric counter identifying each sequential subtitle
2. The time that the subtitle should appear on the screen, followed by --&gt; and the time it should disappear
3. The subtitle text itself, on one or more lines
4. A blank line marking the end of the entry
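As an illustration, the four-part SRT structure above can be read with a minimal Python sketch. The `parse_srt` helper below is purely illustrative (it is not part of the released dataset or any official tooling), and it assumes well-formed entries separated by blank lines:

```python
import re

def parse_srt(text):
    """Parse SRT-formatted text into (index, start, end, subtitle) tuples."""
    entries = []
    # SRT entries are separated by one or more blank lines
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks
        index = int(lines[0])                     # 1. numeric counter
        start, end = [t.strip() for t in lines[1].split("-->")]  # 2. timing line
        subtitle = " ".join(lines[2:])            # 3. text, possibly multi-line
        entries.append((index, start, end, subtitle))
    return entries
```

For example, feeding it two consecutive entries yields one tuple per subtitle, with the start and end timestamps kept as strings in the original SRT notation.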

The ALV_srt_GT folder contains the ground truth (GT) fragments corresponding to the lectures (transcripts) of the ALV_srt folder. Each GT file consists of 3 tab-separated columns and 20 rows, in the following format:

&lt;Fragment_ID_1&gt;	&lt;StartTime_1&gt;	&lt;EndTime_1&gt;
&lt;Fragment_ID_2&gt;	&lt;StartTime_2&gt;	&lt;EndTime_2&gt;
&lt;Fragment_ID_3&gt;	&lt;StartTime_3&gt;	&lt;EndTime_3&gt;
.
.
.
&lt;Fragment_ID_20&gt;	&lt;StartTime_20&gt;	&lt;EndTime_20&gt;

Each row describes one fragment: the first column gives the fragment's ID, while the second and third columns give the fragment's start and end times, respectively.
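A GT file in this three-column layout could be read with a short Python sketch like the one below. The `read_gt` and `boundaries` helpers are assumptions for illustration only, as is the exact time-string format shown in the example; note in particular that the ground-truth boundaries are the points where consecutive fragments meet, i.e. every fragment end except the last:

```python
def read_gt(text):
    """Parse GT text (3 tab-separated columns) into (fragment_id, start, end) tuples."""
    fragments = []
    for line in text.strip().splitlines():
        frag_id, start, end = line.split("\t")
        fragments.append((frag_id, start, end))
    return fragments

def boundaries(fragments):
    """Ground-truth fragment boundaries: the end time of every fragment
    except the last (equivalently, the start of each subsequent fragment)."""
    return [end for _, _, end in fragments[:-1]]
```

With 20 rows per file, this yields 19 ground-truth boundaries per artificially-generated lecture.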

This dataset is provided for academic, non-commercial use only. If you find this dataset useful in your work, please cite the following publication, in which the dataset is introduced:

D. Galanopoulos, V. Mezaris, “Temporal Lecture Video Fragmentation using Word Embeddings”, Proc. 25th Int. Conf. on Multimedia Modeling (MMM2019), Thessaloniki, Greece, Jan. 2019.

Acknowledgements

This work was supported by the EU’s Horizon 2020 research and innovation programme under grant agreement No 693092 MOVING. We are grateful to JSI/VideoLectures.NET for providing the lectures’ transcripts.</dc:description>
<dc:identifier>https://zenodo.org/record/1462432</dc:identifier>
<dc:identifier>10.5281/zenodo.1462432</dc:identifier>
<dc:identifier>oai:zenodo.org:1462432</dc:identifier>
<dc:relation>info:eu-repo/grantAgreement/EC/H2020/693092/</dc:relation>
<dc:relation>doi:10.5281/zenodo.1462431</dc:relation>
<dc:relation>url:https://zenodo.org/communities/moving-h2020</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:title>Artificially-generated Lecture Video Fragmentation Dataset and Ground Truth</dc:title>
<dc:type>info:eu-repo/semantics/other</dc:type>
<dc:type>dataset</dc:type>
</oai_dc:dc>
