Dataset · Open Access

# Artificially-generated Lecture Video Fragmentation Dataset and Ground Truth

D. Galanopoulos; V. Mezaris

### JSON Export

{
"files": [
{
"checksum": "md5:8301657e416141754bdbc422b7465fa3",
"key": "Lecture_Video_Fragmentation_Dataset.zip",
"type": "zip",
"size": 15323358
}
],
"owners": [
22754
],
"doi": "10.5281/zenodo.1462432",
"stats": {
"unique_views": 78.0,
"views": 97.0,
"version_unique_views": 78.0,
"volume": 107263506.0,
"version_views": 97.0,
"version_volume": 107263506.0
},
"doi": "https://doi.org/10.5281/zenodo.1462432",
"conceptdoi": "https://doi.org/10.5281/zenodo.1462431",
"html": "https://zenodo.org/record/1462432",
"latest_html": "https://zenodo.org/record/1462432",
"latest": "https://zenodo.org/api/records/1462432"
},
"conceptdoi": "10.5281/zenodo.1462431",
"created": "2018-10-15T07:59:11.596312+00:00",
"updated": "2019-11-01T07:12:24.772961+00:00",
"conceptrecid": "1462431",
"revision": 32,
"id": 1462432,
"access_right_category": "success",
"doi": "10.5281/zenodo.1462432",
"description": "<p>We provide a large-scale lecture video dataset consisting of artificially-generated lectures, and the corresponding ground-truth fragmentation, for the purpose of evaluating lecture video fragmentation techniques.</p>\n\n<p>For creating this dataset, 1498 speech transcript files (generated automatically by ASR software) were used from the world&#39;s biggest academic online video repository, the VideoLectures.NET. These transcripts correspond to lectures from various fields of science, such as Computer science, Mathematics, Medicine, Politics etc. In order to create the synthetic video lectures, all transcripts were randomly split in fragments, the duration of which ranges between 4 and 8 minutes. Each synthetic lecture was then assembled by combining (stitching) exactly 20 randomly selected fragments. 300 such artificially-generated lectures are included in the released dataset. Each such lecture file has a mean duration of about 120 minutes, thus the dataset contains altogether about 600 hours of artificially-generated lectures. Every pair of consecutive fragments in these lectures originally comes from different videos, consequently the point in time where such two fragments are joined is a known ground-truth fragment boundary. All these boundaries form the dataset&#39;s ground truth. We should stress that we do not generate the corresponding video files for the artificially-generated lectures (only the transcripts), and one should not try to reverse-engineer the dataset creation process so as to use in some way the visual modality for detecting the fragments in this dataset.</p>\n\n<p><strong>File format</strong></p>\n\n<p>After you download the provided .zip and unpack it, the extracted folder will contain two sub-folders:</p>\n\n<pre><code>1. ALV_srt\n2. 
ALV_srt_GT\n</code></pre>\n\n<p>Each of them contains 300 files.</p>\n\n<p>The <strong>ALV_srt</strong> folder contains the transcripts of every artificially-generated lecture, in the standard SRT format:</p>\n\n<pre><code>1. A numeric counter identifying each sequential subtitle\n2. The time that the subtitle should appear on the screen, followed by --&gt; and the time it should disappear\n3. Subtitle's text itself on one or more lines\n4. A blank line containing no text\n</code></pre>\n\n<p>The <strong>ALV_srt_GT</strong> folder contains the ground truth (GT) fragments corresponding to the lectures (transcripts) of the <strong>ALV_srt</strong> folder. Each GT file consists of 3 tab-separated columns and 20 rows, in the following format:</p>\n\n<pre><code>&lt;Fragment_ID_1&gt;\t&lt;StartTime_1&gt;\t&lt;EndTime_1&gt;\n&lt;Fragment_ID_2&gt;\t&lt;StartTime_2&gt;\t&lt;EndTime_2&gt;\n&lt;Fragment_ID_3&gt;\t&lt;StartTime_3&gt;\t&lt;EndTime_3&gt;\n.\n.\n.\n&lt;Fragment_ID_20&gt;\t&lt;StartTime_20&gt;\t&lt;EndTime_20&gt;\n</code></pre>\n\n<p>Each row indicates a fragment. The first column indicates the ID of a fragment while the second and the third column indicate the start and the end time of the fragment respectively.</p>\n\n<p><strong>License and Citation</strong></p>\n\n<p>This dataset is provided for academic, non-commercial use only. If you find this dataset useful in your work, please cite the following publication where the dataset is introduced:</p>\n\n<p><em>D. Galanopoulos, V. Mezaris, &ldquo;Temporal Lecture Video Fragmentation using Word Embeddings&rdquo;, Proc. 25th Int. Conf. on Multimedia Modeling (MMM2019), Thessaloniki, Greece, Jan. 2019.</em></p>\n\n<p><strong>Acknowledgements</strong></p>\n\n<p>This work was supported by the EU&rsquo;s Horizon 2020 research and innovation programme under grant agreement No 693092 MOVING. We are grateful to JSI/VideoLectures.NET for providing the lectures&rsquo; transcripts.</p>",
"id": "CC-BY-SA-4.0"
},
"title": "Artificially-generated Lecture Video Fragmentation Dataset and Ground Truth",
"relations": {
"version": [
{
"count": 1,
"index": 0,
"parent": {
"pid_type": "recid",
"pid_value": "1462431"
},
"is_last": true,
"last_child": {
"pid_type": "recid",
"pid_value": "1462432"
}
}
]
},
"grants": [
{
"code": "693092",
"self": "https://zenodo.org/api/grants/10.13039/501100000780::693092"
},
"title": "Training towards a society of data-savvy information professionals to enable open leadership innovation",
"acronym": "MOVING",
"program": "H2020",
"funder": {
"doi": "10.13039/501100000780",
"acronyms": [
"EC"
],
"name": "European Commission",
"self": "https://zenodo.org/api/funders/10.13039/501100000780"
}
}
],
"communities": [
{
"id": "moving-h2020"
}
],
"publication_date": "2018-10-15",
"creators": [
{
"affiliation": "Centre for Research and Technology-Hellas (CERTH)",
"name": "D. Galanopoulos"
},
{
"affiliation": "Centre for Research and Technology-Hellas (CERTH)",
"name": "V. Mezaris"
}
],
"access_right": "open",
"resource_type": {
"type": "dataset",
"title": "Dataset"
},
"related_identifiers": [
{
"scheme": "doi",
"identifier": "10.5281/zenodo.1462431",
"relation": "isVersionOf"
}
]
}
}
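The SRT layout described in the dataset description (a numeric counter, a `start --> end` time line, the subtitle text, and a blank separator line) can be loaded with a few lines of standard-library Python. This is a minimal sketch, assuming plain-text `.srt` files with the usual `HH:MM:SS,mmm` timestamp form; the function names are illustrative, not part of the dataset:

```python
import re

def srt_time_to_seconds(ts):
    """Convert an SRT timestamp 'HH:MM:SS,mmm' to seconds as a float."""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def parse_srt(text):
    """Parse SRT text into (counter, start_s, end_s, subtitle) tuples."""
    entries = []
    # Subtitle blocks are separated by blank lines.
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        if len(lines) < 2:
            continue
        counter = int(lines[0])
        start, _, end = lines[1].partition(" --> ")
        entries.append((counter,
                        srt_time_to_seconds(start.strip()),
                        srt_time_to_seconds(end.strip()),
                        "\n".join(lines[2:])))
    return entries
```

For a file from the `ALV_srt` folder, `parse_srt(open(path, encoding="utf-8").read())` would yield the transcript as a list of timed subtitle segments, which is the usual starting point for text-based fragmentation methods.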
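The ground-truth files in `ALV_srt_GT` are simple enough to read without any dependencies: 20 rows of three tab-separated columns, as specified above. A minimal sketch follows; since the record does not state the exact time format used in the GT columns, the start/end values are kept as strings here, and all names are illustrative:

```python
def load_ground_truth(lines):
    """Parse GT rows of the form: Fragment_ID, StartTime, EndTime (tab-separated)."""
    fragments = []
    for line in lines:
        line = line.rstrip("\n")
        if not line.strip():
            continue
        frag_id, start, end = line.split("\t")
        fragments.append((frag_id, start, end))
    return fragments

def boundary_times(fragments):
    # A ground-truth fragment boundary is the point where two stitched
    # fragments meet, i.e. the start time of every fragment but the first.
    return [start for _, start, _ in fragments[1:]]
```

With a file, `load_ground_truth(open(path, encoding="utf-8"))` reads all 20 rows; `boundary_times` then yields the 19 internal boundaries that a fragmentation method should detect for that lecture.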