Book section (Embargoed Access)

Concept-Based and Event-Based Video Search in Large Video Collections

Foteini Markatopoulou; Damianos Galanopoulos; Christos Tselepis; Vasileios Mezaris; Ioannis Patras


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="942" ind1=" " ind2=" ">
    <subfield code="a">2021-03-15</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">concept-based search</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">event-based search</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">video concept detection</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">event detection</subfield>
  </datafield>
  <controlfield tag="005">20190424075649.0</controlfield>
  <controlfield tag="001">2649178</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Damianos Galanopoulos</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Christos Tselepis</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Vasileios Mezaris</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Queen Mary University</subfield>
    <subfield code="a">Ioannis Patras</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">embargoed</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2019-03-15</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">user-emma-h2020</subfield>
    <subfield code="p">user-invid-h2020</subfield>
    <subfield code="p">user-moving-h2020</subfield>
    <subfield code="o">oai:zenodo.org:2649178</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">CERTH</subfield>
    <subfield code="a">Foteini Markatopoulou</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Concept-Based and Event-Based Video Search in Large Video Collections</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-emma-h2020</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-invid-h2020</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">user-moving-h2020</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">http://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;Video content can be annotated with semantic information such as simple concept labels that may refer to objects (e.g., &amp;ldquo;car&amp;rdquo; and &amp;ldquo;chair&amp;rdquo;), activities (e.g., &amp;ldquo;running&amp;rdquo; and &amp;ldquo;dancing&amp;rdquo;), scenes (e.g., &amp;ldquo;hills&amp;rdquo; and &amp;ldquo;beach&amp;rdquo;), etc.; or more complex (or highlevel) events that describe the main action that takes place in the complete video. An event may refer to complex activities, occurring at specific places and times, which involve people interacting with other people and/or object(s), such as &amp;ldquo;changing a vehicle tire&amp;rdquo;, &amp;ldquo;making a cake&amp;rdquo;, or &amp;ldquo;attempting a bike trick&amp;rdquo;, etc. Concept-based and event-based video search refers to the retrieval of videos/video fragments (e.g., keyframes) that present specific simple concept labels or more complex events from large-scale video collections, respectively. To deal with concept-based video search, video concept detection methods have been developed that automatically annotate video-fragments with semantic labels (concepts). Then, given a specific concept a ranking component retrieves the top related video fragments for this concept. While significant progress has been made during the last years in video concept detection, it continues to be a difficult and challenging task. This is due to the diversity in form and appearance exhibited by the majority of semantic concepts and the difficulty to express them using a finite number of representations. A recent trend is to learn features directly from the raw keyframe pixels using deep convolutional neural networks (DCNNs). Other studies focus on combining many different video representations in order to capture different perspectives of the visual information. 
Finally, there are studies that focus on multi-task learning in order to exploit concept model sharing, and methods that look for existing semantic relations e.g., concept correlations. In contrast to concept detection, where we most often can use annotated training data&lt;br&gt;
for learning the detectors, in the problem of video event detection we can distinguish two different but equally important cases: when a number of positive examples, or no positive examples at all (&amp;ldquo;zero-example&amp;rdquo; case), are available for training. In the first case, a typical video event detection framework includes a feature extraction and a classification stage, where an event detector is learned by training one or more classifiers for each event class using available features (sometimes similarly to the learning of concept detectors), usually followed by a fusion approach in order to combine different modalities. In the latter case, where solely a textual description is available for each event class, the research community has directed its efforts towards effectively combining textual and visual analysis techniques, such as using text analysis techniques, exploiting large sets of DCNN-based concept detectors and using various re-ranking methods, such as pseudo-relevance feedback, or self-paced re-ranking. In this chapter, we survey the literature and we present our research efforts towards improving concept- and event-based video search. For concept-based video search, we focus on i) feature extraction using hand-crafted and DCNN-based descriptors, ii) dimensionality reduction using accelerated generalised subclass discriminant analysis (AGSDA), iii) cascades of hand-crafted and DCNN-based descriptors, iv) multi-task learning (MTL) to exploit model sharing and v) stacking architectures to exploit concept relations. For video event detection, we focus on methods which exploit positive examples, when available, again using DCNN-based features and AGSDA, and we also develop a framework for zero-example event detection that associates the textual description of an event class with the available visual concepts in order to identify the most relevant concepts regarding the event class. 
Additionally, we present a pseudorelevant feedback mechanism that relies on AGSDA.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="b">Wiley</subfield>
    <subfield code="z">9781119376972</subfield>
    <subfield code="t">Big Data Analytics for Large-Scale Multimedia Search</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1002/9781119376996.ch2</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">section</subfield>
  </datafield>
</record>