2024-03-29T15:58:06Z
https://zenodo.org/oai2d
oai:zenodo.org:809937
2020-01-20T16:28:42Z
user-moving-h2020
openaire
user-eu
Foteini Markatopoulou
Damianos Galanopoulos
Vasileios Mezaris
Ioannis Patras
2017-06-06
<p>This paper presents a fully-automatic method that combines video concept detection and textual query analysis in order to solve the problem of ad-hoc video search. We present a set of NLP steps that cleverly analyse different parts of the query in order to convert it to related semantic concepts, we propose a new method for transforming concept-based keyframe and query representations into a common semantic embedding space, and we show that our proposed combination of concept-based representations with their corresponding semantic embeddings results in improved video search accuracy. Our experiments on the TRECVID AVS 2016 and the Video Search 2008 datasets show the effectiveness of the proposed method compared to other similar approaches.</p>
https://doi.org/10.5281/zenodo.809937
oai:zenodo.org:809937
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.809936
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICMR 2017, 2017 ACM on International Conference on Multimedia Retrieval, Bucharest, Romania, 6-9 June 2017
video concept detection
textual query analysis
Query and Keyframe Representations for Ad-hoc Video Search
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:1309216
2020-01-20T16:33:32Z
user-moving-h2020
Blume, Till
Scherp, Ansgar
2018-06-29
<p>Graph indices are key to managing huge amounts of distributed graph data. Instance-level indices are available that focus on the fast retrieval of nodes. Furthermore, there are so-called schema-level indices focusing on summarizing nodes sharing common characteristics, i.e., the combination of attached types and used property-labels. We argue that there is no one-size-fits-all schema-level index. Rather, a parameterized, formal model is needed that allows one to quickly design, tailor, and compare different schema-level indices. We abstract from related works and provide the formal model FLuID, which uses basic building blocks to flexibly define different schema-level indices. The FLuID model provides parameterized simple and complex schema elements together with four parameters. We show that all indices modeled in FLuID can be computed in O(n). Thus, FLuID enables us to efficiently implement, compare, and validate variants of schema-level indices tailored for specific application scenarios.</p>
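The summarization a schema-level index performs — grouping nodes by their combination of attached types and used property labels — can be sketched as follows (a minimal illustration over an invented toy graph, not the FLuID model itself):

```python
from collections import defaultdict

# Toy graph: node id -> (set of types, set of property labels).
# The node and label names below are invented for illustration.
nodes = {
    "n1": ({"Person"}, {"name", "knows"}),
    "n2": ({"Person"}, {"name", "knows"}),
    "n3": ({"Document"}, {"title"}),
}

def schema_index(nodes):
    """Group nodes sharing the same (types, properties) combination.

    A single pass over the nodes, hence O(n) in the number of nodes,
    in line with the complexity claimed for indices modeled in FLuID.
    """
    index = defaultdict(list)
    for node, (types, props) in nodes.items():
        key = (frozenset(types), frozenset(props))
        index[key].append(node)
    return dict(index)

index = schema_index(nodes)
# n1 and n2 collapse into one schema element; n3 forms its own.
```

Each key of the resulting dictionary plays the role of one (simple) schema element; FLuID's parameters then control which node characteristics flow into that key.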
https://doi.org/10.5281/zenodo.1309216
oai:zenodo.org:1309216
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.1309215
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
GvDB, GI-Workshop on Foundations of Databases, Wuppertal, Germany, 22-25 May 2018
linked data, schema-level indices, formal model
Towards Flexible Indices for Distributed Graph Data: The Formal Schema-level Index Model FLuID
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2649178
2021-03-15T00:27:19Z
user-moving-h2020
user-emma-h2020
user-invid-h2020
Foteini Markatopoulou
Damianos Galanopoulos
Christos Tselepis
Vasileios Mezaris
Ioannis Patras
2019-03-15
<p>Video content can be annotated with semantic information such as simple concept labels that may refer to objects (e.g., “car” and “chair”), activities (e.g., “running” and “dancing”), scenes (e.g., “hills” and “beach”), etc.; or more complex (or high-level) events that describe the main action that takes place in the complete video. An event may refer to complex activities, occurring at specific places and times, which involve people interacting with other people and/or objects, such as “changing a vehicle tire”, “making a cake”, or “attempting a bike trick”. Concept-based and event-based video search refers to the retrieval of videos/video fragments (e.g., keyframes) that present specific simple concept labels or more complex events, respectively, from large-scale video collections. To deal with concept-based video search, video concept detection methods have been developed that automatically annotate video fragments with semantic labels (concepts). Then, given a specific concept, a ranking component retrieves the top related video fragments for this concept. While significant progress has been made in recent years in video concept detection, it continues to be a difficult and challenging task. This is due to the diversity in form and appearance exhibited by the majority of semantic concepts and the difficulty of expressing them using a finite number of representations. A recent trend is to learn features directly from the raw keyframe pixels using deep convolutional neural networks (DCNNs). Other studies focus on combining many different video representations in order to capture different perspectives of the visual information. Finally, there are studies that focus on multi-task learning in order to exploit concept model sharing, and methods that look for existing semantic relations, e.g., concept correlations. In contrast to concept detection, where we most often can use annotated training data for learning the detectors, in the problem of video event detection we can distinguish two different but equally important cases: when a number of positive examples are available for training, or when no positive examples at all are available (the “zero-example” case). In the first case, a typical video event detection framework includes a feature extraction and a classification stage, where an event detector is learned by training one or more classifiers for each event class using available features (sometimes similarly to the learning of concept detectors), usually followed by a fusion approach in order to combine different modalities. In the latter case, where solely a textual description is available for each event class, the research community has directed its efforts towards effectively combining textual and visual analysis techniques, such as using text analysis techniques, exploiting large sets of DCNN-based concept detectors, and using various re-ranking methods, such as pseudo-relevance feedback or self-paced re-ranking.</p>
<p>In this chapter, we survey the literature and present our research efforts towards improving concept- and event-based video search. For concept-based video search, we focus on i) feature extraction using hand-crafted and DCNN-based descriptors, ii) dimensionality reduction using accelerated generalised subclass discriminant analysis (AGSDA), iii) cascades of hand-crafted and DCNN-based descriptors, iv) multi-task learning (MTL) to exploit model sharing, and v) stacking architectures to exploit concept relations. For video event detection, we focus on methods that exploit positive examples, when available, again using DCNN-based features and AGSDA, and we also develop a framework for zero-example event detection that associates the textual description of an event class with the available visual concepts in order to identify the most relevant concepts regarding the event class. Additionally, we present a pseudo-relevance feedback mechanism that relies on AGSDA.</p>
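The zero-example association between a textual event description and visual concepts can be illustrated with a toy similarity ranking (the bag-of-words vectors below are invented for illustration; the actual framework relies on learned semantic embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as dicts."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented vectors for an event description and three concept labels.
event = {"bike": 1.0, "trick": 1.0, "jump": 0.5}
concepts = {
    "bicycle": {"bike": 1.0, "wheel": 0.5},
    "cake":    {"baking": 1.0, "food": 0.8},
    "jumping": {"jump": 1.0},
}

# Rank the available concepts by relevance to the event description;
# the top concepts would then drive concept-based retrieval.
ranked = sorted(concepts, key=lambda c: cosine(event, concepts[c]),
                reverse=True)
```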
https://doi.org/10.1002/9781119376996.ch2
oai:zenodo.org:2649178
Wiley
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/emma-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
concept-based search
event-based search
video concept detection
event detection
Concept-Based and Event-Based Video Search in Large Video Collections
info:eu-repo/semantics/bookPart
oai:zenodo.org:2711027
2020-01-20T16:30:08Z
user-moving-h2020
user-eu
Köhler, Thomas
Igel, Christoph
Wollersheim, Heinz-Werner
2018-09-01
<p>This article is intended as an impulse for a workshop at the GMW Annual Meeting 2018. In particular, it analyzes the development of Technology Enhanced Learning (TEL) and Technology Enhanced Teaching (TET) in academic education with the aim of taking stock of current requirements and educational-technology opportunities. The authors' aim is to derive prospective scenarios for largely digitized teaching, learning and assessment in study programmes, teaching and continuing education at universities in Germany. Against the background of digitization advancing across society as a whole, as well as the high innovation dynamics of artificial intelligence (AI) technologies and the associated change and transformation processes, it can be assumed that the aforementioned TEL and TET scenarios address central developments. In a next step, the prerequisites for, and the readiness of, universities can be specified with regard to infrastructure and teachers' competences, but also with regard to organizational and legal matters. This establishes compatibility with current discussions such as the European Digital Competence Framework for Educators (DigCompEdu, see https://ec.europa.eu/jrc/en/digcompedu) and Artificial Intelligence in Europe (https://ec.europa.eu/digital-singlemarket/en/news/communication-artificial-intelligence-europe), both published in recent months.</p>
https://doi.org/10.5281/zenodo.2711027
oai:zenodo.org:2711027
Waxmann
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.2711026
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Technology Enhanced Learning (TEL); Technology Enhanced Teaching (TET); Academic Education; Forecast; European Digital Competence Framework for Educators
Szenarien des Technology Enhanced Learning (TEL) und Technology Enhanced Teaching (TET) in der akademischen Bildung 2028
info:eu-repo/semantics/bookPart
oai:zenodo.org:2698504
2020-01-20T14:04:10Z
user-moving-h2020
user-eu
Andrzej M.J. Skulimowski
2018-06-05
<p>Large digital knowledge repositories (DKRs) are an important component of Open Science. The latter is a challenging area of rapid technological and social development, where ICT innovations and their use cases are coordinated with research policy measures at different levels, from regional to European. DKR development is deeply rooted in the PEST environment and requires a thorough strategic plan of the social, economic and research impact over a mid- to long-term perspective. It should also be aligned with technological progress, specifically in the emerging areas of Artificial Intelligence, Big Data, Internet of Things, and Global Expert Systems.</p>
<p>Despite the aforementioned relevance, there exist very few publicly accessible DKR strategies or descriptions of strategy-building approaches, and those available refer only to e-learning course repositories. This has created a need to develop methodological foundations for DKR-oriented strategic planning, to build new ICT-based tools and collaborative approaches, and to apply them to satisfy the project goals in the context of EU S&T policies. This paper reports the strategy-building process for an innovative knowledge repository (referred to as the Platform) developed within the flagship Horizon 2020 project “Training towards a society of data-savvy information professionals to enable open leadership innovation” (acronym MOVING, <a href="http://www.moving-project.eu">www.moving-project.eu</a>), its methodological background and outcomes. Relevant input to the strategy results from a forward-looking activity focussed on the identification of internal and environmental factors influencing the future performance and impact of the Platform. It combines a novel four-round/real-time policy and decision Delphi survey with an impact model established with Anticipatory Networks. The forecasting model parameters are those delivered as outcomes of the survey. They are supplied by the project partners, or result from the project Description of Work. The survey results are then used in a final collaborative roadmapping on the Platform exploitation in the PEST context.</p>
<p>The strategy-building process presented here involves two stages. Stage 1 is devoted to establishing the boundary conditions for the Platform’s activity and user community building, while Stage 2 delivers the Final Strategy together with plausible exploitation scenarios. The second stage includes an action plan aimed at ensuring the Platform’s digital sustainability, financial viability and social acceptance. In this presentation of the DKR strategy-building process, we show in more detail the methodology of generating future visions of the Platform’s functioning with a flexible Delphi survey support system that is based on a novel forward extrapolation methodology. It offers a variety of question and/or statement types, sophisticated statistical analysis and other uncertainty-handling methods, as well as a user-friendly interface. It can be run in various modes that suit the survey goals and gather expert knowledge in multiple rounds, as a real-time Delphi, or as a hybrid of both. The cloud-based Delphi application is offered to the project team in SaaS mode, with some PaaS features (<a href="http://www.moving-survey.ipbf.eu">www.moving-survey.ipbf.eu</a>). It can also be used as a basis for designing further customised expert information retrieval and fusion exercises for a broad spectrum of research needs.</p>
Available also from: https://ec.europa.eu/jrc/sites/jrcsh/files/fta2018-paper-c2-skulimowski.pdf
https://doi.org/10.5281/zenodo.2698504
oai:zenodo.org:2698504
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.2698503
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
FTA 2018, 6th International Conference on Future-Oriented Technology Analysis (FTA) - Future in the making, Brussels, Belgium, 4-5 June 2018
Policy Delphi, Strategy building, Knowledge repositories, Information fusion
STRATEGY BUILDING FOR A KNOWLEDGE REPOSITORY WITH A NOVEL EXPERT INFORMATION FUSION TOOL
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1464113
2020-01-20T16:42:50Z
user-moving-h2020
user-eu
Mou, Wenxuan
Tzelepis, Christos
Mezaris, Vasileios
Gunes, Hatice
Patras, Ioannis
2018-10-03
<p>Automatic understanding and analysis of groups has attracted increasing attention in the vision and multimedia communities in recent years. However, little attention has been paid to the automatic analysis of non-verbal behaviors and how this can be utilized for the analysis of group membership, i.e., recognizing which group each individual is part of. This paper presents a novel Support Vector Machine (SVM) based Deep <em>Specific Recognition Model (DeepSRM)</em> that is learned based on a <em>generic recognition model</em>. The <em>generic recognition model</em> refers to the model trained with data across different conditions, i.e., when people are watching movies of different types. Although the <em>generic recognition model</em> can provide a baseline for the recognition model trained for each specific condition, the different behaviors people exhibit in different conditions limit the recognition performance of the generic model. Therefore, the <em>specific recognition model</em> is proposed for each condition separately and built on top of the <em>generic recognition model</em>. A number of experiments are conducted using a database collected to study group analysis, in which each group (i.e., four participants together) was watching a number of long movie segments. Our experimental results show that the proposed <em>deep specific recognition model</em> (44%) outperforms the <em>generic recognition model</em> (26%). The recognition of group membership also indicates that the non-verbal behaviors of individuals within a group share commonalities.</p>
https://doi.org/10.1016/j.imavis.2018.09.005
oai:zenodo.org:1464113
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial No Derivatives 4.0 International
https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode
Image and Vision Computing, (2018-10-03)
Non-verbal behavior analysis, Group membership, Automatic group analysis, Deep learning
A Deep Generic to Specific Recognition Model for Group Membership Analysis using Non-verbal Cues
info:eu-repo/semantics/article
oai:zenodo.org:439051
2020-01-20T16:13:50Z
user-moving-h2020
T. Köhler
A. Scherp
C. Koschtial
C. Felden
S. Herbst
2016-11-01
<p>MOVING project presentation at: Synergie. Fachmagazin für Digitalisierung in der Lehre ( https://www.synergie.uni-hamburg.de )</p>
https://doi.org/10.5281/zenodo.439051
oai:zenodo.org:439051
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
MOVING H2020
eScience-Forschungsmethodik – ein neuer Ansatz für eine kollaborative Wissenschaft
info:eu-repo/semantics/other
oai:zenodo.org:1059011
2020-01-20T16:22:06Z
user-moving-h2020
user-eu
Andrzej M.J. Skulimowski
2017-06-11
<p>This paper presents an overview of the cognitive aspects of the content recommendation process in large heterogeneous knowledge repositories. It also covers applications to the design of algorithms for the incremental learning of users’ preferences, emotions, and satisfaction. This allows the recommendation procedures to align with the present and expected cognitive states of a user, increasing the combined efficiency of recommendation and repository use. The learning algorithm takes into account the results of the cognitive and neural modelling of users’ decision behaviour. Inspirations from nature used in recommendation systems differ from the usual mimicking of biological neural processes. Specifically, a cognitive knowledge recommender may follow a strategy to discover emotional patterns in user behaviour and then adjust the recommendation procedure accordingly. The knowledge of cognitive decision mechanisms helps to optimize recommendation goals. Other cognitive recommendation procedures assist users in creating consistent learning or research groups. The anticipated primary application field of the above algorithms is a large knowledge repository coupled with an innovative training platform developed within the ongoing Horizon 2020 research project MOVING.</p>
This is an author's version of the paper. The original version typeset by Springer and included in LNCS Vol.10246 is available from https://link.springer.com/chapter/10.1007/978-3-319-59060-8_52
https://doi.org/10.1007/978-3-319-59060-8_52
oai:zenodo.org:1059011
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence, 10246, 574-588, (2017-06-11)
ICAISC, The 16th International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 11-15 June 2017
Research recommenders, scientific big data, Personal Learning Environments, preference modelling, mobile and ubiquitous learning
Cognitive Content Recommendation in Digital Knowledge Repositories – a Survey of Recent Trends
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:345104
2020-01-20T15:57:03Z
user-moving-h2020
Böschen, Falk
Scherp, Ansgar
2016-12-31
<p>So far, there has not been a comparative evaluation of different approaches for text extraction from scholarly figures. In order to fill this gap, we have defined a generic pipeline for text extraction that abstracts from the existing approaches as documented in the literature. In this paper, we use this generic pipeline to systematically evaluate and compare 32 configurations for text extraction over four datasets of scholarly figures of different origin and characteristics. In total, our experiments have been run over more than 400 manually labeled figures. The experimental results show that the approach BS-4OS results in the best F-measure of 0.67 for the Text Location Detection and the best average Levenshtein Distance of 4.71 between the recognized text and the gold standard on all four datasets using the Ocropy OCR engine.</p>
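The Levenshtein distance used above as an evaluation measure is the standard edit distance between the recognized text and the gold standard; a generic dynamic-programming implementation (not the paper's evaluation code) looks like this:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between a[:i-1] and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion from a
                curr[j - 1] + 1,           # insertion into a
                prev[j - 1] + (ca != cb),  # substitution (free on match)
            ))
        prev = curr
    return prev[-1]
```

An average distance of 4.71, as reported above, thus means that on average fewer than five character edits separate the extracted text from the manually labeled ground truth.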
https://doi.org/10.1007/978-3-319-51811-4_2
oai:zenodo.org:345104
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
MMM2017, 23rd International Conference on Multimedia Modeling, Reykjavik, Iceland, 04-06 January 2017
Scholarly Figures
Text Extraction
Comparison
A Comparison of Approaches for Automated Text Extraction from Scholarly Figures
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1135136
2020-01-20T16:11:28Z
user-moving-h2020
Saleh, Ahmed
Mai, Florian
Nishioka, Chifumi
Scherp, Ansgar
2018-01-04
<p>An enormous volume of scientific content is published every year. The amount exceeds by far what a scientist can read in her entire life. In order to address this problem, we have developed and empirically evaluated a recommender system for scientific papers based on Twitter postings. In this paper, we improve on the previous work by a reranking approach using Deep Learning. Thus, after a list of top-k recommendations is computed, we rerank the results by employing a neural network to improve the results of the existing recommender system. We present the design of the deep reranking approach and a preliminary evaluation. Our results show that in most cases, the recommendations can be improved using our Deep Learning reranking approach.</p>
https://doi.org/10.5281/zenodo.1135136
oai:zenodo.org:1135136
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.1135135
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
INFORMATIK 2017, 47. Jahrestagung der Gesellschaft für Informatik, Chemnitz, Germany, 25-29 September 2017
recommender systems
deep learning
semantic profiling
Reranking-based Recommender System with Deep Learning
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1137640
2020-01-20T16:29:21Z
user-moving-h2020
openaire
user-eu
Franziska Günther
2018-01-08
<p>This poster introduces the methodology of modelling user requirements for the MOVING platform.</p>
https://doi.org/10.5281/zenodo.1137640
oai:zenodo.org:1137640
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.1137639
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
user requirements
platform
MOVING
Modelling user requirements to develop a platform enabling data-savvy information professionals
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:2606910
2020-01-20T16:50:27Z
user-moving-h2020
Backes, Tobias
2018-10-22
<p>In this work, we address the problem of blocking in the context of author name disambiguation. We describe a framework that formalizes different ways of name-matching to determine which names could potentially refer to the same author. We focus on name variations that follow from specifying a name with different completeness (i.e. full first name or only initial). We extend this framework by a simple way to define traditional, new and custom blocking schemes. Then, we evaluate different old and new schemes in the Web of Science. In this context we define and compare a new type of blocking schemes. Based on these results, we discuss the question whether name-matching can be used in blocking evaluation as a replacement of annotated author identifiers. Finally, we argue that blocking can have a strong impact on the application and evaluation of author disambiguation.</p>
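A typical name-matching blocking scheme of the kind compared in this work — letting names specified with different completeness (full first name or only initial) fall into the same block — can be sketched as follows (an illustrative scheme with invented names, not the paper's exact formalization):

```python
from collections import defaultdict

def blocking_key(name: str) -> str:
    """Block on last name plus first initial, so that 'J. Smith' and
    'John Smith' land in the same block and remain candidates for
    referring to the same author."""
    first, last = name.split(" ", 1)
    return f"{last.lower()}|{first[0].lower()}"

def build_blocks(names):
    """Partition name mentions into blocks; only names within the
    same block are later compared during disambiguation."""
    blocks = defaultdict(list)
    for name in names:
        blocks[blocking_key(name)].append(name)
    return dict(blocks)

blocks = build_blocks(["John Smith", "J. Smith", "Jane Smith", "A. Jones"])
```

Note that the initial-based key also pulls 'Jane Smith' into the same block as 'John Smith' — exactly the kind of over-grouping whose impact on disambiguation the paper examines.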
https://doi.org/10.1145/3269206.3271699
oai:zenodo.org:2606910
eng
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
CIKM '18, The 27th ACM International Conference on Information and Knowledge Management, Torino, October 22–26, 2018
Author Disambiguation
Blocking
Name-Matching
The Impact of Name-Matching and Blocking on Author Disambiguation
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2843254
2020-01-24T19:24:47Z
user-moving-h2020
openaire_data
Böschen, Falk
Scherp, Ansgar
2019-05-15
<p>Scholarly figures are data visualizations like bar charts, pie charts, line graphs, maps, scatter plots or similar figures. Text extraction from scholarly figures is useful in many application scenarios, since text in scholarly figures often contains information that is not present in the surrounding text. This dataset is a corpus of 121 scholarly figures from the economics domain for evaluating text extraction tools. We randomly extracted these figures from a corpus of 288,000 open access publications from <a href="https://www.econbiz.de/">EconBiz</a>. The dataset comprises a wide variety of scholarly figures, from bar charts to maps. We manually labeled the figures to create the gold standard.</p>
<p>We adjusted the provided gold standard to have a uniform format for all datasets. Each figure is accompanied by a TSV file (tab-separated values) where each entry corresponds to a text line which has the following structure:</p>
<ul>
<li>X-coordinate of the center of the bounding box, in pixels</li>
<li>Y-coordinate of the center of the bounding box, in pixels</li>
<li>Width of the bounding box, in pixels</li>
<li>Height of the bounding box, in pixels</li>
<li>Rotation angle around the box center, in degrees</li>
<li>Text inside the bounding box</li>
</ul>
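The per-entry TSV structure above can be parsed into records, for example like this (a sketch; the field order and units follow the list above, and any file name used with it is hypothetical):

```python
import csv
from typing import Iterator, NamedTuple

class TextLine(NamedTuple):
    x_center: float  # pixels
    y_center: float  # pixels
    width: float     # pixels
    height: float    # pixels
    angle: float     # degrees, rotation around the box center
    text: str

def parse_gold_standard(lines) -> Iterator[TextLine]:
    """Parse gold-standard TSV rows, one per text line of a figure.

    `lines` may be an open file object or any iterable of strings,
    e.g. parse_gold_standard(open("figure_001.tsv", newline="")),
    where the file name is hypothetical.
    """
    for row in csv.reader(lines, delimiter="\t"):
        x, y, w, h, angle, text = row
        yield TextLine(float(x), float(y), float(w), float(h),
                       float(angle), text)
```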
<p>In addition, we provide the ground truth in JSON format. A schema file is included in each dataset as well. The dataset is accompanied by a ReadMe file with further information about the figures and their origin.</p>
<p>If you use this dataset in your own work, please cite one of the papers in the references.</p>
https://doi.org/10.5281/zenodo.2843254
oai:zenodo.org:2843254
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2843253
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Text extraction
Evaluation
Scholarly figures
EconBiz Images for Text Extraction from Scholarly Figures
info:eu-repo/semantics/other
oai:zenodo.org:1403700
2020-01-20T16:31:26Z
user-moving-h2020
Blume, Till
Scherp, Ansgar
2018-08-25
<p>Semi-structured, schema-free data formats are used in many applications because their flexibility enables simple data exchange. Especially graph data formats like RDF have become well established in the Web of Data. For the Web of Data, it is known that data instances are not only added, changed, and removed regularly, but that their schemas are also subject to enormous changes over time. Unfortunately, the collection, indexing, and analysis of the evolution of data schemas on the web are still in their infancy. To enable a detailed analysis of the evolution of Linked Open Data, we lay the foundation for the implementation of incremental schema-level indices for the Web of Data. Unlike existing schema-level indices, incremental schema-level indices have an efficient update mechanism to avoid costly recomputations of the entire index. This enables us to monitor changes to data instances at schema-level, trace changes, and ultimately provide an always up-to-date schema-level index for the Web of Data. In this paper, we analyze in detail the challenges of updating arbitrary schema-level indices for the Web of Data. To this end, we extend our previously developed meta model FLuID. In addition, we outline an algorithm for performing the updates.</p>
https://doi.org/10.5281/zenodo.1403700
oai:zenodo.org:1403700
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.1403699
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
LWDA, Lernen. Wissen. Daten. Analysen., Mannheim, Germany, 22-24 August 2018
Incremental schema-level index
Schema computation
LOD
Towards an Incremental Schema-level Index for Distributed Linked Open Data Graphs
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2539243
2020-01-20T17:26:07Z
user-moving-h2020
I. Vagliano
F. Guenther
M. Heinz
A. Apaolaza
I. Bienia
G. Breitfuss
T. Blume
C. Collyda
A. Fessl
S. Gottfried
P. P. Hasitschka
J. Kellermann
T. Koehler
A. Maas
V. Mezaris
A. Saleh
A. Skulimowski
S. Thalmann
M. Vigo
A. Wertner
M. Wiese
A. Scherp
2018-10-17
<p>The MOVING platform enables its users to improve their information literacy by training how to exploit data mining methods in their daily research tasks. Its novel integrated working and training environment supports the education of data-savvy information professionals and allows them to address the big data and open innovation challenges.</p>
https://doi.org/10.1109/MMUL.2018.2873495
oai:zenodo.org:2539243
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
IEEE MultiMedia, 25(3), 8-21, (2018-10-17)
Open Innovation
Open Innovation in the Big Data Era with the MOVING Platform: An Integrated Working and Training Approach for Data-savvy Information Professionals
info:eu-repo/semantics/article
oai:zenodo.org:1193247
2020-01-20T16:19:31Z
user-moving-h2020
Nishioka, Chifumi
Scherp, Ansgar
2017-08-23
<p>Many Linked Open Data applications require fresh copies of RDF data at their local repositories. Since RDF documents constantly change and those changes are not automatically propagated to the LOD applications, it is important to regularly visit the RDF documents to refresh the local copies and keep them up-to-date. For this purpose, crawling strategies determine which RDF documents should be preferentially fetched. Traditional crawling strategies rely only on how an RDF document has been modified in the past. In contrast, we predict on the triple level whether a change will occur in the future. We use the weekly snapshots of the DyLDO dataset as well as the monthly snapshots of the Wikidata dataset. First, we conduct an in-depth analysis of the life span of triples in RDF documents. Through the analysis, we identify which triples are stable and which are ephemeral. We introduce different features based on the triples and apply a simple but effective linear regression model. Second, we propose a novel crawling strategy based on the linear regression model. We conduct two experimental setups where we vary the amount of available bandwidth as well as iteratively observe the quality of the local copies over time. The results demonstrate that the novel crawling strategy outperforms the state of the art in both setups.</p>
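The crawl-prioritization idea — fit a linear model on change-history features, then fetch first the documents predicted most likely to change — can be sketched with a toy single-feature least-squares fit (the feature and training data below are invented for illustration, not the paper's feature set):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Invented training data: fraction of ephemeral triples observed in a
# document vs. whether the document changed in the next snapshot.
history = [(0.1, 0.0), (0.2, 0.0), (0.6, 1.0), (0.9, 1.0), (0.5, 1.0)]
a, b = fit_line([h[0] for h in history], [h[1] for h in history])

# Crawl ordering: fetch documents with the highest predicted change
# score first, so the limited bandwidth refreshes the stalest copies.
docs = {"doc_a": 0.15, "doc_b": 0.8, "doc_c": 0.4}
crawl_order = sorted(docs, key=lambda d: a * docs[d] + b, reverse=True)
```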
https://doi.org/10.1145/3106426.3106463
oai:zenodo.org:1193247
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Web crawling
Resource Description Framework (RDF)
Data management systems
Temporal data
Keeping linked open data caches up-to-date by predicting the life-time of RDF triples
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1318202
2020-01-20T17:36:08Z
user-moving-h2020
Andrzej M.J. Skulimowski
2018-07-20
<p>The essence of the online Delphi method is multiple-round interviews of an expert group, whose members answer questions in a structured e-survey and verify hypotheses called Delphi statements or questions. These can be defined by the client contracting the study or by the core experts employed by the Delphi supplier, usually a consulting or research institution. The questions can have various forms: binary ("yes/no"), qualitative (Likert-scale-valued), or quantitative. A flexible survey system should offer a variety of question and/or statement types, as well as a user-friendly interface for replying to or verifying them. Additionally, the replies should be analyzed with rigorous statistical and uncertainty-handling methods to yield consistent recommendations to the survey stakeholders. The 'decision Delphi' is a survey variant that corresponds most closely to the needs of organizations that look for expert knowledge concerning specific technological, market or other business problems. Its characteristic feature is the participation of the client's staff or the decision makers themselves. The cloud-based application Forgnosis™ is a modern implementation of a decision Delphi endowed with sophisticated analytic features. This paper presents its capabilities based on a recent survey and analysis case carried out for the EU Horizon 2020 flagship project MOVING. The system is offered in SaaS mode, with some PaaS functionalities; the latter include a variety of programming tools for designing an organization's customized survey.</p>
https://doi.org/10.1109/SOCA.2017.33
oai:zenodo.org:1318202
eng
IEEE
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
10th SOCA, IEEE 10th Conference on Service-Oriented Computing and Applications (SOCA), 2017, Kanazawa, Japan, November 22-25, 2017
online Delphi survey; corporate foresight; multi-round Delphi; statistical analysis; group decision support; strategic decision making; technological scenarios
Expert Delphi Survey as a Cloud-Based Decision Support Service
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:61397
2020-01-20T15:18:20Z
user-moving-h2020
openaire
user-eu
Nishioka, Chifumi
Scherp, Ansgar
2016-05-30
<p>The Linked Open Data (LOD) cloud is expanding continuously. Entities appear, change, and disappear over time. However, relatively little is known about the dynamics of the entities, i.e., the characteristics of their temporal evolution. In this paper, we employ clustering techniques over the dynamics of entities to determine common temporal patterns. We define an entity as an RDF resource together with its attached RDF types and properties. The quality of the clusterings is evaluated using entity features such as the entities' properties, RDF types, and pay-level domain. In addition, we investigate to what extent entities that share a feature value change together over time. As dataset, we use weekly LOD snapshots over a period of more than three years provided by the Dynamic Linked Data Observatory. Insights into the dynamics of entities on the LOD cloud have strong practical implications for any application requiring fresh caches of LOD. Applications range from determining crawling strategies for LOD and caching SPARQL queries to programming against LOD and recommending vocabularies for LOD reuse.</p>
https://doi.org/10.5281/zenodo.61397
oai:zenodo.org:61397
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Information-theoretic Analysis of Entity Dynamics on the Linked Open Data Cloud
info:eu-repo/semantics/lecture
oai:zenodo.org:183125
2020-01-20T16:57:14Z
user-moving-h2020
user-invid-h2020
openaire
user-eu
Markatopoulou, Foteini
Moumtzidou, Anastasia
Galanopoulos, Damianos
Mironidis, Theodoros
Kaltsa, Vagia
Ioannidou, Anastasia
Symeonidis, Spyridon
Avgerinakis, Konstantinos
Andreadis, Stelios
Gialampoukidis, Ilias
Vrochidis, Stefanos
Briassouli, Alexia
Mezaris, Vasileios
Kompatsiaris, Ioannis
Patras, Ioannis
2016-11-14
<p>This paper provides an overview of the runs submitted to TRECVID 2016 by ITI-CERTH. ITI-CERTH participated in the Ad-hoc Video Search (AVS), Multimedia Event Detection (MED), Instance Search (INS) and Surveillance Event Detection (SED) tasks. Our AVS task participation is based on a method that combines the linguistic analysis of the query and the concept-based annotation of video fragments. In the MED task, for the 000Ex subtask we exploit the textual description of an event class in order to retrieve related videos, without using positive samples. Furthermore, in the 010Ex and 1000Ex subtasks, a kernel subclass version of our discriminant analysis method (KSDA) combined with a fast linear SVM is employed. The INS task is performed by employing VERGE, which is an interactive retrieval application that integrates retrieval functionalities that consider only visual information. For the surveillance event detection (SED) task, we deployed a novel activity detection algorithm that is based on Motion Boundary Activity Areas (MBAA), dense trajectories, Fisher vectors and an overlapping sliding window.</p>
https://doi.org/10.5281/zenodo.183125
oai:zenodo.org:183125
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ITI-CERTH
TRECVID 2016
ITI-CERTH participation in TRECVID 2016
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:2542818
2020-01-20T15:47:27Z
user-moving-h2020
Günther, Franziska
Barthold, Sabine
2019-01-17
<p>The MOOC is being developed at the Media Center of TU Dresden and will be hosted on the MOVING platform, where users can register free of charge. The goal of the MOOC is to support students and young researchers in acquiring academic information literacy for Web 2.0 environments and to give them an overview of open science. To this end, we designed a curriculum that offers young researchers of all disciplines a broad introduction to Science 2.0 and open research methods. The MOOC will help young academics understand how they can use the possibilities of the Internet to find, retrieve, and use information for their own research, organize knowledge, develop new ideas, and build networks with scientists, public institutions, civil society, and private companies in order to establish a culture of openness. The following article describes the design and development process of the MOOC.</p>
https://doi.org/10.5281/zenodo.2542818
oai:zenodo.org:2542818
deu
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2542817
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Wissenschaft 2.0 und offene Forschungsmethoden vermitteln: Der MOOC "Science 2.0 and open research methods"
info:eu-repo/semantics/article
oai:zenodo.org:2640596
2020-01-20T14:17:50Z
user-moving-h2020
user-eu
Konstantinos Avgerinakis
Anastasia Moumtzidou
Damianos Galanopoulos
Georgios Orfanidis
Stelios Andreadis
Foteini Markatopoulou
Elissavet Batziou
Konstantinos Ioannidis
Stefanos Vrochidis
Vasileios Mezaris
Ioannis Kompatsiaris
2018-11-04
<p>This paper provides an overview of the runs submitted to TRECVID 2018 by ITI-CERTH. ITI-CERTH participated in the Ad-hoc Video Search (AVS), Instance Search (INS) and Activities in Extended Video (ActEV) tasks. CERTH's AVS task participation is based on a method that combines the linguistic analysis of the query with concept-based and semantic-embedding representations of video fragments. The INS task is performed by employing VERGE, which is an interactive retrieval application that integrates retrieval functionalities that consider mainly visual information. For the ActEV task, CERTH deploys a novel activity detection algorithm that is based on human detection in video frames, goal descriptors, dense trajectories, Fisher vectors and a discriminative action segmentation scheme.</p>
https://doi.org/10.5281/zenodo.2640596
oai:zenodo.org:2640596
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.2640595
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
TRECVID, TREC Video Retrieval Evaluation, Gaithersburg, MD, USA, November 4-7, 2018
TRECVID
ITI-CERTH participation in TRECVID 2018
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:159243
2020-01-20T15:52:14Z
user-moving-h2020
user-invid-h2020
openaire
user-eu
Christos Tzelepis
Eftichia Mavridaki
Vasileios Mezaris
Ioannis Patras
2016-09-25
<p>In this paper we propose a video aesthetic quality assessment method that combines the representation of each video according to a set of photographic and cinematographic rules, with the use of a learning method that takes the video representation's uncertainty into consideration. Specifically, our method exploits the information derived from both low- and high-level analysis of video layout, leading to a photo- and motion-based video representation scheme. Subsequently, a kernel Support Vector Machine (SVM) extension, the KSVM-iGSU, is trained to classify the videos and retrieve those of high aesthetic value. Experimental results on our large dataset verify the effectiveness of the proposed method. We also make publicly available our dataset, in order to facilitate research in the area of video aesthetic quality assessment.</p>
https://doi.org/10.5281/zenodo.159243
oai:zenodo.org:159243
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICIP 2016, IEEE International Conference on Image Processing, Phoenix Convention Center, Phoenix, Arizona 85004 USA, 25-28 September 2016
Video aesthetic quality assessment
Rules of photography and cinematography
Support vector machine
Video representation uncertainty
VIDEO AESTHETIC QUALITY ASSESSMENT USING KERNEL SUPPORT VECTOR MACHINE WITH ISOTROPIC GAUSSIAN SAMPLE UNCERTAINTY (KSVM-IGSU)
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:1462432
2020-01-24T19:26:25Z
user-moving-h2020
openaire_data
user-eu
D. Galanopoulos
V. Mezaris
2018-10-15
<p>We provide a large-scale lecture video dataset consisting of artificially-generated lectures, and the corresponding ground-truth fragmentation, for the purpose of evaluating lecture video fragmentation techniques.</p>
<p>For creating this dataset, 1498 speech transcript files (generated automatically by ASR software) were used from the world's biggest academic online video repository, VideoLectures.NET. These transcripts correspond to lectures from various fields of science, such as Computer science, Mathematics, Medicine, Politics etc. In order to create the synthetic video lectures, all transcripts were randomly split into fragments, the duration of which ranges between 4 and 8 minutes. Each synthetic lecture was then assembled by combining (stitching) exactly 20 randomly selected fragments. 300 such artificially-generated lectures are included in the released dataset. Each such lecture file has a mean duration of about 120 minutes, thus the dataset contains altogether about 600 hours of artificially-generated lectures. Every pair of consecutive fragments in these lectures originally comes from different videos, consequently the point in time where two such fragments are joined is a known ground-truth fragment boundary. All these boundaries form the dataset's ground truth. We should stress that we do not generate the corresponding video files for the artificially-generated lectures (only the transcripts), and one should not try to reverse-engineer the dataset creation process so as to use in some way the visual modality for detecting the fragments in this dataset.</p>
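The construction recipe above (4-8 minute fragments, 20 per synthetic lecture, with stitching points as ground truth) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code; all names are placeholders.

```python
# Rough sketch of the dataset-construction idea: stitch k randomly chosen
# transcript fragments into one synthetic lecture, recording the boundaries.
import random

def make_synthetic_lecture(fragments, k=20, rng=random):
    """fragments: list of (text, duration_seconds) pairs.
    Returns the stitched transcript text and the k ground-truth boundaries
    (cumulative end times of the chosen fragments)."""
    chosen = rng.sample(fragments, k)  # k distinct fragments
    parts, boundaries, t = [], [], 0.0
    for text, duration in chosen:
        parts.append(text)
        t += duration
        boundaries.append(t)  # known ground-truth fragment boundary
    return " ".join(parts), boundaries
```

In the released dataset the fragments come from different source videos, so every recorded boundary is a genuine topical transition.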
<p><strong>File format</strong></p>
<p>After you download the provided .zip and unpack it, the extracted folder will contain two sub-folders:</p>
<pre><code>1. ALV_srt
2. ALV_srt_GT
</code></pre>
<p>Each of them contains 300 files.</p>
<p>The <strong>ALV_srt</strong> folder contains the transcripts of every artificially-generated lecture, in the standard SRT format:</p>
<pre><code>1. A numeric counter identifying each sequential subtitle
2. The time that the subtitle should appear on the screen, followed by --> and the time it should disappear
3. Subtitle's text itself on one or more lines
4. A blank line containing no text
</code></pre>
<p>The <strong>ALV_srt_GT</strong> folder contains the ground truth (GT) fragments corresponding to the lectures (transcripts) of the <strong>ALV_srt</strong> folder. Each GT file consists of 3 tab-separated columns and 20 rows, in the following format:</p>
<pre><code><Fragment_ID_1> <StartTime_1> <EndTime_1>
<Fragment_ID_2> <StartTime_2> <EndTime_2>
<Fragment_ID_3> <StartTime_3> <EndTime_3>
.
.
.
<Fragment_ID_20> <StartTime_20> <EndTime_20>
</code></pre>
<p>Each row indicates a fragment. The first column indicates the ID of a fragment while the second and the third column indicate the start and the end time of the fragment respectively.</p>
<p><strong>License and Citation</strong></p>
<p>This dataset is provided for academic, non-commercial use only. If you find this dataset useful in your work, please cite the following publication where the dataset is introduced:</p>
<p><em>D. Galanopoulos, V. Mezaris, “Temporal Lecture Video Fragmentation using Word Embeddings”, Proc. 25th Int. Conf. on Multimedia Modeling (MMM2019), Thessaloniki, Greece, Jan. 2019.</em></p>
<p><strong>Acknowledgements</strong></p>
<p>This work was supported by the EU’s Horizon 2020 research and innovation programme under grant agreement No 693092 MOVING. We are grateful to JSI/VideoLectures.NET for providing the lectures’ transcripts.</p>
https://doi.org/10.5281/zenodo.1462432
oai:zenodo.org:1462432
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.1462431
info:eu-repo/semantics/openAccess
Creative Commons Attribution Share Alike 4.0 International
https://creativecommons.org/licenses/by-sa/4.0/legalcode
Artificially-generated Lecture Video Fragmentation Dataset and Ground Truth
info:eu-repo/semantics/other
oai:zenodo.org:2548392
2020-01-24T19:24:57Z
user-moving-h2020
openaire_data
Sven Lüdeke
Till Blume
Ansgar Scherp
2019-01-25
<p>The results from our experiments.</p>
https://doi.org/10.5281/zenodo.2548392
oai:zenodo.org:2548392
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2548391
info:eu-repo/semantics/openAccess
GNU General Public License v2.0 only
https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html
IMPULSE: Integrate Public Metadata Underneath professional Library SErvices
info:eu-repo/semantics/other
oai:zenodo.org:159236
2020-01-20T16:36:05Z
user-moving-h2020
user-invid-h2020
user-eu
Christos Tzelepis
Eftichia Mavridaki
Vasileios Mezaris
Ioannis Patras
2016-09-25
<p>In this paper we propose a video aesthetic quality assessment method that combines the representation of each video according to a set of photographic and cinematographic rules, with the use of a learning method that takes the video representation's uncertainty into consideration. Specifically, our method exploits the information derived from both low- and high-level analysis of video layout, leading to a photo- and motion-based video representation scheme. Subsequently, a kernel Support Vector Machine (SVM) extension, the KSVM-iGSU, is trained to classify the videos and retrieve those of high aesthetic value. Experimental results on our large dataset verify the effectiveness of the proposed method. We also make publicly available our dataset, in order to facilitate research in the area of video aesthetic quality assessment.</p>
https://doi.org/10.1109/ICIP.2016.7532791
oai:zenodo.org:159236
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICIP 2016, IEEE International Conference on Image Processing, Phoenix Convention Center, Phoenix, Arizona 85004 USA, 25-28 September 2016
Video aesthetic quality assessment
Rules of photography and cinematography
Support vector machine
Video representation uncertainty
VIDEO AESTHETIC QUALITY ASSESSMENT USING KERNEL SUPPORT VECTOR MACHINE WITH ISOTROPIC GAUSSIAN SAMPLE UNCERTAINTY (KSVM-IGSU)
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:61399
2020-01-20T14:10:59Z
user-moving-h2020
openaire
Nishioka, Chifumi
Scherp, Ansgar
2016-06-22
<p>So far it is unclear how different factors of a scientific publication recommender system based on users' tweets influence the recommendation performance. We examine three different factors, namely profiling method, temporal decay, and richness of content. Regarding profiling, we compare CF-IDF, which replaces terms in TF-IDF by semantic concepts, HCF-IDF, a novel hierarchical variant of CF-IDF, and topic modeling. As temporal decay functions, we apply sliding window and exponential decay. In terms of the richness of content, we compare recommendations using both full-texts and titles of publications and using only titles. Overall, combinations of the three factors yield twelve recommendation strategies. We have conducted an online experiment with 123 participants and compared the strategies in a within-group design. The best recommendations are achieved by the strategy combining CF-IDF, a sliding window, and full-texts. However, the strategies using the novel HCF-IDF profiling method achieve similar results using just the titles of the publications. Therefore, HCF-IDF can make recommendations when only short and sparse data is available.</p>
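The exponential temporal-decay factor mentioned above can be illustrated with a toy profile-weighting sketch. The half-life value and the data are arbitrary assumptions for illustration, not the paper's setting.

```python
# Toy sketch: aggregate term weights from a user's tweets, where older
# occurrences contribute exponentially less.

def decayed_weight(age_days, half_life_days=30.0):
    """Weight halves every half_life_days (assumed half-life)."""
    return 0.5 ** (age_days / half_life_days)

def profile_weights(occurrences):
    """occurrences: iterable of (term, age_days) -> {term: summed weight}."""
    weights = {}
    for term, age in occurrences:
        weights[term] = weights.get(term, 0.0) + decayed_weight(age)
    return weights
```

A sliding window, by contrast, would simply discard occurrences older than the window instead of down-weighting them.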
https://doi.org/10.5281/zenodo.61399
oai:zenodo.org:61399
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Profiling vs. Time vs. Content: What Does Matter for Top-k Publication Recommendation Based on Twitter Profiles?
info:eu-repo/semantics/lecture
oai:zenodo.org:2563204
2020-01-20T16:29:31Z
user-moving-h2020
openaire
user-eu
Günther, Franziska
2019-01-24
<p>This presentation was held on the 24th of January within the lecture "Research Analytics", which is regularly organized by the Saxon State and University Library Dresden. The aim was to present the MOOC "Science 2.0 and open research methods" to researchers and librarians.</p>
https://doi.org/10.5281/zenodo.2563204
oai:zenodo.org:2563204
deu
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.2563203
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
MOOC, Open Science, Science 2.0, Open research methods
Wissenschaft 2.0 und offene Forschungsmethoden vermitteln: Der MOOC "Science 2.0 and open research methods"
info:eu-repo/semantics/lecture
oai:zenodo.org:1421610
2020-01-20T14:48:11Z
user-moving-h2020
openaire
user-eu
Yu He
2018-09-03
<p>Presentation given for doctoral consortium at the EC-TEL conference (EUROPEAN CONFERENCE ON TECHNOLOGY ENHANCED LEARNING) - Lifelong technology enhanced learning: Dealing with the complexity of 21st century challenges, Leeds (UK), 03 - 06 September 2018</p>
https://doi.org/10.5281/zenodo.1421610
oai:zenodo.org:1421610
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.1421609
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Inferring knowledge acquisition through Web navigation behaviour
info:eu-repo/semantics/lecture
oai:zenodo.org:572268
2020-01-20T12:05:01Z
user-moving-h2020
openaire
user-eu
Skulimowski, Andrzej M.J.
Skulimowski, Andrzej M.J.
2016-06-27
<p>This paper presents an application of roadmapping methodology to establish a strategic plan for an innovative knowledge repository providing dynamically updated economic information, online courses and other data. The repository will fulfill relevant information provision tasks for a few target communities of users, including financial auditors and academia. The diversified scope of planned actions contributed to a higher degree of complexity of strategy planning optimization. This was the reason to choose roadmapping as a framework for the decision-support and selection process. First, we present an extension of the roadmapping methodology that allows us to solve the above planning problem. It turns out that, as far as the financial criteria are concerned, real options are a natural and useful tool to describe the relations between different variants of the roadmapping objects' deployment plans corresponding to the planning scenarios and the associated financial yields. Moreover, we show that the financial valuation of operation plans may be easily combined with the SWOTC (SWOT with Challenges) assessment of roadmapping objects. Rights gained during the repository operation as well as the liabilities can be modeled by long or short real-option positions, respectively. The iterative dependence of future investment opportunities on previous outcomes will be modeled by nested real options, and embedded into an anticipatory network that allows us to model the expected consequences of implementing an operation strategy. The provision of new content and services on the platform will be modelled as an innovation development and market placement problem (NPD-MP). The latter is a dynamic four-criteria problem with options-enhanced NPV (ENPV), aggregating subordinated momentary financial performance criteria, options-affected risk (ER), social impact index (SII), and the Strategic Position Index (SPI). The resulting multicriteria optimization problem will be solved during an interactive group decision procedure with the roadmapping methodology. As a final result, we provide an example of building an exploitation strategy for the digital platform established within the H2020 project MOVING.</p>
https://doi.org/10.5281/zenodo.572268
oai:zenodo.org:572268
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICMFII, 6th International Conference on Multidimesional Finance, Insurance and Investment, Alcoy (Spain), June 26-29, 2016
Digital knowledge platform
Strategic planning
Technological roadmapping
Multicriteria decision making
Anticipatory networks
Real options
Including Financial Criteria in the Strategic Planning of Knowledge Repository Operation
info:eu-repo/semantics/lecture
oai:zenodo.org:809939
2020-01-20T14:45:15Z
user-moving-h2020
openaire
user-eu
Damianos Galanopoulos
Foteini Markatopoulou
Vasileios Mezaris
Ioannis Patras
2017-06-06
<p>Zero-example event detection is a problem where, given an event query as input but no example videos for training a detector, the system retrieves the most closely related videos. In this paper we present a fully-automatic zero-example event detection method that is based on translating the event description to a predefined set of concepts for which previously trained visual concept detectors are available. We adopt the use of Concept Language Models (CLMs), which is a method of augmenting semantic concept definition, and we propose a new concept-selection method for deciding on the appropriate number of concepts needed to describe an event query. The proposed system achieves state-of-the-art performance in automatic zero-example event detection.</p>
https://doi.org/10.5281/zenodo.809939
oai:zenodo.org:809939
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.809938
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Zero-example event detection
Concept Language Models (CLMs)
Concept Language Models and Event-based Concept Number Selection for Zero-example Event Detection
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:2862549
2020-01-20T15:08:47Z
user-moving-h2020
Abdel-Qader, Mohammad
Vagliano, Iacopo
Scherp, Ansgar
2019-05-16
<p>Reusing terms results in a Network of Linked vOcabularies (NeLO), where the nodes are the vocabularies that use at least one term from some other vocabulary and thus depend on each other. These dependencies become a problem when vocabularies in the network change, e.g., when terms are deprecated or deleted. In these cases, all dependent vocabularies in the network need to be updated. So far, there has been no study that analyzes vocabulary changes in NeLO over time. To address this shortcoming, we compute the state of NeLO from the available versions of the vocabularies over 17 years. We analyze static parameters of NeLO such as its size, density, average degree, and the most important vocabularies at certain points in time. We further investigate how NeLO changes over time. Specifically, we measure the impact of a change in one vocabulary on others, how the reuse of terms changes, and how the importance of vocabularies changes. Our analyses provide for the first time in-depth insights into the structure and evolution of NeLO. This study helps ontology engineers to identify shortcomings of the data modeling and to assess the dependencies implied by reusing a specific vocabulary.</p>
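The static network parameters mentioned above (size, density, average degree) can be computed for a toy dependency graph as in this sketch; the vocabulary names are placeholders, and directed edges stand for "reuses a term from".

```python
# Sketch: basic statistics of a directed vocabulary-dependency graph given
# as a set of (from_vocab, to_vocab) reuse edges, without self-loops.

def graph_stats(edges):
    nodes = {v for edge in edges for v in edge}
    n, m = len(nodes), len(edges)
    density = m / (n * (n - 1)) if n > 1 else 0.0  # directed-graph density
    avg_out_degree = m / n if n else 0.0
    return n, m, density, avg_out_degree
```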
This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Bakaev M., Frasincar F., Ko IY. (eds) Web Engineering. ICWE 2019. Lecture Notes in Computer Science, vol 11496. Springer, Cham, https://doi.org/10.1007/978-3-030-19274-7_29.
https://doi.org/10.1007/978-3-030-19274-7_29
oai:zenodo.org:2862549
eng
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Linked Data
Vocabularies
Ontologies
Evolution over time
Network analysis
Analyzing the Evolution of Linked Vocabularies
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2539272
2020-01-20T17:34:38Z
user-moving-h2020
Damianos Galanopoulos
Vasileios Mezaris
2019-01-08
<p>In this work the problem of temporally fragmenting video lectures into meaningful parts is addressed. The visual channel of a lecture video cannot be effectively used for this task due to its extremely homogeneous content. We propose a new method for lecture video fragmentation that exploits only the automatically generated speech transcripts of a video. Contrary to previously proposed works that employ visual, audio and textual features and use time-consuming supervised methods which require annotated training data, we present a method that analyses the transcripts' text with the help of word embeddings generated from pre-trained state-of-the-art neural networks. Furthermore, we address a major problem of video lecture fragmentation research, which is the lack of large-scale datasets for evaluation, by presenting a new artificially-generated dataset of synthetic video lecture transcripts that we make publicly available. Experimental comparisons document the merit of the proposed approach.</p>
https://doi.org/10.5281/zenodo.2539272
oai:zenodo.org:2539272
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2539271
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
MMM 2019, 25th International Conference on MultiMedia Modeling, Thessaloniki, Greece, 08-11 January 2019
Lecture Video Fragmentation
Word Embeddings
Video Segmentation
Temporal Lecture Video Fragmentation using Word Embeddings
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1285591
2020-01-20T16:37:07Z
user-moving-h2020
Böschen, Falk
Beck, Tilman
Scherp, Ansgar
2018-06-02
<p>Different approaches have been proposed in the past to address the challenge of extracting text from scholarly figures. However, until recently, no comparative evaluation of the different approaches had been conducted. Thus, we performed an extensive study of the related work and evaluated in total 32 different approaches. In this work, we perform a more detailed comparison of the 7 most relevant approaches described in the literature and extend it to 37 systematic linear combinations of methods for extracting text from scholarly figures. Our generic pipeline, consisting of six steps, allows us to freely combine the different possible methods and perform a fair comparison. Overall, we have evaluated 44 different linear pipeline configurations and systematically compared the different methods. We then derived two non-linear configurations and a two-pass approach. We evaluate all pipeline configurations over four datasets of scholarly figures of different origin and characteristics. The quality of the extraction results is assessed using F-measure and Levenshtein distance, and we measure the runtime performance. Our experiments showed that there is a linear configuration that overall shows the best text extraction quality on all datasets. Further experiments showed that the best configuration can be improved by extending it to a two-pass approach. Regarding the runtime, we observed huge differences, from very fast approaches to those running for several weeks. Our experiments found the best working configuration for text extraction from our method set. However, they also showed that further improvements regarding region extraction and classification are needed.</p>
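One of the two quality measures used above, Levenshtein distance, can be sketched as a compact dynamic program; this is the standard textbook formulation, not the authors' implementation.

```python
# Standard dynamic-programming Levenshtein (edit) distance between two
# strings, keeping only the previous row of the DP table.

def levenshtein(a, b):
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```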
https://doi.org/10.1007/s11042-018-6162-7
oai:zenodo.org:1285591
eng
Zenodo
https://doi.org/10.1007/978-3-319-51811-4_2
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Multimedia Tools and Applications, (2018-06-02)
Scholarly Figures
Text Extraction
Comparison
Survey and empirical comparison of different approaches for text extraction from scholarly figures
info:eu-repo/semantics/article
oai:zenodo.org:2873195
2020-01-24T19:23:36Z
user-moving-h2020
openaire_data
Nishioka, Chifumi
Scherp, Ansgar
2019-05-16
<p>This is the evaluation result from the experiment in the paper: C. Nishioka and A. Scherp "Profiling vs. Time vs. Content: What does Matter for Top-k Publication Recommendation based on Twitter Profiles?", JCDL, 2016. The paper reported the experiment of the scientific publication recommender system. The dataset contains evaluations of the recommended scientific publications made by 123 anonymized participants in the experiment.</p>
<p>Please refer to <a href="http://dx.doi.org/10.7802/1224">http://dx.doi.org/10.7802/1224</a> for a detailed descriptions of the dataset.</p>
https://doi.org/10.7802/1224
oai:zenodo.org:2873195
eng
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
EconBizRecSys evaluation result
info:eu-repo/semantics/other
oai:zenodo.org:1135093
2020-01-20T16:49:25Z
user-moving-h2020
user-eu
Ansgar Scherp
Vasileios Mezaris
Thomas Köhler
Alexander Hauptmann
2017-10-25
<p>Educational and Knowledge Technologies (EdTech), especially in connection with multimedia content and the vision of mobile and personalized learning, is a hot topic in both academia and the business start-up ecosystem. The driver and enabler of this is, on the one side, the development and widespread availability of multimedia materials and MOOCs, which represent multimedia content produced specifically for supporting e-learning; and, on the other side, the ever-increasing availability of all sorts of information on the Internet and in social media channels (e.g. lectures, research papers, user-generated videos, news items), which, despite not directly targeting e-learning, can prove to be a valuable complement to the more targeted learning materials. Although the availability of such content is not a problem these days, finding the right content and associating different relevant pieces of multimedia so as to enable a comprehensive learning experience on a chosen subject is by no means a trivial task. This workshop presents research in areas related to multimedia-based educational and knowledge technologies, and particularly on the use of multimedia search and retrieval, analysis and understanding, browsing, summarization, recommendation, and visualization technologies on multimedia content available in specialized learning platforms, the Web, mobile devices and/or social networks for supporting personalized and adaptive e-learning and training.</p>
https://doi.org/10.1145/3123266.3132056
oai:zenodo.org:1135093
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ACMMM17, 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 25-27 October 2017
Educational technologies
EdTech
Knowledge technologies
Multimedia
MultiEdTech 2017: 1st International Workshop on Multimedia-based Educational and Knowledge Technologies for Personalized and Social Online Training
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1195866
2020-01-20T16:12:53Z
user-moving-h2020
user-eu
Fessl, Angela
Pammer, Viktoria
Wiese, Michael
Thalmann, Stefan
2017-09-12
<p>Financial auditors routinely search internal as well as public knowledge bases as part of the auditing process. Efficient search strategies are crucial for knowledge workers in general and for auditors in particular. Modern search technology evolves quickly, and features beyond keyword search, like faceted search or visual overviews of knowledge bases such as graph visualisations, emerge. It is therefore desirable for auditors to learn about new innovations and to explore and experiment with such technologies. In this paper, we present a reflection intervention concept that intends to nudge auditors to reflect on their search behaviour and to trigger informal learning by trying out new or less frequently used search features. The reflection intervention concept has been tested in a focus group with six auditors using a mockup. Foremost, the discussion centred on the timing of reflection interventions and on how to raise motivation to achieve a change in search behaviour.</p>
https://doi.org/10.5281/zenodo.1195866
oai:zenodo.org:1195866
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.1195865
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Reflection Intervention, Reflective Learning, Social Comparison, Search Behaviour Improvement.
Improving Search Strategies of Auditors – A Focus Group on Reflection Interventions
info:eu-repo/semantics/other
oai:zenodo.org:55801
2020-01-20T16:45:23Z
user-moving-h2020
user-invid-h2020
user-eu
Tzelepis, Christos
Ma, Zhigang
Mezaris, Vasileios
Ionescu, Bogdan
Kompatsiaris, Ioannis
Boato, Giulia
Sebe, Nicu
Yan, Shuicheng
2016-05-13
<p>Research on event-based processing and analysis of media is receiving increasing attention from the scientific community due to its relevance for an abundance of applications, from consumer video management and video surveillance to lifelogging and social media. Events have the ability to semantically encode relationships of different informational modalities, such as visual-audio-text, time, involved agents and objects, with the spatio-temporal component of events being a key feature for contextual analysis. This unveils an enormous potential for exploiting new information sources and opening new research directions. In this paper, we survey the existing literature in this field. We extensively review the employed conceptualization of the notion of event in multimedia, the techniques for event representation and modeling, and the feature representation and event inference approaches for the problems of event detection in audio, visual, and textual content. Furthermore, we review some key event-based multimedia applications, and various benchmarking activities that provide solid frameworks for measuring the performance of different event processing and analysis systems. We provide an in-depth discussion of the insights obtained from reviewing the literature and identify future directions and challenges.</p>
https://doi.org/10.1016/j.imavis.2016.05.005
oai:zenodo.org:55801
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial No Derivatives 4.0 International
https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode
Image and Vision Computing, (2016-05-13)
Event-based media processing and analysis
event conceptualization
event representation and modeling
multimedia event detection
event-based applications and benchmarking
survey of the literature
Event-based Media Processing and Analysis: A Survey of the Literature
info:eu-repo/semantics/article
oai:zenodo.org:1285833
2020-01-20T17:21:23Z
user-moving-h2020
Abdel-Qader, Mohammad
Scherp, Ansgar
Vagliano, Iacopo
2018-06-03
<p>Vocabularies are used for modeling data in Knowledge Graphs (KGs) like the Linked Open Data Cloud and Wikidata. During their lifetime, vocabularies are subject to changes. New terms are coined, while existing terms are modified or deprecated. We first quantify the amount and frequency of changes in vocabularies. Subsequently, we investigate to what extent and when the changes are adopted in the evolution of KGs. We conduct our experiments on three large-scale KGs: the Billion Triples Challenge datasets, the Dynamic Linked Data Observatory dataset, and Wikidata. Our results show that the change frequency of terms is rather low, but can have high impact due to the large amount of distributed graph data on the web. Furthermore, not all coined terms are used and most of the deprecated terms are still used by data publishers. The adoption time of terms coming from different vocabularies ranges from very fast (a few days) to very slow (a few years). Surprisingly, we could observe some adoptions before the vocabulary changes were published. Understanding the evolution of vocabulary terms is important to avoid wrong assumptions about the modeling status of data published on the web, which may result in difficulties when querying the data from distributed sources.</p>
https://doi.org/10.5281/zenodo.1285833
oai:zenodo.org:1285833
Zenodo
https://doi.org/10.1007/978-3-319-93417-4_1
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.1285832
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial 4.0 International
https://creativecommons.org/licenses/by-nc/4.0/legalcode
ESWC, 15th European Semantic Web Conference, Heraklion, Crete, Greece, 3-7 June, 2018
Vocabulary changes, Terms adoption, Deprecated terms, Wikidata, DyLDO, BTC
Analyzing the Evolution of Vocabulary Terms and Their Impact on the LOD Cloud
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2843851
2020-01-24T19:26:22Z
user-moving-h2020
openaire_data
Abdel-Qader, Mohammad
Vagliano, Iacopo
Scherp, Ansgar
2019-05-15
<p>Reusing terms in the <a href="https://lod-cloud.net/">Linked Open Data cloud</a> results in a <a href="https://sites.google.com/view/nelo-evolution">Network of Linked vOcabularies (NeLO)</a>, where the nodes are the vocabularies that use at least one term from some other vocabulary and thus depend on each other. These dependencies become a problem when vocabularies in the network change, e.g., when terms are deprecated or deleted. In these cases, all dependent vocabularies in the network need to be updated. To address this shortcoming, we compute the state of NeLO from the available versions of the vocabularies over a period of more than 17 years.</p>
<p>Specifically, we provide the following statistics for each vocabulary and each year from 2001 to 2018, in three different formats (RDF/XML, JSON-LD, and CSV):</p>
<ul>
<li>in-degree;</li>
<li>out-degree;</li>
<li>degree;</li>
<li>eccentricity;</li>
<li>closeness centrality;</li>
<li>harmonic closeness centrality;</li>
<li>betweenness centrality;</li>
<li>authority;</li>
<li>hub;</li>
<li>PageRank.</li>
</ul>
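To illustrate what a few of the listed statistics mean, the sketch below computes in-degree, out-degree, degree, and a plain power-iteration PageRank on a hypothetical toy dependency graph (the vocabulary names are invented; this is not the released code or data). An edge A → B means vocabulary A reuses a term from vocabulary B.

```python
# Toy dependency graph: edge (A, B) = vocabulary A reuses a term of B.
edges = [("ex:vocA", "ex:vocB"), ("ex:vocA", "ex:vocC"), ("ex:vocB", "ex:vocC")]
nodes = sorted({n for e in edges for n in e})

out_deg = {n: sum(1 for s, _ in edges if s == n) for n in nodes}
in_deg = {n: sum(1 for _, t in edges if t == n) for n in nodes}
degree = {n: in_deg[n] + out_deg[n] for n in nodes}

def pagerank(nodes, edges, d=0.85, iters=100):
    """Plain power-iteration PageRank; dangling nodes spread rank uniformly."""
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for n in nodes:
            targets = [t for s, t in edges if s == n]
            if targets:
                share = d * pr[n] / len(targets)
                for t in targets:
                    nxt[t] += share
            else:  # dangling node: no outgoing edges
                for t in nodes:
                    nxt[t] += d * pr[n] / len(nodes)
        pr = nxt
    return pr

pr = pagerank(nodes, edges)
# ex:vocC is reused by both other vocabularies, so it accumulates the most rank.
```

On this toy graph, `ex:vocC` has the highest in-degree and PageRank, matching the intuition that widely reused vocabularies are the ones whose changes ripple furthest through the network.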
<p>The publication in the reference also contains further information on the analyzed vocabularies and on the methodology.</p>
<p>This dataset is provided for non-commercial use only. If you find this dataset useful in your work, please cite the publication in the reference.</p>
https://doi.org/10.5281/zenodo.2843851
oai:zenodo.org:2843851
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2843850
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial Share Alike 4.0 International
https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode
Linked Data
Network Analysis
Vocabularies Evolution over time
Ontologies
Statistics of the Network of Linked Vocabularies
info:eu-repo/semantics/other
oai:zenodo.org:1170039
2020-01-20T17:13:01Z
user-moving-h2020
Böschen, Falk
Strobel, Benjamin
Goos, Steffen
Liebers, Christoph
Rathje, Bastian
Scherp, Ansgar
2017-03-11
<p>Bar charts are widely used to visualize core results of experiments in research papers or to display statistics in news, media, and other reports. However, visualizations like bar charts are mostly manually designed, static presentations of data without the option of adaptation to a user's needs. But so far, it is unknown whether interactivity improves the understanding of charts. In this work, we compare static with dynamic bar charts, which offer an interactive stacking option. We assess the efficiency, effectiveness, and satisfaction when answering questions regarding the content of a bar chart. An eye-tracker is used to measure the efficiency. We have conducted a between-group experiment with 38 participants. While one group had to solve the aggregation tasks using stackable, i.e., interactive bar charts, the other group was limited to static visualizations. Even though new interactive features require familiarization, we found that the stacking feature significantly helps in completing the tasks with respect to efficiency, effectiveness, and satisfaction for bar charts of varying complexity.</p>
https://doi.org/10.1145/3020165.3022147
oai:zenodo.org:1170039
eng
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Interactive Bar Charts
Stacking
User Study
Eye-Tracking
Evaluation of the Comprehensiveness of Bar Charts with and without Stacking Functionality using Eye-Tracking
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:571111
2020-01-20T17:45:29Z
user-moving-h2020
user-eu
Angela Fessl
Stefan Thalmann
Viktoria Pammer-Schindler
2017-04-06
<p>Supporting users’ training, and thus improving their working productivity by increasing their search performance, is crucial in everyday work life. This holds true not only for knowledge workers like auditors, who need to stay up-to-date with laws and new compliance regulations, but also for production workers who need to improve their IT skills in order to keep pace with new technologies established within the ongoing digitalisation in Industry 4.0 (Kleindienst et al., 2016). In both settings, there is a substantial need for workers to find work-related relevant information at the right time. To achieve this, production workers with low IT literacy, as well as auditors who need to keep track of the continuous change of laws and rules, especially need informal learning opportunities on how to improve their search capabilities during their daily work. In this work, we present a first design concept of an adaptive and reflective training support widget. The widget aims at supporting workers in learning new search functionalities in order to enhance their search productivity at work.</p>
This publication is an extended abstract.
https://doi.org/10.5281/zenodo.571111
oai:zenodo.org:571111
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
adaptive training support
reflection intervention
reflection guidance
industry 4.0
Adaptive and Reflective Training Support for Improving Search Behaviour in Industry 4.0
info:eu-repo/semantics/other
oai:zenodo.org:2621207
2020-01-24T19:25:04Z
user-moving-h2020
openaire_data
user-eu
Aitor Apaolaza
2019-04-02
<p>This resource contains information about the laboratory study carried out as an initial evaluation of the MOVING platform.</p>
<p>It contains the information gathered from the questionnaires and the qualitative notes taken during the study for two different use cases. The design of the study and the results of the analysis can be found in the deliverable 1.3, "Initial evaluation, updated requirements and specifications".</p>
<p><a href="http://moving-project.eu/deliverables/">http://moving-project.eu/deliverables/</a></p>
https://doi.org/10.5281/zenodo.2621207
oai:zenodo.org:2621207
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.2621206
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
log data
Initial evaluation of the MOVING platform
info:eu-repo/semantics/other
oai:zenodo.org:1455214
2020-01-20T16:34:34Z
user-moving-h2020
Vagliano, Iacopo
Galke, Lukas
Florian, Mai
Scherp, Ansgar
2018-10-02
<p>The task of automatic playlist continuation is generating a list of recommended tracks that can be added to an existing playlist. By suggesting appropriate tracks, i.e., songs to add to a playlist, a recommender system can increase user engagement by making playlist creation easier, as well as extending listening beyond the end of the current playlist. The ACM Recommender Systems Challenge 2018 focuses on such a task. Spotify released a dataset of playlists, which includes a large number of playlists and associated track listings. Given a set of playlists from which a number of tracks have been withheld, the goal is predicting the missing tracks in those playlists. We participated in the challenge as the team <em>Unconscious Bias</em> and, in this paper, we present our approach. We extend adversarial autoencoders to the problem of automatic playlist continuation. We show how multiple input modalities, such as the playlist titles as well as track titles, artists, and albums, can be incorporated in the playlist continuation task.</p>
© Iacopo Vagliano | ACM 2018. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in RecSys Challenge '18 - Proceedings of the ACM Recommender Systems Challenge 2018, http://doi.org/10.1145/3267471.3267476.
https://doi.org/10.1145/3267471.3267476
oai:zenodo.org:1455214
eng
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
music recommender systems
neural networks
adversarial autoencoders
multi-modal recommender
automatic playlist continuation
Using Adversarial Autoencoders for Multi-Modal Automatic Playlist Continuation
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:579935
2020-01-20T17:07:22Z
user-moving-h2020
openaire
Köhler, Thomas
Pscheida, Daniela
Scherp, Ansgar
Koschtial, Claudia
Felden, Carsten
Neumann, Jörg
2016-02-29
<p>The authors present and discuss how an open innovation training platform is both a working environment for the quality analysis of online data collections with data mining methods, and a training environment with information, learning, and exchange offers for digital information management.</p>
https://doi.org/10.5281/zenodo.579935
oai:zenodo.org:579935
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
18th GENERAL ONLINE RESEARCH CONFERENCE, Dresden, 2-4 March 2016
open innovation, eScience, research methodology, digital information management
MOVING RESEARCH METHODOLOGY TOWARD ESCIENCE
info:eu-repo/semantics/lecture
oai:zenodo.org:61520
2020-01-20T16:13:44Z
user-moving-h2020
user-eu
Scherp, Ansgar
Pscheida, Daniela
Wiese, Michael
Nishioka, Chifumi
Köhler, Thomas
Maas, Annalouise
Collyda, Chrysa
Mezaris, Vasileios
2016-09-05
<p>MOVING investigates how to enable people from all societal sectors to fundamentally improve their information literacy by training them how to use, choose, reflect on, and evaluate data/text mining methods in connection with their daily research tasks. We believe that an extensive distribution of this type of information literacy education, in the sense of a data-savvy information professional, will have a decisive impact on the innovative capacity of European society.</p>
https://doi.org/10.5281/zenodo.61520
oai:zenodo.org:61520
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/
info:eu-repo/semantics/openAccess
Creative Commons Attribution Share Alike 4.0 International
https://creativecommons.org/licenses/by-sa/4.0/legalcode
ESWC, European Semantic Web Conference, Crete, Greece
EU networking session
semantic search
video exploration
Science 2.0
MOVING: Training Towards a Society of Data-savvy Information Professionals
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:162404
2020-01-20T16:59:25Z
user-moving-h2020
user-invid-h2020
user-eu
Markatopoulou, Foteini
Mezaris, Vasileios
Patras, Ioannis
2016-10-17
<p>In this work we propose a method that integrates multi-task learning (MTL) and deep learning. Our method appends an MTL-like loss to a deep convolutional neural network, in order to jointly learn the relations between tasks, and also incorporates the label correlations between pairs of tasks. We apply the proposed method on a transfer learning scenario, where our objective is to fine-tune the parameters of a network that has been originally trained on a large-scale image dataset for concept detection, so that it can be applied on a target video dataset and a corresponding new set of target concepts. We evaluate the proposed method for the video concept detection problem on the TRECVID 2013 Semantic Indexing dataset. Our results show that the proposed algorithm leads to better concept-based video annotation than existing state-of-the-art methods.</p>
https://doi.org/10.1145/2964284.2967271
oai:zenodo.org:162404
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ACM Multimedia, Amsterdam, October 2016
Concept detection; deep learning; video analysis
Deep Multi-task Learning with Label Correlation Constraint for Video Concept Detection
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:809680
2020-01-20T16:47:48Z
user-moving-h2020
user-invid-h2020
user-eu
Galanopoulos, Damianos
Markatopoulou, Foteini
Mezaris, Vasileios
Patras, Ioannis
2017-06-08
<p>Zero-example event detection is a problem where, given an event query as input but no example videos for training a detector, the system retrieves the most closely related videos. In this paper we present a fully-automatic zero-example event detection method that is based on translating the event description to a predefined set of concepts for which previously trained visual concept detectors are available. We adopt the use of Concept Language Models (CLMs), which is a method of augmenting semantic concept definition, and we propose a new concept-selection method for deciding on the appropriate number of the concepts needed to describe an event query. The proposed system achieves state-of-the-art performance in automatic zero-example event detection.</p>
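A toy sketch of the general idea behind such concept selection (not the paper's Concept Language Models; the concept vocabulary and similarity measure below are invented for illustration): score each pre-trained concept against the event query by a simple word-overlap similarity and keep the top-k concepts, whose visual detectors would then score the videos.

```python
# Hypothetical concept vocabulary with short textual definitions.
concepts = {
    "dog": ["dog", "animal", "pet"],
    "grooming": ["grooming", "washing", "brushing", "pet"],
    "car_repair": ["car", "engine", "repair"],
}

def score(query_words, concept_words):
    # Jaccard overlap as a stand-in for a learned text similarity.
    q, c = set(query_words), set(concept_words)
    return len(q & c) / len(q | c)

query = ["grooming", "a", "dog", "pet"]  # tokenized event query

# Rank all concepts by similarity to the query and keep the top-k.
ranked = sorted(concepts, key=lambda c: score(query, concepts[c]), reverse=True)
top_2 = ranked[:2]
```

Choosing k (the number of concepts kept) is exactly the decision the paper's concept-selection method automates per query.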
https://doi.org/10.1145/3078971.3079043
oai:zenodo.org:809680
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICMR 2017, ACM International Conference on Multimedia Retrieval 2017, Bucharest, Romania, 6-9 June 2017
Zero-example multimedia event detection
Video search
Query representation
Concept Language Models and Event-based Concept Number Selection for Zero-example Event Detection
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1143955
2020-01-20T17:31:27Z
user-moving-h2020
user-eu
Galke, Lukas Paul Achatius
Mai, Florian
Schelten, Alan
Brunsch, Dennis
Scherp, Ansgar
2017-12-06
<p>We conduct the first systematic comparison of automated semantic annotation based on either the full-text or only on the title metadata of documents. Apart from the prominent text classification baselines kNN and SVM, we also compare recent techniques of Learning to Rank and neural networks and revisit the traditional methods logistic regression, Rocchio, and Naive Bayes. Across three of our four datasets, the performance of the classifications using only titles reaches over 90% of the quality compared to the performance when using the full-text.</p>
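The title-vs-full-text comparison can be illustrated with a minimal sketch (toy data, not the paper's datasets or classifiers): a 1-nearest-neighbour classifier over bag-of-words cosine similarity, run once on titles only and once on full texts.

```python
from collections import Counter
from math import sqrt

def bow(text):
    """Bag-of-words term frequencies."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# (title, full text, label) -- invented examples.
train = [
    ("neural networks for image classification",
     "deep learning models classify images with convolutional neural networks", "ml"),
    ("parsing sparql queries",
     "query engines parse and optimise sparql over rdf graphs", "semweb"),
]

def predict(query, field):
    """1-NN prediction; field 0 = title only, field 1 = full text."""
    q = bow(query)
    return max(train, key=lambda d: cosine(q, bow(d[field])))[2]

label_from_title = predict("convolutional networks recognise images", 0)
label_from_text = predict(
    "we train convolutional neural networks to recognise images in benchmarks", 1)
```

On this toy example both sources agree, echoing the paper's finding that titles alone already carry much of the classification signal.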
https://doi.org/10.1145/3148011.3148039
oai:zenodo.org:1143955
eng
Zenodo
https://arxiv.org/abs/1705.05311
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
K-CAP 2017, Ninth International Conference on Knowledge Capture, Austin, Texas, 04-06 December 2017
Multi-label classification
Document analysis
Semantic annotation
Using Titles vs. Full-text as Source for Automated Semantic Document Annotation
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1157832
2020-01-20T17:35:07Z
user-moving-h2020
Vagliano, Iacopo
Monti, Diego
Morisio, Maurizio
2018-01-23
<p>Traditionally, recommender systems exploit user ratings to infer preferences. However, the growing popularity of social platforms has encouraged users to write textual reviews about liked items. These reviews represent a valuable source of non-trivial information that could improve users' decision processes. In this paper we propose a novel recommendation approach based on the semantic annotation of entities mentioned in user reviews and on the knowledge available in the Web of Data. We compared our recommender system with two baseline algorithms and a state-of-the-art Linked Data based approach. Our system provided more diverse recommendations with respect to the other techniques considered, while obtaining a better accuracy than the Linked Data based method.</p>
https://doi.org/10.5281/zenodo.1157832
oai:zenodo.org:1157832
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.1157831
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Poster-Recsys 2017, Poster Proceeding of ACM Recsys 2017, Como, Italy, 28 August 2017
SemRevRec: A Recommender System based on User Reviews and Linked Data
info:eu-repo/semantics/other
oai:zenodo.org:1409662
2020-01-20T16:53:26Z
user-moving-h2020
Beck, Tilman
Böschen, Falk
Scherp, Ansgar
2018-08-07
<p>The vast amount of scientific literature poses a challenge when one is trying to understand a previously unknown topic. Selecting a representative subset of documents that covers most of the desired content can solve this challenge by presenting the user a small subset of documents. We build on existing research on representative subset extraction and apply it in an information retrieval setting. Our document selection process consists of three steps: computation of the document representations, clustering, and selection of documents. We implement and compare two different document representations, two different clustering algorithms, and three different selection methods using a coverage and a redundancy metric. We execute our 36 experiments on two datasets, with 10 sample queries each, from different domains. The results show that there is no clear favorite and that we need to ask the question whether coverage and redundancy are sufficient for evaluating representative subsets.</p>
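The three-step pipeline (representation, clustering, selection) can be sketched as follows, with term-frequency vectors and a simple 2-means clustering standing in for the representations and algorithms actually compared in the paper (documents and parameters here are invented):

```python
import random
from collections import Counter
from math import sqrt

docs = [
    "graph index schema summary",
    "schema index for graph data",
    "neural network training loss",
    "deep neural network models",
]

def vec(text):
    # Step 1: document representation as term-frequency vectors.
    return Counter(text.split())

def dist(a, b):
    return sqrt(sum((a[w] - b[w]) ** 2 for w in set(a) | set(b)))

def kmeans(vectors, k, iters=10, seed=0):
    # Step 2: cluster the representations (plain k-means).
    random.seed(seed)
    centroids = random.sample(vectors, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[min(range(k), key=lambda i: dist(v, centroids[i]))].append(v)
        for i, members in enumerate(clusters):
            if members:  # new centroid = mean term frequencies
                merged = Counter()
                for m in members:
                    merged.update(m)
                centroids[i] = Counter({w: c / len(members) for w, c in merged.items()})
    return centroids, clusters

vectors = [vec(d) for d in docs]
centroids, clusters = kmeans(vectors, k=2)

# Step 3: selection -- per cluster, keep the member closest to the centroid.
representatives = [
    docs[vectors.index(min(members, key=lambda v: dist(v, c)))]
    for c, members in zip(centroids, clusters) if members
]
```

The coverage and redundancy metrics the paper questions would then be computed over `representatives` against the full document set.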
https://doi.org/10.1007/978-3-319-99133-7_19
oai:zenodo.org:1409662
eng
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Representative document selection
Document clustering
What to Read Next? Challenges and Preliminary Results in Selecting Representative Documents
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2553811
2020-01-24T19:24:55Z
user-moving-h2020
openaire_data
Sven Lüdeke
Till Blume
Ansgar Scherp
2019-01-25
<p>This repository contains the results of our experimental evaluation conducted with IMPULSE (<a href="https://github.com/t-blume/impulse">https://github.com/t-blume/impulse</a>). The datasets are subsets of the Billion Triple Challenge Dataset 2014 (<a href="http://km.aifb.kit.edu/projects/btc-2014/">http://km.aifb.kit.edu/projects/btc-2014/</a>) containing only bibliographic metadata.</p>
https://doi.org/10.5281/zenodo.2553811
oai:zenodo.org:2553811
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2548391
info:eu-repo/semantics/openAccess
GNU General Public License v2.0 only
https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html
IMPULSE: Integrate Public Metadata Underneath professional Library SErvices
info:eu-repo/semantics/other
oai:zenodo.org:2583130
2020-01-20T16:47:34Z
user-moving-h2020
user-eu
Galke, Lukas
Gerstenkorn, Gunnar
Scherp, Ansgar
2018-09-06
<p>We analyze the problem of response suggestion in a closed domain along a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods that strive to find similar, known contexts are preferable over parametric approaches from the conditioned-generation family, when the training data is limited. We, however, identify a specific representation learning approach that is competitive to the retrieval-based approaches despite the training data limitation.</p>
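A minimal sketch of the retrieval-based family the results favour (toy, hypothetical question-answer pairs, not the library's chat transcripts): given a new question, return the answer attached to the most similar known question under bag-of-words cosine similarity.

```python
from collections import Counter
from math import sqrt

# Hypothetical question-answer pairs mined from chat transcripts.
qa_pairs = [
    ("how do i renew a borrowed book",
     "Use the 'renew' button in your library account."),
    ("where can i find journals on economics",
     "Search the portal's journal section for economics titles."),
]

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest(question):
    """Retrieve the answer of the most similar known question."""
    q = bow(question)
    _, best_answer = max(qa_pairs, key=lambda p: cosine(q, bow(p[0])))
    return best_answer

answer = suggest("can i renew my book loan")
```

This "find a similar, known context" strategy is exactly what the paper reports as preferable to conditioned generation when training data is limited.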
This is a post-peer-review, pre-copyedit version of a paper published in Elloumi M, Granitzer M, Hameurlain A, Seifert C, Stein B, Tjoa A & Wagner R (eds.) Database and Expert Systems Applications. DEXA 2018. Communications in Computer and Information Science, 903. The final authenticated version is available online at: https://doi.org/10.1007/978-3-319-99133-7_18
https://doi.org/10.1007/978-3-319-99133-7_18
oai:zenodo.org:2583130
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
DEXA 2018, Database and Expert Systems Applications - DEXA 2018 International Workshops, BDMICS, BIOKDD, and TIR, Regensburg, Germany, 3-6 September 2018
conversational agents
neural networks
representation learning
A Case Study of Closed-Domain Response Suggestion with Limited Training Data
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2600798
2020-01-20T13:47:14Z
user-moving-h2020
user-eu
Ahmed Saleh
Ansgar Scherp
2018-11-17
<p>Knowing what is increasing in popularity is important to researchers, news organizations, auditors, government entities, and more. In particular, knowledge of trending topics provides us with information about what people are attracted to and what they think is noteworthy. Yet detecting trending topics from a set of texts is a difficult task, requiring detectors to learn trending patterns while simultaneously making predictions.</p>
<p>In this paper, we propose a deep learning model architecture for the challenging task of trend detection and forecasting. The model architecture aims to learn and attend to the patterns of the trending values. Our preliminary results show that our model detects trending topics with high accuracy.</p>
This is the author's version of the work. It is posted here for your personal use, not for redistribution. The definitive Version of Record was published in the proceedings of the IEEE International Conference on Data Mining Workshops (ICDMW), https://doi.org/10.1109/ICDMW.2018.00222.
https://doi.org/10.1109/ICDMW.2018.00222
oai:zenodo.org:2600798
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICDMW, IEEE International Conference on Data Mining Workshops, 17-19 November 2018
Deep learning
Attend2trend: Attention Model for Real-Time Detecting and Forecasting of Trending Topics
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1135049
2020-01-20T17:21:52Z
user-moving-h2020
user-eu
Christos Tzelepis
Vasileios Mezaris
Ioannis Patras
2017-12-29
<p>In this paper, we propose a maximum margin classifier that deals with uncertainty in data input. More specifically, we reformulate the SVM framework such that each training example can be modeled by a multi-dimensional Gaussian distribution described by its mean vector and its covariance matrix -- the latter modeling the uncertainty. We address the classification problem and define a cost function that is the expected value of the classical SVM cost when data samples are drawn from the multi-dimensional Gaussian distributions that form the set of the training examples. Our formulation approximates the classical SVM formulation when the training examples are isotropic Gaussians with variance tending to zero. We arrive at a convex optimization problem which we solve efficiently in the primal form using a stochastic gradient descent approach. The resulting classifier, which we name SVM with Gaussian Sample Uncertainty (SVM-GSU), is tested on synthetic data and five publicly available and popular datasets; namely, the MNIST, WDBC, DEAP, TV News Channel Commercial Detection, and TRECVID MED datasets. Experimental results verify the effectiveness of the proposed method.</p>
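The paper derives a convex cost and solves it in closed form before applying SGD; the sketch below is only a crude Monte Carlo illustration of the underlying idea (toy 1-D data, invented parameters, not the SVM-GSU formulation): approximate the expected hinge loss of each uncertain example by sampling from its Gaussian, then take subgradient steps on w and b.

```python
import random

random.seed(0)
# Toy 1-D data: (mean, variance, label); the variance models input uncertainty.
data = [(-2.0, 0.1, -1), (-1.5, 0.2, -1), (1.5, 0.2, 1), (2.0, 0.1, 1)]

w, b = 0.0, 0.0
lr, lam, n_samples = 0.05, 0.01, 8
for epoch in range(200):
    for mu, var, y in data:
        # Draw samples from N(mu, var) and average the hinge subgradients,
        # approximating the expectation of the classical SVM cost.
        gw = gb = 0.0
        for _ in range(n_samples):
            x = random.gauss(mu, var ** 0.5)
            if y * (w * x + b) < 1:  # margin violated for this sample
                gw += -y * x
                gb += -y
        w -= lr * (gw / n_samples + lam * w)
        b -= lr * (gb / n_samples)

def predict(x):
    return 1 if w * x + b >= 0 else -1
```

With isotropic Gaussians whose variance tends to zero, every sample collapses onto the mean and the procedure reduces to ordinary hinge-loss SGD, mirroring the limit behaviour stated in the abstract.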
https://doi.org/10.1109/TPAMI.2017.2772235
oai:zenodo.org:1135049
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
IEEE Transactions on Pattern Analysis and Machine Intelligence, (2017-12-29)
Classification
Convex optimization
Gaussian anisotropic uncertainty
Large margin methods
Learning with uncertainty
Statistical learning theory
Linear Maximum Margin Classifier for Learning from Uncertain Data
info:eu-repo/semantics/article
oai:zenodo.org:61386
2020-01-20T16:17:06Z
user-moving-h2020
user-eu
Nishioka, Chifumi
Scherp, Ansgar
2016-05-29
<p>The Linked Open Data (LOD) cloud is expanding continuously. Entities appear, change, and disappear over time. However, relatively little is known about the dynamics of the entities, i.e., the characteristics of their temporal evolution. In this paper, we employ clustering techniques over the dynamics of entities to determine common temporal patterns. We define an entity as an RDF resource together with its attached RDF types and properties. The quality of the clusterings is evaluated using entity features such as the entities' properties, RDF types, and pay-level domain. In addition, we investigate to what extent entities that share a feature value change together over time. As dataset, we use weekly LOD snapshots over a period of more than three years provided by the Dynamic Linked Data Observatory. Insights into the dynamics of entities on the LOD cloud have strong practical implications for any application requiring fresh caches of LOD. Applications range from determining crawling strategies for LOD and caching SPARQL queries to programming against LOD and recommending vocabularies for reuse.</p>
https://doi.org/10.5281/zenodo.61386
oai:zenodo.org:61386
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
PROFILES, Proceedings of the 3rd International Workshop on Dataset PROFIling and fEderated Search for Linked Data ('16) co-located with the 13th ESWC 2016 Conference, Anissaras, Greece, 30 May 2016
Information-theoretic Analysis of Entity Dynamics on the Linked Open Data Cloud
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:438641
2020-01-24T19:26:18Z
user-moving-h2020
openaire_data
Galanopoulos, Damianos
Markatopoulou, Foteini
Mezaris, Vasileios
2017-03-28
<p>We provide concept detection scores for the MED16train dataset, which is used in the TRECVID Multimedia Event Detection (MED) task. First, each video is decoded into a set of keyframes at fixed temporal intervals (2 keyframes per second). Then, we calculated concept detection scores for the two following concept sets: i) 487 sport-related concepts from the YouTube Sports-1M Dataset [1] and ii) 345 TRECVID SIN concepts [3]. The scores have been generated as follows:<br>
1) For the 487 concepts of the Sports-1M Dataset, a GoogLeNet network [4] originally trained on 5055 ImageNet concepts was fine-tuned, following the extension strategy of [2] with one extension layer of dimension 128.<br>
2) For the 345 TRECVID SIN concepts, a GoogLeNet network [4] pre-trained on 5055 ImageNet concepts was fine-tuned on these concepts, again following the extension strategy of [2] with one extension layer of dimension 1024.</p>
<p>After unpacking the compressed file, two different folders can be found, namely "Prob_sports_MED16train" and "Prob_SIN_MED16train", one for each concept set. We provide one file for every video of the MED16train dataset for each concept set. Each file consists of N columns (where N = 345 for TRECVID SIN and N = 487 for the Sports-1M Dataset) and M rows (where M is the number of extracted keyframes for the corresponding video). Each column corresponds to a different concept, with all concept scores being in the range [0,1]. The higher the score, the more likely it is that the corresponding concept appears in the keyframe. Two additional files are provided: "sports_487_Classes.txt" and "SIN_345_Classes.txt" indicate the order of the concepts used in the concept score files.</p>
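The file layout described above (one score file per video, with M keyframe rows by N concept columns, and a separate *_Classes.txt file giving the concept order) can be sketched as follows. The function name and toy data are illustrative only and are not part of the dataset distribution:

```python
def top_concepts(score_row, concept_names, k=3):
    """Return the k highest-scoring (concept, score) pairs for one keyframe row.

    score_row     -- one row of a score file: N values in [0, 1], one per concept
    concept_names -- concept labels, in the order given by the *_Classes.txt file
    """
    # Sort column indices by descending score, keep the top k.
    order = sorted(range(len(score_row)), key=lambda i: score_row[i], reverse=True)
    return [(concept_names[i], score_row[i]) for i in order[:k]]

# Toy stand-in for one row of a real score file (here: 1 keyframe, 4 concepts).
names = ["airplane", "beach", "crowd", "dog"]
row = [0.10, 0.80, 0.30, 0.05]
print(top_concepts(row, names, k=2))  # [('beach', 0.8), ('crowd', 0.3)]
```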
<p>[1] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar and L. Fei-Fei, "Large-scale video classification with convolutional neural networks", In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1725-1732, 2014.<br>
[2] N. Pittaras, F. Markatopoulou, V. Mezaris and I. Patras, "Comparison of Fine-tuning and Extension Strategies for Deep Convolutional Neural Networks", Proc. 23rd Int. Conf. on MultiMedia Modeling (MMM'17), Reykjavik, Iceland, Springer LNCS vol. 10132, pp. 102-114, Jan. 2017.<br>
[3] G. Awad, C. Snoek, A. Smeaton, and G. Quénot, "TRECVid semantic indexing of video: a 6-year retrospective", ITE Transactions on Media Technology and Applications, 4 (3). pp. 187-208, 2016.<br>
[4] C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke and A. Rabinovich, "Going deeper with convolutions", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.</p>
Linked publications: (1) N. Pittaras, F. Markatopoulou, V. Mezaris, I. Patras, "Comparison of Fine-tuning and Extension Strategies for Deep Convolutional Neural Networks", Proc. 23rd Int. Conf. on MultiMedia Modeling (MMM'17), Reykjavik, Iceland, Jan. 2017 (2) F. Markatopoulou, A. Moumtzidou, D. Galanopoulos, T. Mironidis, V. Kaltsa, A. Ioannidou, S. Symeonidis, K. Avgerinakis, S. Andreadis, I. Gialampoukidis, S. Vrochidis, A. Briassouli, V. Mezaris, I. Kompatsiaris, I. Patras, "ITI-CERTH participation to TRECVID 2016", In TRECVID 2016 Workshop, Gaithersburg, MD, USA, 2016.
https://doi.org/10.5281/zenodo.438641
oai:zenodo.org:438641
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
TRECVID MED task
Multimedia event detection
concept detection
event detection
video analysis
Concept detection scores for the MED16train dataset (TRECVID MED task)
info:eu-repo/semantics/other
oai:zenodo.org:2539291
2020-01-20T17:24:38Z
user-moving-h2020
Stelios Andreadis
Anastasia Moumtzidou
Damianos Galanopoulos
Foteini Markatopoulou
Konstantinos Apostolidis
Thanassis Mavropoulos
Ilias Gialampoukidis
Stefanos Vrochidis
Vasileios Mezaris
Ioannis Kompatsiaris
Ioannis Patras
2019-01-08
<p>This paper presents VERGE, an interactive video retrieval engine that enables browsing and searching video content. The system implements various retrieval modalities, such as visual or textual search, concept detection and clustering, as well as multimodal fusion and a reranking capability. All results are displayed in a graphical user interface in an efficient and user-friendly manner.</p>
https://doi.org/10.5281/zenodo.2539291
oai:zenodo.org:2539291
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2539290
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
MMM 2019, 25th International Conference on MultiMedia Modeling, Thessaloniki, Greece, 08-11 January 2019
VERGE in VBS 2019
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1286796
2020-01-20T16:34:42Z
user-moving-h2020
user-eu
Mai, Florian
Galke, Lukas
Scherp, Ansgar
2018-06-11
<p>For (semi-)automated subject indexing systems in digital libraries, it is often more practical to use metadata such as the title of a publication instead of the full-text or the abstract. Therefore, it is desirable to have good text mining and text classification algorithms that operate well already on the title of a publication. So far, the classification performance on titles is not competitive with the performance on the full-texts if the same number of training samples is used for training. However, it is much easier to obtain title data in large quantities and to use it for training than full-text data. In this paper, we investigate the question of how models obtained from training on increasing amounts of title training data compare to models from training on a constant number of full-texts. We evaluate this question on a large-scale dataset from the medical domain (PubMed) and from economics (EconBiz). In these datasets, the titles and annotations of millions of publications are available, and they outnumber the available full-texts by a factor of 20 and 15, respectively. To exploit these large amounts of data to their full potential, we develop three strong deep learning classifiers and evaluate their performance on the two datasets. The results are promising. On the EconBiz dataset, all three classifiers outperform their full-text counterparts by a large margin. The best title-based classifier outperforms the best full-text method by 9.4%. On the PubMed dataset, the best title-based method almost reaches the performance of the best full-text classifier, with a difference of only 2.9%.</p>
https://doi.org/10.1145/3197026.3197039
oai:zenodo.org:1286796
eng
ACM
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial 4.0 International
https://creativecommons.org/licenses/by-nc/4.0/legalcode
JCDL '18, 18th ACM/IEEE on Joint Conference on Digital Libraries, Fort Worth, Texas, USA, 3-6 June 2018
Text Classification
Deep Learning
Digital Libraries
Using Deep Learning for Title-Based Semantic Subject Indexing to Reach Competitive Performance to Full-Texts
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:61521
2020-01-20T16:14:35Z
user-moving-h2020
user-eu
Abdel-Qader, Mohammad
Scherp, Ansgar
2016-07-19
<p>We analyse the evolution of vocabularies on the Linked Open Data cloud. Based on recent statistics of the LOD cloud, we have selected the twelve most dominant vocabularies in terms of their use in different pay-level domains. The number of versions we found for these vocabularies ranges from 2 to 11. While some ontologies have existed for more than 10 years (e.g., FOAF), others have only been online for a few years (like DCAT). Our analysis shows that many changes occurred on annotation properties. This reflects a need for more clarification of the terms, especially in early versions of the vocabularies. The majority of changes in the vocabularies are due to changes in other, imported vocabularies. Thus, there is a co-evolution of different vocabularies. This insight has practical impact for ontology engineers: they not only need to consider the evolution of the vocabularies they directly use, but also of those they import and indirectly depend on.</p>
https://doi.org/10.5281/zenodo.61521
oai:zenodo.org:61521
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution Share Alike 4.0 International
https://creativecommons.org/licenses/by-sa/4.0/legalcode
PROFILES, 3rd International Workshop on Dataset PROFIling and fEderated Search for Linked Data, Crete, Greece
LOD analysis
evolution
Linked Open Data
Qualitative Analysis of Vocabulary Evolution on the Linked Open Data Cloud
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2640890
2020-01-20T17:34:02Z
user-moving-h2020
Yu
2018-09-03
<p>As we witness the growing popularity of online learning, we address the problem of knowing whether users are actually learning. The traditional assessment approaches involve tests, assignments, and peer assessments. We explore whether there is a way to measure learning and personalise the user learning experience in an unobtrusive manner. My PhD proposes using data-driven methods to measure learning by mining user interaction data to identify regularities that could be indicators of learning.</p>
https://doi.org/10.5281/zenodo.2640890
oai:zenodo.org:2640890
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2640889
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Navigation behaviour
Knowledge acquisition
MOOC
Inferring knowledge acquisition through Web navigation behaviours
info:eu-repo/semantics/other
oai:zenodo.org:2547476
2020-01-20T17:15:28Z
user-moving-h2020
user-eu
Saleh, Ahmed
Beck, Tilman
Galke, Lukas
Scherp, Ansgar
2018-11-15
<p>While there are many studies on information retrieval models using full-text, there are presently no comparison studies of full-text retrieval vs. retrieval over only the titles of documents. On the one hand, the full-text of documents such as scientific papers is not always available due to, e.g., copyright policies of academic publishers. On the other hand, conducting a search based on titles alone has strong limitations. Titles are short and therefore may not contain enough information to yield satisfactory search results. In this paper, we compare different retrieval models regarding their search performance on the full-text vs. only the titles of documents. We use several datasets, including the three digital library datasets EconBiz, IREON, and PubMed. The results show that it is possible to build effective title-based retrieval models that provide results competitive with full-text retrieval. The average evaluation results of the best title-based retrieval models are only 3% lower than those of the best full-text-based retrieval models.</p>
This is the author's version of the work. It is posted here for your personal use, not for redistribution. The definitive Version of Record was published in the proceedings of the International Conference on Asian Digital Libraries ICADL 2018, https://doi.org/10.1007/978-3-030-04257-8_30.
https://doi.org/10.1007/978-3-030-04257-8_30
oai:zenodo.org:2547476
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICADL 2018, International Conference on Asian Digital Libraries, Hamilton, New Zealand, 19-22 November 2018
Information Retrieval
Learning to Rank
Deep Learning
Performance Comparison of Ad-hoc Retrieval Models over Full-text vs. Titles of Documents
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:2539283
2020-01-20T16:48:52Z
user-moving-h2020
I. Vagliano
A. Fessl
F. Guenther
T. Koehler
V. Mezaris
A. Saleh
A. Scherp
I. Simic
2019-01-08
<p>The MOVING platform enables its users to improve their information literacy by training how to exploit data and text mining methods in their daily research tasks. In this paper, we show how it can support researchers in various tasks, and we introduce its main features, such as text and video retrieval and processing, advanced visualizations, and the technologies to assist the learning process.</p>
https://doi.org/10.5281/zenodo.2539283
oai:zenodo.org:2539283
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.2539282
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
MMM 2019, 25th International Conference on MultiMedia Modeling, Thessaloniki, Greece, 08-11 January 2019
Technology enhanced learning
Information retrieval
Text and video analysis
Recommender systems
Training Researchers with the MOVING Platform
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1135101
2020-01-20T17:03:40Z
user-moving-h2020
user-eu
Wenxuan Mou
Christos Tzelepis
Vasileios Mezaris
2017-06-30
<p>Automatic understanding and analysis of groups has attracted increasing attention in the vision and multimedia communities in recent years. However, little attention has been paid to the automatic analysis of group membership, i.e., recognizing which group the individual in question is part of. This paper presents a novel two-phase Support Vector Machine (SVM) based specific recognition model that is learned using an optimized generic recognition model. We conduct a set of experiments using a database collected to study group analysis from multimodal cues while each group (i.e., four participants together) was watching a number of long movie segments. Our experimental results show that the proposed specific recognition model (52%) outperforms the generic recognition model trained across all different videos (35%) and the independent recognition model trained directly on each specific video (33%) using linear SVM.</p>
https://doi.org/10.1109/FG.2017.69
oai:zenodo.org:1135101
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
FG 2017, 12th IEEE International Conference on Automatic Face & Gesture Recognition, Washington, DC, USA, 30 May - 3 June 2017
Generic to Specific Recognition Models for Membership Analysis in Group Videos
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:3135194
2020-01-20T17:33:09Z
user-moving-h2020
openaire
Koehler, Thomas
Skulimowski, Andrzej
2019-05-22
<p>Creativity stimulation and creativity support systems (CSS) have been rapidly gaining importance in learning and research activities. However, the overall outcome still lags behind other areas of creativity studies, even though the strong influence of recent developments in the context of digitization and Web 2.0 has been discussed in scholarly publications.<br>
In this paper, the authors intend to link creative research scenario recommendation and its impact on research and innovation processes with previous results regarding creativity stimulation in online learning systems and e-science platforms. The corresponding open innovation platform is currently being developed as part of a trans-European Horizon 2020 flagship project.<br>
Even though scientists' awareness of the respective potential of digitization and Web 2.0 technologies is still insufficient, the first underlying assumption of this research is that future recommendation engines designed for open learning and innovation platforms will be capable of measuring users' creative engagement in the decision-making process.</p>
<p>The authors are grateful for the support of the research project "Training towards a society of data-savvy Information professionals to enable open leadership Innovation" (MOVING), financed by the EC under contract No. 693092.</p>
The proceedings published by KICSS 2017 are non-public and available only to the conference participants.
https://doi.org/10.5281/zenodo.3135194
oai:zenodo.org:3135194
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.3135193
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
KICSS2017, 12th International Conference on Knowledge, Information and Creativity Support Systems 2017, Nagoya, Japan, 09.-11.11.2017
digitization, online creativity, e-science scenarios, global expert systems, open innovation platforms, recommenders
Increasing research creativity through open innovation platforms.
info:eu-repo/semantics/lecture
oai:zenodo.org:456320
2020-01-20T16:38:05Z
user-moving-h2020
user-eu
Günther, Franziska
Heinz, Matthias
Fessl, Angela
2017-03-31
<p>This extended abstract describes the initial framework of the learning environment of the MOVING platform. The aim of the MOVING platform is, among others, to train workers to use data and text mining methods to improve their information literacy, which is a core competency in Industry 4.0. The framework combines working and training on the platform with formal and informal learning options to reach this aim.</p>
https://doi.org/10.5281/zenodo.456320
oai:zenodo.org:456320
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.794545
info:eu-repo/semantics/openAccess
Creative Commons Zero v1.0 Universal
https://creativecommons.org/publicdomain/zero/1.0/legalcode
Professionelles Wissensmanagement, Karlsruhe, Germany, April 5-7, 2017
learning environment, information literacy, industry 4.0
Moving the Industry 4.0
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1157871
2020-01-20T17:26:54Z
user-moving-h2020
Vagliano, Iacopo
Monti, Diego
Scherp, Ansgar
Morisio, Maurizio
2018-01-23
<p>Nowadays, most recommender systems exploit user-provided ratings to infer their preferences. However, the growing popularity of social and e-commerce websites has encouraged users to also share comments and opinions through textual reviews. In this paper, we introduce a new recommendation approach which exploits the semantic annotation of user reviews to extract useful and non-trivial information about the items to recommend. It also relies on the knowledge freely available in the Web of Data, notably in DBpedia and Wikidata, to discover other resources connected with the annotated entities. We evaluated our approach in three domains, using both DBpedia and Wikidata. The results showed that our solution provides a better ranking than another recommendation method based on the Web of Data, while it improves in novelty with respect to traditional techniques based on ratings.</p>
Extended technical report available at https://arxiv.org/abs/1709.09973
https://doi.org/10.1145/3148011.3148035
oai:zenodo.org:1157871
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial 4.0 International
https://creativecommons.org/licenses/by-nc/4.0/legalcode
K-CAP 2017, K-CAP 2017 Proceedings of the Knowledge Capture Conference, Austin, TX, USA, 4-6 December 2017
DBpedia, Linked Data, Recommender Systems, Semantic Annotation, Semantic Web, User Reviews, Web of Data, Wikidata
Content Recommendation through Semantic Annotation of User Reviews and Linked Data
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:809700
2020-01-20T17:04:56Z
user-moving-h2020
user-emma-h2020
user-invid-h2020
user-eu
Collyda, Chrysa
Apostolidis, Evlampios
Pournaras, Alexandros
Markatopoulou, Foteini
Mezaris, Vasileios
Patras, Ioannis
2017-06-08
<p>This paper presents the VideoAnalysis4ALL tool that supports the automatic fragmentation and concept-based annotation of videos, and the exploration of the annotated video fragments through an interactive user interface. The developed web application decomposes the video into two different granularities, namely shots and scenes, and annotates each fragment by evaluating the existence of a number (several hundreds) of high-level visual concepts in the keyframes extracted from these fragments. Through this analysis, the tool enables the identification and labeling of semantically coherent video fragments, while its user interfaces allow the discovery of these fragments with the help of human-interpretable concepts. The integrated state-of-the-art video analysis technologies perform very well and, by exploiting the processing capabilities of multi-thread / multi-core architectures, reduce the time required for analysis to approximately one third of the video's duration, thus making the analysis three times faster than real-time processing.</p>
https://doi.org/10.1145/3078971.3079015
oai:zenodo.org:809700
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/emma-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICMR 2017, ACM International Conference on Multimedia Retrieval 2017, Bucharest, Romania, 6-9 June 2017
Web-based on-line video analysis
Video segmentation
Video annotation
Video content exploration
VideoAnalysis4ALL: An On-line Tool for the Automatic Fragmentation and Concept-based Annotation, and the Interactive Exploration of Videos
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1255610
2020-01-20T16:47:26Z
user-moving-h2020
Backes, Tobias
2018-06-03
<p>This work addresses the problem of author name homonymy in the Web of Science. Aiming for an efficient, simple, and straightforward solution, we introduce a novel probabilistic similarity measure for author name disambiguation based on feature overlap. Using the researcher-ID available for a subset of the Web of Science, we evaluate the application of this measure in the context of agglomeratively clustering author mentions. We focus on a concise evaluation that shows clearly for which problem setups, and at which time during the clustering process, our approach works best. In contrast to most other works in this field, we are skeptical about the performance of author name disambiguation methods in general and compare our approach to the trivial single-cluster baseline. Our results are presented separately for each correct clustering size, as we can explain that, when treating all cases together, the trivial baseline and more sophisticated approaches are hardly distinguishable in terms of evaluation results. Our model shows state-of-the-art performance for all correct clustering sizes without any discriminative training and with tuning only one convergence parameter.</p>
https://doi.org/10.1145/3197026.3197036
oai:zenodo.org:1255610
eng
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
JCDL '18, The 18th ACM/IEEE Joint Conference on Digital Libraries, Fort Worth, TX, USA, June 3–7, 2018
Author Disambiguation
Probabilities
Agglomerative Clustering
Effective Unsupervised Author Disambiguation with Relative Frequencies
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1313577
2020-01-20T17:31:26Z
user-moving-h2020
Galke, Lukas
Mai, Florian
Vagliano, Iacopo
Scherp, Ansgar
2018-07-11
<p>We present multi-modal adversarial autoencoders for recommendation and evaluate them on two different tasks: citation recommendation and subject label recommendation. We analyze the effects of adversarial regularization, sparsity, and different input modalities. By conducting 408 experiments, we show that adversarial regularization consistently improves the performance of autoencoders for recommendation. We demonstrate, however, that the two tasks differ in the semantics of item co-occurrence in the sense that item co-occurrence resembles relatedness in case of citations, yet implies diversity in case of subject labels. Our results reveal that supplying the partial item set as input is only helpful, when item co-occurrence resembles relatedness. When facing a new recommendation task it is therefore crucial to consider the semantics of item co-occurrence for the choice of an appropriate model.</p>
© Lukas Galke | ACM 2018. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in UMAP '18- Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization, http://dx.doi.org/10.1145/3209219.3209236.
https://doi.org/10.1145/3209219.3209236
oai:zenodo.org:1313577
eng
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
UMAP 2018, User Modeling, Adaptation and Personalization, Singapore, Singapore, 8-11 July, 2018
Recommender Systems
Neural Networks
Learning from implicit feedback
Adversarial Autoencoders
Multi-modal
Sparsity
Multi-Modal Adversarial Autoencoders for Recommendations of Citations and Subject Labels
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:61391
2020-01-20T17:11:26Z
user-moving-h2020
Nishioka, Chifumi
Scherp, Ansgar
2016-06-19
<p>So far, it is unclear how different factors of a scientific publication recommender system based on users' tweets influence recommendation performance. We examine three such factors, namely profiling method, temporal decay, and richness of content. Regarding profiling, we compare CF-IDF, which replaces terms in TF-IDF by semantic concepts; HCF-IDF, a novel hierarchical variant of CF-IDF; and topic modeling. As temporal decay functions, we apply sliding window and exponential decay. In terms of richness of content, we compare recommendations using both full-texts and titles of publications versus using only titles. Overall, the three factors yield twelve recommendation strategies. We have conducted an online experiment with 123 participants and compared the strategies in a within-group design. The best recommendations are achieved by the strategy combining CF-IDF, a sliding window, and full-texts. However, the strategies using the novel HCF-IDF profiling method achieve similar results using just the titles of the publications. Therefore, HCF-IDF can make good recommendations when only short and sparse data is available.</p>
https://doi.org/10.1145/2910896.2910898
oai:zenodo.org:61391
Zenodo
https://zenodo.org/communities/moving-h2020
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
JCDL, 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, Newark, NJ, USA, June 19-23 2016
recommender system
social media
user profiling
Profiling vs. Time vs. Content: What Does Matter for Top-k Publication Recommendation Based on Twitter Profiles?
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1143963
2020-01-20T16:29:53Z
user-moving-h2020
Galke, Lukas Paul Achatius
Saleh, Ahmed
Scherp, Ansgar
2017-09-29
<p>We assess the suitability of word embeddings for practical information retrieval scenarios. We assume that users issue ad-hoc short queries, and we return the first twenty retrieved documents after applying a boolean matching operation between the query and the documents. We compare the performance of several techniques that leverage word embeddings in the retrieval models to compute the similarity between the query and the documents, namely word centroid similarity, paragraph vectors, Word Mover's distance, as well as our novel inverse document frequency (IDF) re-weighted word centroid similarity. We evaluate the performance using the ranking metrics mean average precision, mean reciprocal rank, and normalized discounted cumulative gain. Additionally, we inspect the retrieval models' sensitivity to document length by using either only the title or the full-text of the documents for the retrieval task. We conclude that word centroid similarity is the best competitor to state-of-the-art retrieval models. It can be further improved by re-weighting the word frequencies with IDF before aggregating the respective word vectors of the embedding. The proposed cosine similarity of IDF re-weighted word vectors is competitive with the TF-IDF baseline and even outperforms it in the news domain by a relative 15%.</p>
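The idea behind the IDF re-weighted word centroid similarity can be sketched as follows. The function names, the toy two-dimensional embeddings, and the smoothed IDF variant below are our own illustrative choices, not the paper's exact formulation: each text is represented by the IDF-weighted average of its word vectors, and documents are ranked by cosine similarity to the query centroid.

```python
import math
from collections import Counter

def idf(term, docs):
    """Smoothed inverse document frequency over tokenized documents (an assumed variant)."""
    n = sum(1 for d in docs if term in d)
    return math.log((1 + len(docs)) / (1 + n)) + 1

def centroid(tokens, embeddings, docs):
    """IDF-weighted average of the word vectors of the given tokens."""
    dims = len(next(iter(embeddings.values())))
    vec, weight = [0.0] * dims, 0.0
    for t, count in Counter(tokens).items():
        if t in embeddings:
            w = count * idf(t, docs)
            vec = [v + w * e for v, e in zip(vec, embeddings[t])]
            weight += w
    return [v / weight for v in vec] if weight else vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Toy 2-d embeddings and a tiny corpus of tokenized documents.
emb = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "stock": [0.0, 1.0]}
docs = [["cat", "dog"], ["stock"], ["dog", "stock"]]

# Rank documents by cosine similarity of their centroid to the query centroid.
q = centroid(["cat"], emb, docs)
ranked = sorted(range(len(docs)),
                key=lambda i: cosine(q, centroid(docs[i], emb, docs)),
                reverse=True)
print(ranked)  # [0, 2, 1]: the "cat dog" document matches the query "cat" best
```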
https://doi.org/10.5281/zenodo.1143963
oai:zenodo.org:1143963
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/10.5281/zenodo.1143962
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
INFORMATIK 2017, 47. Jahrestagung der Gesellschaft für Informatik, Chemnitz, Germany, September 25-29, 2017
Word embeddings
Document representations
Information Retrieval
Evaluating the Impact of Word Embeddings on Similarity Scoring in Practical Information Retrieval
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:439045
2020-01-20T16:54:22Z
user-moving-h2020
openaire
Thomas Köhler
Ansgar Scherp
Sabrina Herbst
Michael Wiese
Vasileios Mezaris
2016-05-03
<p>MOVING is an EU H2020 project that will enable users from all societal sectors (companies, universities, public administration) to fundamentally improve their information literacy by training them in how to choose, use and evaluate data mining methods in connection with their daily research tasks, and to become data-savvy information professionals. In line with the idea of MOVING, we investigate specifications in relation to user needs in data-driven online research. To this end, the authors deal with the idea of an online-based research methodology as well as its sector-specific usage patterns.</p>
https://doi.org/10.5281/zenodo.439045
oai:zenodo.org:439045
Zenodo
https://zenodo.org/communities/moving-h2020
https://doi.org/
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
International Science 2.0 Conference and EEXCESS Final Conference, Cologne, Germany, 3-4 May 2016
MOVING EU H2020, Information literacy
Data driven online research. Potential specifications in relation to user needs
info:eu-repo/semantics/conferencePoster
oai:zenodo.org:809672
2020-01-20T17:36:30Z
user-moving-h2020
user-invid-h2020
user-eu
Markatopoulou, Foteini
Galanopoulos, Damianos
Mezaris, Vasileios
Patras, Ioannis
2017-06-08
<p>This paper presents a fully-automatic method that combines video concept detection and textual query analysis in order to solve the problem of ad-hoc video search. We present a set of NLP steps that cleverly analyse different parts of the query in order to convert it to related semantic concepts, we propose a new method for transforming concept-based keyframe and query representations into a common semantic embedding space, and we show that our proposed combination of concept-based representations with their corresponding semantic embeddings results in improved video search accuracy. Our experiments on the TRECVID AVS 2016 and Video Search 2008 datasets show the effectiveness of the proposed method compared to other similar approaches.</p>
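The common embedding space mentioned in the abstract can be loosely sketched as follows. This is only a hypothetical illustration of the general idea, not the paper's learned transformation: both a keyframe (represented by concept-detector scores) and a query (represented by concepts extracted from it) are projected into word-vector space as score-weighted sums of concept-label vectors, and compared by cosine similarity. All names and vectors below are toy assumptions.

```python
import math

def concept_embedding(concept_scores, concept_vectors):
    """Project a concept-score dict (keyframe annotation or analysed query)
    into the embedding space as a score-weighted sum of concept-label vectors,
    L2-normalized so the dot product below is a cosine similarity."""
    dim = len(next(iter(concept_vectors.values())))
    out = [0.0] * dim
    for concept, score in concept_scores.items():
        vec = concept_vectors.get(concept)
        if vec:
            for i, v in enumerate(vec):
                out[i] += score * v
    norm = math.sqrt(sum(x * x for x in out))
    return [x / norm for x in out] if norm else out

def similarity(query_scores, keyframe_scores, concept_vectors):
    """Cosine similarity of query and keyframe in the shared embedding space."""
    q = concept_embedding(query_scores, concept_vectors)
    k = concept_embedding(keyframe_scores, concept_vectors)
    return sum(a * b for a, b in zip(q, k))
```

A keyframe whose detector scores emphasise the query's concepts then scores higher than one that does not, which is the intuition behind combining concept-based representations with their semantic embeddings.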
https://doi.org/10.1145/3078971.3079041
oai:zenodo.org:809672
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
ICMR 2017, ACM International Conference on Multimedia Retrieval 2017, Bucharest, Romania, 6-9 June 2017
Video search
Zero-shot learning
Visual analysis
Query and Keyframe Representations for Ad-hoc Video Search
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:187914
2020-01-20T17:13:39Z
user-moving-h2020
openaire
user-eu
Foteini Markatopoulou
Anastasia Moumtzidou
Damianos Galanopoulos
Theodoros Mironidis
Vagia Kaltsa
Anastasia Ioannidou
Spyridon Symeonidis
Konstantinos Avgerinakis
Stelios Andreadis
Ilias Gialampoukidis
Stefanos Vrochidis
Alexia Briassouli
Vasileios Mezaris
Ioannis Kompatsiaris
Ioannis Patras
2016-11-30
<p>This presentation provides an overview of the runs submitted to TRECVID 2016 by ITI-CERTH in the Ad-hoc Video Search (AVS) task. Our AVS task participation is based on a method that combines the linguistic analysis of the query and the concept-based annotation of video fragments.</p>
https://doi.org/10.5281/zenodo.187914
oai:zenodo.org:187914
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Ad-hoc Video Search (AVS)
TRECVID AVS 2016
Video shot processing
CERTH-ITI
ITI-CERTH in TRECVID 2016 Ad-hoc Video Search (AVS)
info:eu-repo/semantics/lecture
oai:zenodo.org:2547530
2020-01-20T17:42:52Z
user-moving-h2020
openaire
user-eu
Sabine Barthold
Franziska Günther
2019-01-23
<p>This is a talk we gave at the 2018 conference <em>Gemeinschaften in den Neuen Medien (GeNeMe)</em> in Dresden.</p>
https://doi.org/10.5281/zenodo.2547530
oai:zenodo.org:2547530
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
https://doi.org/10.5281/zenodo.2547529
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
GeNeMe, Gemeinschaften in den Neuen Medien, Dresden, 24-26 October 2018
science 2.0
open science
open research
MOOC
Wissenschaft 2.0 und offene Forschungsmethoden vermitteln. Der MOOC "Science 2.0 and open research methods"
info:eu-repo/semantics/lecture
oai:zenodo.org:4662810
2021-04-06T00:27:17Z
user-moving-h2020
user-eu
Apaolaza, Aitor
Backes, Tobias
Barthold, Sabine
Bienia, Irina
Blume, Till
Collyda, Chrysa
Fessl, Angela
Gottfried, Sebastian
Grunewald, Paul
Günther, Franziska
Köhler, Thomas
Lorenz, Robert
Heinz, Matthias
Herbst, Sabrina
Mezaris, Vasileios
Nishioka, Chifumi
Pournaras, Alexandros
Sabol, Vedran
Saleh, Ahmed
Scherp, Ansgar
Simic, Ilija
Skulimowski, Andrzej
Vagliano, Iacopo
Vigo, Markel
Wiese, Michael
Zdolšek Draksler, Tanja
2021-03-20
<p>Scholars and professionals in various sectors of the economy, including public administrators, corporate compliance officers, and auditors, deal with an ever-increasing flow of information (new scientific publications, business documents and multimedia files, laws, etc.). They need sophisticated tools to evaluate all this information quickly and accurately and to visualize the analysis results. Specifically, this means that, on the one hand, they need tools that enable state-of-the-art search and semantic analysis of large digital contents, by providing: (i) access to an extensive source inventory, (ii) advanced search and visualization methods, and (iii) functionalities for generating new knowledge from these digital assets. On the other hand, these tools need to be reasonably easy for their users to understand and support them through: (i) a detailed and scientifically proven help system (tutorials, guidance), (ii) individually configurable training programmes (learning modules, videos), and (iii) a lively community of people that have similar interests or problems to be solved. To face these challenges, the interdisciplinary trans-European project called MOVING (“TraininG towards a society of data-saVvy inforMation prOfessionals to enable open leadership INnovation”) (Vagliano et al. 2018) has built an innovative training platform that enables users from various societal sectors to fundamentally improve their information literacy by training them in how to choose, use, and evaluate data mining methods in their daily research and business tasks, and to become data-savvy information professionals.</p>
https://doi.org/10.1007/978-3-030-66262-2_6
oai:zenodo.org:4662810
Springer
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
MOVING platform
MOVING web application
Recommender system
Adaptive training support
MOVING: A User-Centric Platform for Online Literacy Training and Learning
info:eu-repo/semantics/bookPart
oai:zenodo.org:200498
2020-01-20T17:37:26Z
user-moving-h2020
user-invid-h2020
user-eu
Foteini Markatopoulou
Anastasia Moumtzidou
Damianos Galanopoulos
Theodoros Mironidis
Vagia Kaltsa
Anastasia Ioannidou
Spyridon Symeonidis
Konstantinos Avgerinakis
Stelios Andreadis
Ilias Gialampoukidis
Stefanos Vrochidis
Alexia Briassouli
Vasileios Mezaris
Ioannis Kompatsiaris
Ioannis Patras
2016-11-30
<p>This paper provides an overview of the runs submitted to TRECVID 2016 by ITI-CERTH. ITI-CERTH participated in the Ad-hoc Video Search (AVS), Multimedia Event Detection (MED), Instance Search (INS) and Surveillance Event Detection (SED) tasks. Our AVS task participation is based on a method that combines the linguistic analysis of the query and the concept-based annotation of video fragments. In the MED task, for the 000Ex condition we exploit the textual description of an event class in order to retrieve related videos, without using positive samples. Furthermore, in the 010Ex and 100Ex conditions, a kernel subclass version of our discriminant analysis method (KSDA) combined with a fast linear SVM is employed. The INS task is performed by employing VERGE, which is an interactive retrieval application that integrates retrieval functionalities that consider only visual information. For the SED task, we deploy a novel activity detection algorithm that is based on Motion Boundary Activity Areas (MBAA), dense trajectories, Fisher vectors and an overlapping sliding window.</p>
https://doi.org/10.5281/zenodo.200498
oai:zenodo.org:200498
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
https://doi.org/
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Multimedia Event Detection (MED)
Ad-hoc Video Search (AVS)
Instance Search (INS)
Surveillance Event Detection (SED)
Kernel Subclass Discriminant Analysis method (KSDA)
Motion Boundary Activity Areas (MBAA)
VERGE
ITI-CERTH participation in TRECVID 2016
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:240854
2020-01-20T16:49:48Z
user-moving-h2020
user-invid-h2020
user-eu
Moumtzidou, Anastasia
Mironidis, Theodoros
Markatopoulou, Fotini
Andreadis, Stelios
Gialampoukidis, Ilias
Galanopoulos, Damianos
Ioannidou, Anastasia
Vrochidis, Stefanos
Mezaris, Vasileios
Kompatsiaris, Ioannis
Patras, Ioannis
2016-12-31
<p>This paper presents the VERGE interactive video retrieval engine, which is capable of browsing and searching video content. The system integrates several content-based analysis and retrieval modules, including concept detection, clustering, visual similarity search, object-based search, query analysis, and multimodal and temporal fusion.</p>
https://doi.org/10.1007/978-3-319-51814-5_46
oai:zenodo.org:240854
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
VERGE in VBS 2017
info:eu-repo/semantics/conferencePaper
oai:zenodo.org:1167684
2020-01-20T16:22:14Z
user-moving-h2020
user-eu
Aitor Apaolaza
Markel Vigo
2017-06-30
<p>Remotely stored user interaction logs, which give access to a wealth of data generated by large numbers of users, have long been used to understand whether interactive systems meet the expectations of designers. Unfortunately, detailed insight into users' interaction behaviour still requires a high degree of expertise and domain-specific knowledge. We present WevQuery, a scalable system to query user interaction logs in order to allow designers to test their hypotheses about users' behaviour. WevQuery supports this purpose using a graphical notation to define the interaction patterns designers are seeking. WevQuery is scalable, as the queries can be executed against large user interaction datasets by employing the MapReduce paradigm. This way, WevQuery provides designers with effortless access to harvest users' interaction patterns, removing the burden of low-level interaction data analysis. We present two scenarios to showcase the potential of WevQuery, from the design of the queries to their execution on real interaction data accounting for 5.7m events generated by 2,445 unique users.</p>
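The map-then-reduce querying idea in this abstract can be sketched in miniature: the map phase groups raw interaction events per user and orders them by time, and the reduce phase counts users whose event stream contains a designer-specified pattern as an ordered subsequence. This is only a toy Python illustration of the paradigm, with a hypothetical `(user_id, timestamp, event_type)` log schema, not WevQuery's actual implementation.

```python
from collections import defaultdict

def map_phase(events):
    """Map: group raw interaction events by user, sorted by timestamp.
    Each event is a hypothetical (user_id, timestamp, event_type) tuple."""
    per_user = defaultdict(list)
    for user, ts, etype in events:
        per_user[user].append((ts, etype))
    for user in per_user:
        per_user[user].sort()  # chronological order per user
    return per_user

def matches_pattern(event_types, pattern):
    """True if `pattern` occurs in `event_types` as an ordered
    (not necessarily contiguous) subsequence."""
    it = iter(event_types)
    return all(p in it for p in pattern)

def reduce_phase(per_user, pattern):
    """Reduce: count how many users' event streams contain the pattern."""
    return sum(
        1 for evs in per_user.values()
        if matches_pattern([e for _, e in evs], pattern)
    )

# Example: which users hovered over an element and later clicked?
events = [
    ("u1", 1, "hover"), ("u1", 2, "scroll"), ("u1", 3, "click"),
    ("u2", 1, "click"), ("u2", 2, "hover"),
]
per_user = map_phase(events)
```

In a real deployment both phases would run distributed over the log store, which is what makes the approach scale to millions of events.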
https://doi.org/10.1145/3095806
oai:zenodo.org:1167684
eng
Zenodo
https://zenodo.org/communities/moving-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Proceedings of the ACM on Human-Computer Interaction, 1(EICS), (2017-06-30)
Hypothesis Testing
A/B Testing
User Interface Evaluation
Usability
Web
WevQuery: Testing Hypotheses about Web Interaction Patterns
info:eu-repo/semantics/article