Journal article Open Access
Image annotation is the process of assigning metadata to images, enabling effective retrieval through text-based search techniques. Despite considerable effort in automatic multimedia analysis, automatic semantic annotation of multimedia remains inefficient due to the difficulty of modelling high-level semantic concepts. In this paper we examine the factors affecting the quality of annotations collected through crowdsourcing platforms. An image dataset was manually annotated using: (i) a vocabulary consisting of a pre-selected set of keywords, (ii) a hierarchical vocabulary, and (iii) free keywords. The results show that annotation quality is affected by both the image content itself and the lexicon used. As expected, annotation using the hierarchical vocabulary is more representative, while the use of free keywords leads to more invalid annotations. Finally, it is shown that images requiring annotations not directly related to their content (i.e. annotation using abstract concepts) lead to increased annotator inconsistency, revealing that the difficulty in annotating such images is not limited to automatic annotation, but is a generic problem of annotation.