
Published July 31, 2012 | Version v1
Dataset | Open Access

ImageCLEF 2012 Image annotation and retrieval dataset (MIRFLICKR)

  • 1. Yahoo! Research
  • 2. CEA LIST

Description

For this task, we use a subset of the MIRFLICKR (http://mirflickr.liacs.nl) collection. The entire collection contains 1 million images from the social photo sharing website Flickr and was formed by downloading up to a thousand photos per day that Flickr deemed the most interesting. All photos in this collection were released by their users under a Creative Commons license, allowing them to be freely used for research purposes. Of the entire collection, 25,000 images were manually annotated with a limited number of concepts, and many of these annotations have been further refined and expanded over the lifetime of the ImageCLEF photo annotation task. This year we used crowdsourcing to annotate all 25,000 images with the concepts.

On this page we provide more information about the textual, visual and concept features we supply with each image in the collection used for this year's task.


TEXTUAL FEATURES
All images are accompanied by the following textual features:

- Flickr user tags
These are the tags that users assigned to the photos they uploaded to Flickr. The 'raw' tags are the original tags, while the 'clean' tags have been lowercased and had their spaces removed.

- EXIF metadata
If available, the EXIF metadata contains information about the camera that took the photo and the parameters used. The 'raw' EXIF is the original camera data, while the 'clean' EXIF reduces its verbosity.

- User information and Creative Commons license information
This contains information about the user that took the photo and the license associated with it.
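As a concrete illustration of the raw/clean tag distinction described above, the sketch below derives 'clean' tags from 'raw' ones, assuming "clean" means lowercased with whitespace removed; the exact normalization used for the dataset may differ.

```python
# Hypothetical sketch of the raw -> clean tag normalization described
# in this record (lowercase, remove spaces). Illustrative only.
def clean_tag(raw_tag: str) -> str:
    """Lowercase a raw tag and strip all whitespace."""
    return "".join(raw_tag.lower().split())

raw_tags = ["Golden Gate Bridge", "SanFrancisco", "Sunset"]
clean_tags = [clean_tag(t) for t in raw_tags]
print(clean_tags)  # ['goldengatebridge', 'sanfrancisco', 'sunset']
```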


VISUAL FEATURES
Over the previous years of the photo annotation task we noticed that participants often use the same types of visual features; in particular, features based on interest points and bag-of-words are popular. To let you focus on concept detection instead of feature extraction, we have extracted several features for you. We additionally give you pointers to easy-to-use toolkits that will help you extract other features, or the same features with different settings.

- SIFT, C-SIFT, RGB-SIFT, OPPONENT-SIFT
We used the ISIS Color Descriptors (http://www.colordescriptors.com) toolkit to extract these descriptors. This package provides you with many different types of features based on interest points, mostly using SIFT. It furthermore assists you with building codebooks for bag-of-words. The toolkit is available for Windows, Linux and Mac OS X.

- SURF
We used the OpenSURF (http://www.chrisevansdev.com/computer-vision-opensurf.html) toolkit to extract this descriptor. The open source code is available in C++, C#, Java and many more languages.

- TOP-SURF
We used the TOP-SURF (http://press.liacs.nl/researchdownloads/topsurf) toolkit to extract this descriptor, which represents images with SURF-based bag-of-words. The website provides codebooks of several different sizes that were created using a combination of images from the MIR-FLICKR collection and from the internet. The toolkit also offers the ability to create custom codebooks from your own image collection. The code is open source, written in C++ and available for Windows, Linux and Mac OS X.

- GIST
We used the LabelMe (http://labelme.csail.mit.edu) toolkit to extract this descriptor. The MATLAB-based library offers a comprehensive set of tools for annotating images.

For the interest point-based features above we used a Fast Hessian-based technique to detect the interest points in each image. This detector is built into the OpenSURF library. Compared with the Hessian-Laplace technique built into the ColorDescriptors toolkit it detects fewer points, resulting in a considerably reduced memory footprint. We therefore also provide you with the interest point locations that the Fast Hessian-based technique detected in each image, so if you would like to recalculate some features you can use them as a starting point for the extraction; the ColorDescriptors toolkit, for instance, accepts these locations as a separate parameter. Please go to http://www.imageclef.org/2012/photo-flickr/descriptors for more information on the file format of the visual features and how you can extract them yourself if you want to change the default settings.
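To make the bag-of-words idea mentioned above concrete, here is a minimal sketch (not the toolkits' actual API) of quantizing local descriptors against a codebook of visual words and building a normalized histogram; array shapes and sizes are illustrative.

```python
import numpy as np

# Illustrative bag-of-words quantization: assign each local descriptor
# (e.g. a SURF/SIFT vector) to its nearest codebook centroid and count
# how often each visual word occurs. Not tied to any specific toolkit.
def bag_of_words(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Return the normalized visual-word histogram for one image."""
    # Pairwise squared distances, shape (n_descriptors, n_words)
    dists = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(32, 64))      # 32 visual words, 64-D (SURF-like)
descriptors = rng.normal(size=(200, 64))  # 200 interest-point descriptors
hist = bag_of_words(descriptors, codebook)
print(hist.shape, round(float(hist.sum()), 6))  # (32,) 1.0
```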


CONCEPT FEATURES
We solicited the help of workers on the Amazon Mechanical Turk platform to perform the concept annotation for us. To ensure a high standard of annotation we used the CrowdFlower platform, which acts as a quality control layer by removing the judgments of workers who fail to annotate properly. We reused several concepts from last year's task, and for most of these we annotated the remaining photos of the MIRFLICKR-25K collection that had not been used in the previous task; for some concepts we reannotated all 25,000 images to boost their quality. For the new concepts we naturally had to annotate all of the images.

- Concepts
For each concept we indicate in which images it is present. The 'raw' concepts contain the judgments of all annotators for each image, where '1' means an annotator indicated the concept was present and '0' means it was not; the 'clean' concepts only contain the images for which the majority of annotators indicated the concept was present. Some images in the raw data, for which we reused last year's annotations, only have one judgment per concept, whereas the other images have between three and five judgments. The single judgment does not mean only one annotator looked at the image: it is the result of a majority vote amongst last year's annotators.

- Annotations
For each image we indicate which concepts are present, so this is the reverse version of the data above. The 'raw' annotations contain the average agreement of the annotators on the presence of each concept, while the 'clean' annotations only include those for which there was a majority agreement amongst the annotators.

You will notice that the annotations are not perfect. Especially when the concepts are more subjective or abstract, the annotators tend to disagree more with each other. The raw versions of the concept annotations should help you get an understanding of the exact judgments given by the annotators.
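The raw-to-clean relationship described above can be sketched as a simple majority vote; the file layout and variable names here are hypothetical, but the rule (a concept counts as present when more than half of the 0/1 judgments are 1) follows the description.

```python
# Hypothetical sketch of deriving 'clean' concept labels from 'raw'
# per-annotator judgments via majority vote, as described above.
def majority_present(judgments: list[int]) -> bool:
    """True when strictly more than half of the 0/1 judgments are 1."""
    return sum(judgments) * 2 > len(judgments)

raw = {
    "img001": [1, 1, 0],        # 2 of 3 annotators saw the concept
    "img002": [0, 1, 0, 0, 1],  # only 2 of 5 did
    "img003": [1],              # single judgment reused from last year's vote
}
clean = [img for img, votes in raw.items() if majority_present(votes)]
print(clean)  # ['img001', 'img003']
```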

Files

groundtruth.zip

Files (9.8 GB)

MD5 checksum                              Size
md5:0851505fdbc5ed04f5924ab449d7d3ba      230.8 MB
md5:c045bff52bd6020edf9066cb45a06262      6.4 kB
md5:e942917ceb49f1bcb922481f8c0060b2      343.2 kB
md5:ec89f6ad4db06fd4c25d2e57bfcf6e1f      389.9 MB
md5:dd8b6f5f14953690b5db189be5baa548      33.3 MB
md5:e6986cea8b15255707448a25bcb30ee8      730.2 MB
md5:faa02106df84fa0c59209913630cfa37      7.4 MB
md5:013f5036942f4d4437ea69b01f588bf7      19.4 MB
md5:389254201a98c4d15185eca38789defc      5.4 MB
md5:b6ee1dc3bf14000df35ec17ebed7127e      704.4 MB
md5:c242a7fd61015213694729670971c029      683.9 MB
md5:4cfc015aa246ace1e069a815523fce9e      262.7 MB
md5:8d00491305c6b09b6c27bae98785eba3      406.3 MB
md5:72a281ca7c975c9a89fd1364c61fc333      20.6 MB
md5:a098f30dbffeee2a3b5431d48e048ec0      43.2 MB
md5:6c5b50077e23042fc21cd6876e553338      24.5 MB
md5:0918b1d77cb371b152dec2a8acc86324      19.7 MB
md5:693b30634a03994ce00088e18a14d598      50.0 MB
md5:090bfb01032acd47fd903c383e78e880      1.1 GB
md5:5f3bbc7737acf27251f81e72d29e66f7      11.2 MB
md5:773ce21f5dcdbf2198f9417199b117d7      29.2 MB
md5:05bc72b8f472a449f8581444760c3c10      8.1 MB
md5:2174afa57cedf72bd393b70e738a4305      1.1 GB
md5:725e7044f31561a5a6daa7d0f0703df9      1.0 GB
md5:0c64501e9a3b82131f40803a5163a3ca      394.4 MB
md5:603fc8b53af0a984f43edc62bc3ab92d      610.8 MB
md5:bd1cf1a96cdec2cbf852e2dcf1c5bf98      31.0 MB
md5:ca2e79275227a60cb0a9d06d0be98941      1.8 GB
md5:4eb16c04dce0a7fc0abef8d1ca4dbb3f      36.4 MB
md5:99c3dad5b49cc67dc1435ffb30e7747d      29.6 MB

Additional details