Published July 1, 2016 | Version FINAL

CVD2014 - A database for evaluating no-reference video quality assessment algorithms

  • 1. University of Helsinki
  • 2. Microsoft Corporation
  • 3. Aalto University

Description

The CVD2014 video database was developed to provide a useful tool for researchers developing and validating no-reference (NR) objective video quality assessment (VQA) algorithms. It consists of 234 videos of five different scenes captured with 78 different cameras (mobile phones, compact cameras, video cameras, and SLRs). The subjective experiments were conducted following the Single-Stimulus (SS) procedure to collect ratings of video quality.

Setup

We conducted our experiments according to the Single-Stimulus methodology using the VQone MATLAB toolbox on high-quality monitors (Eizo ColorEdge CG241W) with 1920x1200 pixel resolution in a dark room (ambient light < 20 lux). Video stimuli were displayed at their original size of VGA (640 x 480) or HD (1280 x 720). The subjects' viewing distance (80 cm) was controlled by a string hanging from the ceiling, and they were instructed to keep their head steady next to it. The monitors were calibrated according to sRGB (target values: 6500 K, 80 lux, and gamma 2.2) using an EyeOne Pro calibrator (X-Rite). The laboratory setup is shown in the figure below.

Subjects

Subjects (n = 30, 30, 28, 33, 30, 32, and 27 for Tests 1-7, respectively) were naïve in the sense that they did not study or work in image quality or related fields. They were recruited through student mailing lists consisting mainly of humanities and behavioral science students. Subjects' vision was screened for near visual acuity, near contrast vision (near F.A.C.T.), and color vision (Farnsworth D15) before participation. They received movie tickets as a reward.

Procedure

Subjects evaluated one video sample at a time, and all video samples of one scene were presented consecutively. The order of video samples and scenes was randomized. Subjects could view each video sample again as many times as they wanted.

Data

The results are processed and reported as Mean Opinion Scores (MOS) for the tested video samples. In addition, we provide the complete raw data from the subjective experiments rather than only pre-calculated mean opinion scores for each video sample. This allows further analyses by those who wish to use this database and gives them a better opportunity to utilize the data to its full potential.
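The MOS of a sample is simply the arithmetic mean of the individual opinion scores it received. A minimal sketch of computing per-sample MOS from raw ratings is shown below; the `(sample_id, score)` layout is an assumption for illustration, not the actual CVD2014 file format.

```python
from collections import defaultdict

def mean_opinion_scores(ratings):
    """Compute MOS per video sample from raw (sample_id, score) pairs.

    `ratings` holds one entry per subject judgment; the layout is
    illustrative, not the CVD2014 raw-data file format.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for sample_id, score in ratings:
        entry = sums[sample_id]
        entry[0] += score
        entry[1] += 1
    return {sid: total / n for sid, (total, n) in sums.items()}

# Example: subjects rating two samples on a 0-100 scale
raw = [("v01", 62), ("v01", 70), ("v01", 66), ("v02", 35), ("v02", 41)]
print(mean_opinion_scores(raw))  # {'v01': 66.0, 'v02': 38.0}
```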

The realignment study (Test 7) contains the data from an additional study in which the mappings from the test- and scene-specific quality scales (Tests 1-6) to a global quality scale were formed. The global scale is valuable when studying and developing VQA algorithms: with it, all of the samples (234 video samples in the case of CVD2014) are on the same scale, and performance analysis for algorithms can be conducted with a high number of samples.
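One common way to realize such a realignment is a per-test linear mapping fitted on samples that were rated on both the test-specific and the global scale. The sketch below uses an ordinary least-squares fit and invented numbers; it illustrates the idea only and is not the exact procedure from the paper.

```python
def fit_linear_map(test_scores, global_scores):
    """Least-squares fit of global = a * test + b on the realignment samples."""
    n = len(test_scores)
    mx = sum(test_scores) / n
    my = sum(global_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(test_scores, global_scores))
    var = sum((x - mx) ** 2 for x in test_scores)
    a = cov / var
    b = my - a * mx
    return a, b

# Realignment samples rated on both scales (illustrative numbers)
test_scale   = [20.0, 50.0, 80.0]
global_scale = [15.0, 45.0, 75.0]
a, b = fit_linear_map(test_scale, global_scale)

# Map every MOS from this test onto the global scale
remapped = [a * s + b for s in [20.0, 35.0, 80.0]]
print(remapped)  # [15.0, 30.0, 75.0]
```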

If you use this database in your research, we kindly ask that you follow the copyright notice below and cite the following paper:


M. Nuutinen, T. Virtanen, M. Vaahteranoksa, T. Vuori, P. Oittinen and J. Häkkinen, "CVD2014—A Database for Evaluating No-Reference Video Quality Assessment Algorithms," in IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3073-3086, July 2016. doi: 10.1109/TIP.2016.2562513


-----------COPYRIGHT NOTICE STARTS WITH THIS LINE------------

Copyright (c) 2014 The University of Helsinki
All rights reserved.

Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute this database (the videos, the images, the results, and the source files) and its documentation for any purpose, provided that the copyright notice in its entirety appears in all copies of this database, and that the original source of this database, the Visual Cognition research group (www.helsinki.fi/psychology/groups/visualcognition/index.htm) and the Institute of Behavioral Science (www.helsinki.fi/ibs/index.html) at the University of Helsinki (www.helsinki.fi/university/), is acknowledged in any publication that reports research using this database. Individual videos and images may not be used outside the scope of this database (e.g., for marketing purposes) without prior permission.

The database and our paper are to be cited in the bibliography as: M. Nuutinen, T. Virtanen, M. Vaahteranoksa, T. Vuori, P. Oittinen and J. Häkkinen, "CVD2014—A Database for Evaluating No-Reference Video Quality Assessment Algorithms," in IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3073-3086, July 2016.
doi: 10.1109/TIP.2016.2562513

-----------------------------------------------------------------------------

LIMITATION OF LIABILITY

UNIVERSITY OF HELSINKI SHALL IN NO CASE BE LIABLE IN CONTRACT, TORT OR OTHERWISE FOR ANY LOSS OF REVENUE, PROFIT, BUSINESS OR GOODWILL OR ANY DIRECT, INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL OR PUNITIVE COST, DAMAGES OR EXPENSE OF ANY KIND HOWEVER CAUSED OR HOWEVER ARISING UNDER OR IN CONNECTION WITH THE USE OF THIS DATABASE.

THE UNIVERSITY OF HELSINKI SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE DATABASE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF HELSINKI HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

THIS AGREEMENT SHALL BE CONSTRUED AND INTERPRETED IN ACCORDANCE WITH THE LAWS OF FINLAND, EXCLUDING ITS RULES FOR CHOICE OF LAW.

-----------COPYRIGHT NOTICE ENDS WITH THIS LINE------------

Notes

Our research group home page: http://www.helsinki.fi/psychology/groups/visualcognition/

Files

CVD2014_preprint.pdf

Files (45.1 GB)

  • 6.0 GB, md5:db87e3f8072f860cf4bcd3fad7d403b0
  • 16.1 GB, md5:15a534dfff3be48bb3943e27ec3598dc
  • 23.0 GB, md5:a5320242673df2f127797ba393690d99
  • 7.7 MB, md5:90c544074381cd3d4655fc94a2f82743
  • 79.7 kB, md5:904ec1f7bba1015ebc0a8df9514b703f
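Downloads can be checked against the md5 sums above. A small sketch using Python's standard library is given below; the filename in the commented-out check is a placeholder, not the actual archive name.

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Return the hex md5 digest of a file, read in chunks to bound memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "db87e3f8072f860cf4bcd3fad7d403b0"   # value from the file list
# assert md5sum("cvd2014_archive.zip") == expected  # placeholder filename
```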

Additional details

Related works

Is supplement to
10.1109/TIP.2016.2562513 (DOI)