There is a newer version of this record available.

Dataset Open Access

# Subjective human thresholds over computer generated images

Jérôme Buisine; Samuel Delepoulle; Rémi Synave; Christophe Renaud

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
<identifier identifierType="DOI">10.5281/zenodo.4531460</identifier>
<creators>
<creator>
<creatorName>Jérôme Buisine</creatorName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-6071-744X</nameIdentifier>
<affiliation>LISIC</affiliation>
</creator>
<creator>
<creatorName>Samuel Delepoulle</creatorName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-8897-0858</nameIdentifier>
<affiliation>LISIC</affiliation>
</creator>
<creator>
<creatorName>Rémi Synave</creatorName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-4907-8813</nameIdentifier>
<affiliation>LISIC</affiliation>
</creator>
<creator>
<creatorName>Christophe Renaud</creatorName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-8350-8824</nameIdentifier>
<affiliation>LISIC</affiliation>
</creator>
</creators>
<titles>
<title>Subjective human thresholds over computer generated images</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2021</publicationYear>
<subjects>
<subject>Computer graphics</subject>
<subject>Synthetic images</subject>
<subject>Human perception</subject>
<subject>Noise perception</subject>
<subject>Subjective thresholds</subject>
</subjects>
<contributors>
<contributor contributorType="Supervisor">
<contributorName>Christophe Renaud</contributorName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-8350-8824</nameIdentifier>
<affiliation>LISIC</affiliation>
</contributor>
<contributor contributorType="Supervisor">
<contributorName>Samuel Delepoulle</contributorName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-8897-0858</nameIdentifier>
<affiliation>LISIC</affiliation>
</contributor>
</contributors>
<dates>
<date dateType="Issued">2021-02-22</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="Dataset"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/4531460</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsSupplementTo" resourceTypeGeneral="JournalArticle">10.3390/e23010075</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.4531459</relatedIdentifier>
</relatedIdentifiers>
<version>v1.0.0</version>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;Realistic image computation mimics the natural process of acquiring pictures by simulating the physical interactions of light between all the objects, lights and cameras lying within a modelled 3D scene. This process is known as global illumination and was formalised by Kajiya with the following rendering Equation:&lt;br&gt;
&lt;span class="math-tex"&gt;$$$$\label{eq:rendering_equation} L_o(x, \omega_o) = {L_e(x, \omega_o)} + \int_{\Omega}^{} {L_i(x, \omega_i)} \cdot f_r(x, \omega_i \rightarrow \omega_o) \cdot \cos \theta_i d\omega_i$$$$&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;span class="math-tex"&gt;$$L_o(x, \omega_o)$$&lt;/span&gt; is the luminance travelling from point &lt;span class="math-tex"&gt;$$x$$&lt;/span&gt; in direction &lt;span class="math-tex"&gt;$$\omega_o$$&lt;/span&gt;;&lt;/li&gt;
&lt;li&gt;&lt;span class="math-tex"&gt;$$L_e(x, \omega_o)$$&lt;/span&gt; is the luminance emitted at point &lt;span class="math-tex"&gt;$$x$$&lt;/span&gt; (it is null if point &lt;span class="math-tex"&gt;$$x$$&lt;/span&gt; does not lie on a light source surface);&lt;/li&gt;
&lt;li&gt;the integral represents the set of luminances &lt;span class="math-tex"&gt;$$L_i$$&lt;/span&gt; incident at &lt;span class="math-tex"&gt;$$x$$&lt;/span&gt; from the hemisphere of directions &lt;span class="math-tex"&gt;$$\Omega$$&lt;/span&gt; and reflected in direction &lt;span class="math-tex"&gt;$$\omega_o$$&lt;/span&gt;. The reflected luminances are weighted by the material's reflective properties (the bidirectional reflectance distribution function &lt;span class="math-tex"&gt;$$f_r(x, \omega_i \rightarrow \omega_o)$$&lt;/span&gt;) and the cosine of the incident angle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This equation cannot be solved analytically; Monte Carlo approaches are therefore generally used to estimate the value of each pixel of the final image.&lt;/p&gt;
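As a rough illustration of this Monte Carlo estimation, the following sketch averages repeated noisy evaluations of a pixel's integrand and shows the estimate tightening as the sample count grows. This is a toy model only, not pbrt's path tracer: the integrand is replaced by a hypothetical Gaussian sample around a known value.

```python
import random

def estimate_pixel(n_samples, true_value=0.5, noise=0.2, seed=1):
    """Toy Monte Carlo pixel estimator: average n noisy radiance samples.

    Illustrative only; a real renderer such as pbrt evaluates the rendering
    equation by tracing light paths, but the averaging step is the same.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # each sample is one noisy evaluation of the integrand
        total += rng.gauss(true_value, noise)
    return total / n_samples

# The standard error of the mean shrinks as 1/sqrt(n), which is why the
# dataset's 20-sample images are visibly noisy and the 10000-sample
# references are close to converged.
for n in (20, 500, 10000):
    print(n, estimate_pixel(n))
```

The fixed seed makes the runs repeatable; varying it shows how strongly the low-sample estimates scatter compared with the high-sample ones.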

&lt;p&gt;The proposed dataset is composed of 80 points of view of photo-realistic images, each rendered at several sample levels (following the Monte Carlo approach). Each image is 800 x 800 pixels. The noisiest image uses 20 samples and the reference image (the most converged one obtained) uses 10000 samples. The &lt;a href="https://www.pbrt.org/index.html"&gt;pbrt&lt;/a&gt; rendering engine (version 3) was used to generate these images.&lt;/p&gt;

&lt;p&gt;By exploiting these sample levels, and therefore the levels of noise perceptible in the images, average subjective human thresholds were collected. For this purpose, each point of view was divided into 16 areas of 200 x 200 pixels.&lt;/p&gt;

&lt;p&gt;The proposed image database is composed of the following files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;human-thresholds.csv&lt;/strong&gt;: the set of subjective human thresholds obtained on 40 points of view. Each line contains the name of the point of view followed by the thresholds obtained for each of its 16 zones;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SIN3D_dataset.tar.gz&lt;/strong&gt;: an archive containing all the images from 20 to 10000 samples, in steps of 20 samples, for each point of view (i.e. 500 images per point of view). Each folder in the archive corresponds to one point of view.&lt;/li&gt;
&lt;/ul&gt;
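The threshold file layout described above can be parsed with a short loader. This is a hedged sketch: the comma delimiter, integer-valued thresholds, and the sample row name are assumptions, not confirmed details of the published file.

```python
import csv
import io

def load_thresholds(fileobj, delimiter=","):
    """Parse a human-thresholds.csv-style stream.

    Assumed layout (per the dataset description): each line is the name of
    a point of view followed by 16 per-zone thresholds. The delimiter and
    integer values are assumptions; check the actual file.
    """
    thresholds = {}
    for row in csv.reader(fileobj, delimiter=delimiter):
        if not row:
            continue  # skip blank lines
        name, values = row[0], [int(v) for v in row[1:]]
        thresholds[name] = values
    return thresholds

# Hypothetical sample row in the assumed layout: a made-up point-of-view
# name followed by 16 synthetic thresholds.
sample = io.StringIO(
    "scene-example," + ",".join(str(20 * k) for k in range(1, 17)) + "\n"
)
data = load_thresholds(sample)
print(data["scene-example"])  # 16 thresholds, one per 200 x 200 zone
```

Each returned list maps to the 4 x 4 grid of 200 x 200 pixel zones of one 800 x 800 point of view.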

&lt;p&gt;&lt;em&gt;This image database has been exploited to propose an objective model for noise detection in photo-realistic computer-generated images (see the article linked to this record).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Funding:&lt;/strong&gt; This research was funded by ANR support: project ANR-17-CE38-0009.&lt;/p&gt;

</description>
<description descriptionType="Other">{"references": ["Buisine, Jérôme, et al. \"Stopping Criterion during Rendering of Computer-Generated Images Based on SVD-Entropy.\" Entropy 23.1 (2021): 75."]}</description>
</descriptions>
</resource>
