6382090
doi
10.5281/zenodo.6382090
oai:zenodo.org:6382090
user-piai_hiig
Richters, Christopher
Alexander von Humboldt Institute for Internet and Society
Nenno, Sami
Alexander von Humboldt Institute for Internet and Society
Tannert, Benjamin
Hochschule Bremen
Schöning, Johannes
University of St. Gallen
Züger, Theresa
Alexander von Humboldt Institute for Internet and Society
Image Dataset of Accessibility Barriers
Stolberg, Jakob
Alexander von Humboldt Institute for Internet and Society
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Accessibility
Object Detection
Computer Vision
Dataset
<p><strong>The Data</strong><br>
The dataset consists of 5538 images of public spaces, annotated with steps, stairs, ramps and grab bars for stairs and ramps. The dataset contains 3564 annotations of steps, 1492 of stairs, 143 of ramps and 922 of grab bars.</p>
<p>Each step annotation is attributed with an estimate of the step's height, falling into one of three categories: less than 3cm, 3cm to 7cm, or more than 7cm. Additionally, it is attributed with a 'type', with the possibilities 'doorstep', 'curb' or 'other'.</p>
<p>Stair annotations are attributed with the number of steps in the stair.</p>
<p>Ramps are attributed with an estimate of their width, also falling into three categories: less than 50cm, 50cm to 100cm and more than 100cm.</p>
<p>In order to preserve all additional attributes of the labels, the data is published in the CVAT XML format for images.</p>
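<p>As a minimal sketch of reading the annotations, assuming the usual CVAT-for-images layout (&lt;image&gt; elements containing &lt;box&gt; children with nested &lt;attribute&gt; tags; the file name <code>annotations.xml</code> and the function name are our own), the boxes and their extra attributes can be loaded with the Python standard library:</p>

```python
import xml.etree.ElementTree as ET

def load_annotations(path):
    """Parse a CVAT-for-images XML file into a list of box dicts.

    Assumes the standard CVAT layout: <image> elements containing
    <box> children, each with corner coordinates and optional
    nested <attribute> tags (e.g. step height, ramp width).
    """
    boxes = []
    for image in ET.parse(path).getroot().iter("image"):
        for box in image.iter("box"):
            boxes.append({
                "image": image.get("name"),
                "label": box.get("label"),
                # CVAT stores pixel coordinates of the top-left and
                # bottom-right corners of the bounding box
                "xtl": float(box.get("xtl")),
                "ytl": float(box.get("ytl")),
                "xbr": float(box.get("xbr")),
                "ybr": float(box.get("ybr")),
                # extra attributes such as height, type or width
                "attributes": {a.get("name"): a.text
                               for a in box.iter("attribute")},
            })
    return boxes
```

<p>This keeps the additional attributes alongside each box, which is exactly what the CVAT format is used here to preserve.</p>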
<p> </p>
<p><strong>Annotating Process</strong><br>
The labelling has been done using bounding boxes around the objects. This format is compatible with many popular object detection models, e.g. the YOLO family of models. A bounding box is placed so that it contains exactly <em>the visible part</em> of the respective object. This implies that only objects visible in the photo are annotated. In particular, a photo of a stair or step taken from above, where the object itself cannot be seen, has not been annotated, even when a human viewer could infer from other features in the photo that a stair or step is present.</p>
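<p>For use with a YOLO-style model, the CVAT corner coordinates (in pixels) must be converted into YOLO's normalised centre format. A minimal sketch of that conversion (the function name is our own):</p>

```python
def cvat_box_to_yolo(xtl, ytl, xbr, ybr, img_w, img_h):
    """Convert CVAT corner coordinates (pixels) to the YOLO format:
    (centre x, centre y, width, height), each normalised to [0, 1]
    by the image dimensions."""
    w = xbr - xtl
    h = ybr - ytl
    return ((xtl + w / 2) / img_w,
            (ytl + h / 2) / img_h,
            w / img_w,
            h / img_h)
```

<p>A box spanning the left half of the top half of a 100x200 image, for instance, maps to (0.25, 0.25, 0.5, 0.5).</p>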
<p><strong>Steps</strong><br>
A step is annotated when there is a vertical increment that functions as a passage between two surface areas intended for human or vehicle traffic. This means that we have not included:</p>
<ul>
<li>Increments that are too high to reasonably be considered a passage.</li>
<li>Increments that do not lead to a surface intended for human or vehicle traffic, e.g. a 'step' in front of a wall or a curb in front of a bush.</li>
</ul>
<p>In particular, the bounding box of a step object contains exactly the incremental part of the step, and does not extend into the top or bottom horizontal surface any more than necessary to entirely enclose the incremental part. This has been chosen for consistency reasons, as including parts of the horizontal surfaces would imply a non-trivial choice of how much to include, which we deemed would most likely lead to more inconsistent annotations.</p>
<p>The heights of the steps are estimated by the annotators and are therefore not guaranteed to be accurate.</p>
<p>The type of a step typically falls into the category 'doorstep' or 'curb'. Steps in a doorway, entrance or the like are attributed as doorsteps. We also include in this category steps that immediately lead to a doorway within a proximity of 1-2m. Steps between different types of pathways, e.g. between streets and sidewalks, are annotated as curbs. Any other type of step is annotated as 'other'. Many of the 'other' steps are, for example, steps to terraces.</p>
<p><strong>Stairs</strong><br>
The stair label is used whenever two or more steps directly follow each other in a consistent pattern. All vertical increments are enclosed in the bounding box, as well as the intermediate surfaces of the steps. However, the top and bottom surfaces are not included more than necessary, for the same reason as for steps, as described in the previous section.</p>
<p>The annotator counts the number of steps and attributes this count to the stair label.</p>
<p><strong>Ramps</strong><br>
Ramps have been annotated when a sloped passageway has been placed or built to connect two surface areas intended for human or vehicle traffic. This implies the same considerations as with steps. Likewise, only the sloped part of a ramp is annotated, excluding the bottom and top surface areas.</p>
<p>For each ramp, the annotator assesses the width of the ramp in three categories: less than 50cm, 50cm to 100cm and more than 100cm. This parameter is visually hard to assess, and sometimes impossible due to the viewing angle of the ramp.</p>
<p><strong>Grab Bars</strong><br>
Grab bars are annotated for handrails and similar objects that are in direct connection to a stair or a ramp. While horizontal grab bars could also have been included, this was omitted due to the ambiguities implied by fences and similar objects. As the grab bar was originally intended as attribute information for stairs and ramps, we chose to keep this focus. The bounding box encloses the part of the grab bar that functions as a handrail for the stair or ramp.</p>
<p> </p>
<p><strong>Usage</strong><br>
As is often the case when annotating data, much of the information depends on the subjective assessment of the annotator. As each data point in this dataset has been annotated by only one person, caution should be taken when the data is applied.</p>
<p>Generally speaking, the mindset and usage guiding the annotations has been wheelchair accessibility. While we have strived to annotate at an object level, hopefully making the data more widely applicable than this, we state this explicitly as it may have swayed non-trivial annotation choices.</p>
<p>The attribute data, such as step height or ramp width, are highly subjective estimations. We still provide this data as a post-hoc method to select which annotations to use. E.g. for some purposes, one may be interested in detecting only steps that are indeed higher than 3cm. The attribute data makes it possible to filter out the steps of less than 3cm, so that a machine learning algorithm can be trained on a dataset more appropriate for that use case. We stress, however, that one cannot expect to train machine learning algorithms to accurately infer the attribute data, as this data is not accurate in the first place.</p>
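<p>Such post-hoc filtering is straightforward once the annotations are parsed. A sketch, assuming each box has been read into a dict with a <code>label</code> and an <code>attributes</code> mapping, and assuming the attribute value strings match the categories as written above (this is an assumption, not verified against the files):</p>

```python
def filter_steps_over_3cm(boxes):
    """Keep only step annotations whose estimated height exceeds 3cm.

    Each box is assumed to be a dict with a "label" and an
    "attributes" mapping; the exact category strings below are an
    assumption based on the dataset description.
    """
    keep = {"3cm to 7cm", "more than 7cm"}
    return [b for b in boxes
            if b["label"] == "step"
            and b["attributes"].get("height") in keep]
```

<p>The filtered list can then be exported to the training format of choice, yielding a dataset restricted to the steps relevant for a given accessibility use case.</p>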
<p>We hope this dataset will be a useful building block in the endeavour to automate barrier detection and documentation.</p>
Zenodo
2022-03-24
info:eu-repo/semantics/other
6382089
user-piai_hiig
1648195857.094962
7972407510
md5:d96051ca1339d1c1afa9ac736eabd058
https://zenodo.org/records/6382090/files/wm_barriers_data.zip
public
10.5281/zenodo.6382089
isVersionOf
doi