Published September 20, 2023 | Version 1.0.0
Dataset | Open Access

Real testing sets for Visual Affordance Segmentation of hand-occluded objects

  • 1. University of Genoa
  • 2. Queen Mary University of London
  • 3. Idiap Research Institute
  • 4. École Polytechnique Fédérale de Lausanne

Description

[arXiv] [webpage] [code] [trained model] [mixed-reality data]

RGB images with the corresponding affordance annotations for testing affordance segmentation models. Images are selected from two datasets for hand-object pose estimation: HO-3D and CCM (CORSMAL Containers Manipulation).

For HO-3D, we selected 150 frames from the dataset and enriched the existing hand and object segmentation masks with new annotations specific to the affordance segmentation problem.

For CCM, we selected 150 frames from the dataset and created annotations specific to the affordance segmentation problem. The forearms and hands in contact with the offered container are also annotated.

File names are formatted as: <videoname>_<framenumber>.png
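
As a minimal Python sketch, the video name and frame number can be recovered by splitting the file name on its last underscore (the example file name below is hypothetical):

from pathlib import Path

def parse_annotation_name(path):
    # Split '<videoname>_<framenumber>.png' on the last underscore.
    stem = Path(path).stem
    videoname, framenumber = stem.rsplit("_", 1)
    return videoname, int(framenumber)

print(parse_annotation_name("video1_000123.png"))  # ('video1', 123)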

Segmentation class values:

  •  0: background
  •  1: graspable
  •  2: contain
  •  3: arm
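
A minimal sketch for reading an annotation mask and counting pixels per class, assuming each annotation PNG stores one class value per pixel as listed above (the file name is hypothetical; requires NumPy and Pillow):

import numpy as np
from PIL import Image

CLASS_NAMES = {0: "background", 1: "graspable", 2: "contain", 3: "arm"}

mask = np.array(Image.open("video1_000123.png"))  # 2D array of class values
for value, name in CLASS_NAMES.items():
    print(f"{value} ({name}): {(mask == value).sum()} pixels")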


References. 

Affordance segmentation of hand-occluded containers from exocentric images
T. Apicella, A. Xompero, E. Ragusa, R. Berta, A. Cavallaro, P. Gastaldo
IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2023

@inproceedings{apicella2023affordance,
  title={Affordance segmentation of hand-occluded containers from exocentric images},
  author={Apicella, Tommaso and Xompero, Alessio and Ragusa, Edoardo and Berta, Riccardo and Cavallaro, Andrea and Gastaldo, Paolo},
  booktitle={IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)},
  year={2023},
}

HOnnotate: A method for 3D Annotation of Hand and Object Poses
S. Hampali, M. Rad, M. Oberweger, V. Lepetit
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

@inproceedings{hampali2020honnotate,
  title={Honnotate: A method for 3d annotation of hand and object poses},
  author={Hampali, Shreyas and Rad, Mahdi and Oberweger, Markus and Lepetit, Vincent},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3196--3206},
  year={2020}
}

CORSMAL Containers Manipulation (1.0) [Data set]
A. Xompero, R. Sanchez-Matilla, R. Mazzon, and A. Cavallaro
Queen Mary University of London. https://doi.org/10.17636/101CORSMAL1


License. Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)

Enquiries, Questions, and Comments. For enquiries, questions, or comments, please contact Tommaso Apicella.

Files (169.9 MB)

  • CCM_affordance.zip
  • md5:131daf9116b96d15f68c90f0007f59f4 (111.6 MB)
  • md5:17eee1eec58b250fee33f4aace9273d6 (58.3 MB)
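
A minimal sketch for verifying a downloaded archive against its listed MD5 checksum (the pairing of archive name and checksum below is assumed for illustration; substitute the file you downloaded):

import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Hash the file in chunks to avoid loading it fully into memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Assumed name/checksum pairing; check the file listing above.
print(md5sum("CCM_affordance.zip") == "131daf9116b96d15f68c90f0007f59f4")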

Additional details

Related works

Is supplement to
Preprint: 10.48550/arXiv.2308.11233 (DOI)