
Dataset | Open Access

Extension of the Action Verb Corpus (AVCext)

Matthias Hirschmanner; Stephanie Gross; Brigitte Krenn; Friedrich Neubarth; Martin Trapp; Michael Zillich; Markus Vincze


The extension to the Action Verb Corpus consists of 41 recordings of 2 users experienced with the system, who performed the same three actions as in AVC, namely take (208 instances), put (208 instances), and push (91 instances). The actions were performed without any instructions. The focus of the extension is to facilitate action recognition. In contrast to AVC, no speech-related information is annotated in the extension dataset; the other ELAN annotations available with AVC are also available for the extension dataset. Additionally, the actions are annotated at two degrees of granularity: the coarse labels are take, put and push, while the fine labels split the motion into more granular motion primitives: reach, grab, moveObject, and place. These annotations are available as eaf (ELAN) files, as csv files, and as two separate columns in the Merged files. Details about the collected data can be found in Matthias Hirschmanner, Stephanie Gross, Brigitte Krenn, Friedrich Neubarth, Martin Trapp and Markus Vincze: Extension of the Action Verb Corpus for Supervised Learning. ARW 2018.
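The two-level annotation scheme can be sketched with a small validity check. Note that the label sets below are taken from the description above, but the helper function and its name are purely illustrative and not part of the dataset:

```python
# Sketch of the two annotation granularities described above.
# The label sets come from the dataset description; the helper
# function itself is a hypothetical illustration.
COARSE_LABELS = {"take", "put", "push"}
FINE_LABELS = {"reach", "grab", "moveObject", "place"}

def is_valid_pair(coarse: str, fine: str) -> bool:
    """Check that a (coarse, fine) annotation pair uses known labels."""
    return coarse in COARSE_LABELS and fine in FINE_LABELS

print(is_valid_pair("take", "reach"))  # True
print(is_valid_pair("take", "slide"))  # False: "slide" is not a fine label
```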

The dataset consists of the following information:

  • the merged output of the hand and object trackers (AVCExtension_Merged.zip, one csv file per episode/recording)
    • HandID: 0 right, 1 left
    • FingerID: 0 thumb, 1 index, 2 middle, 3 ring, 4 pinky
    • BoneID: 0 metacarpal, 1 proximal, 2 intermediate, 3 distal
  • the output of the object trackers, including the object poses and their reliability estimates calculated by the object tracker, whether an object is touched by or is in the hand of the instructor, and whether the object touches the table; for the coordinate system applied, see the picture "coord_system.png" (AVCExtension_Objects.zip, one csv file per episode/recording),
  • the videos from the Leap Motion sensor showing the hand movements and objects
    (AVCExtension_video_libm.zip, one avi file per episode/recording),
  • animation of the merged hand and object tracking
    (AVCExtension_video_schematic.zip, one avi file per episode/recording),
  • the following list of annotations synchronized with the real-time animation of the hand and object tracking (available as ELAN files in AVCExtension_Annotations_eaf.zip and as csv files in AVCExtension_Annotations_csv.zip, one file per episode/recording):
    • information which object is currently moved, and where it is moved to (automatically annotated),
    • information whether a hand touches a particular object (manually annotated),
    • information whether a particular object touches the ground/table (automatically annotated),
    • coarse-grained annotation: take, put, push (manually annotated),
    • fine-grained annotation: reach, grab, moveObject, and place (manually annotated),
    • position of the objects in the scene (automatically calculated from the output of the object tracker).
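The ID columns documented for the merged tracker output can be decoded as follows. This is a minimal sketch: the HandID/FingerID/BoneID mappings come from the list above, but the example file name and any other column names are assumptions, since the full csv header is not given here:

```python
import csv

# Mappings taken from the dataset description of the merged csv files.
HAND = {0: "right", 1: "left"}
FINGER = {0: "thumb", 1: "index", 2: "middle", 3: "ring", 4: "pinky"}
BONE = {0: "metacarpal", 1: "proximal", 2: "intermediate", 3: "distal"}

def describe_row(row: dict) -> str:
    """Turn the numeric IDs of one tracker row into a readable label."""
    return "{} {} {}".format(
        HAND[int(row["HandID"])],
        FINGER[int(row["FingerID"])],
        BONE[int(row["BoneID"])],
    )

# Hypothetical usage with one merged episode file:
# with open("episode_01.csv", newline="") as f:
#     for row in csv.DictReader(f):
#         print(describe_row(row))

print(describe_row({"HandID": "0", "FingerID": "1", "BoneID": "3"}))
# → right index distal
```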


Corpus creation and annotation was supported by the WWTF project RALLI. The dataset was recorded at ACIN, TUW.

Files (403.4 MB)
                  All versions   This version
Views                       91             91
Downloads                   42             42
Data volume             1.7 GB         1.7 GB
Unique views                76             76
Unique downloads            14             14

