Dataset Open Access

Visuo-motor dataset recorded from a micro-farming robot

Guido Schillaci; Antonio Pico Villalpando

Citation Style Language JSON Export

  "publisher": "Zenodo", 
  "DOI": "10.5281/zenodo.3552827", 
  "language": "eng", 
  "title": "Visuo-motor dataset recorded from a micro-farming robot", 
  "issued": {
    "date-parts": [
  "abstract": "<p>This is the accompanying dataset of the paper [1]&nbsp;describing algorithms for intrinsic motivation and&nbsp;episodic memory on the Sony LettuceThink microfarming robot.</p>\n\n<p>The LettuceThink microfarming robot developed by Sony Computer Science Laboratories consists of an aluminium frame with an X-Carve CNC machine mounted on it. The CNC machine is used to provide 3-axes movements to a depth camera (Sony DepthSense) mounted at the tip of the vertical z-axis (the end-effector camera). In the experiments presented in the&nbsp;paper, the end-effector camera is facing top-down and only two motors are used (x and y).</p>\n\n<p>A simulator of the LettuceThink robot has been developed to ease the testing of different configurations of the learning system. The simulator generates sensorimotor data from requested trajectories of the end-effector camera. Knowing the initial position of the CNC machine and the target position, the simulator linearly interpolates the trajectory and returns the intermediate positions of the camera together with the images captured from each specific position. The sensorimotor data returned by the simulator have been prerecorded by performing a full scan of the (x,y) plane of the CNC machine using a resolution of 5mm. This resulted in 24,964 images, each mapped to an (x,y) position of the CNC machine. The dataset published here contains these images.</p>\n\n<p>In particular, the dataset consists of a set of images, each named with the specific position of the 2 motors of the robot. 
A python script for generating visuo-motor trajectories (sequences of data consisting of&nbsp;[image, motor_x, motor_y])&nbsp;from this dataset is available at the following&nbsp;github page: <a href=\"\"></a></p>\n\n<p>Provided with the dataset is also a python script that allows to easily read the images and to generate trajectories (returning</p>\n\n<p>This work has been supported by the EU-H2020 ROMI Project and by the EU-H2020 Marie Sklodowska Curie project &quot;Predictive Robots&quot; (grant agreement no.&nbsp;838861)References:</p>\n\n<p>[1]&nbsp;Schillaci, G., Villalpando, A. P., Hafner, V. V., Hanappe, P., Colliaux, D., &amp; Wintz, T. (2020). Intrinsic Motivation and Episodic Memories for Robot Exploration of High-Dimensional Sensory Spaces. arXiv preprint arXiv:2001.01982.</p>", 
  "author": [
      "family": "Guido Schillaci"
      "family": "Antonio Pico Villalpando"
  "note": "Python scripts for using this dataset can be found here:\n", 
  "version": "0.1", 
  "type": "dataset", 
  "id": "3552827"
                   All versions   This version
Views                       146            146
Downloads                    36             36
Data volume             64.7 GB        64.7 GB
Unique views                130            130
Unique downloads             27             27

