Dataset Open Access

Visuo-motor dataset recorded from a micro-farming robot

Guido Schillaci; Antonio Pico Villalpando

MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <datafield tag="999" ind1="C" ind2="5">
    <subfield code="x">Schillaci, G., Villalpando, A. P., Hafner, V. V., Hanappe, P., Colliaux, D., &amp; Wintz, T. (2020). Intrinsic Motivation and Episodic Memories for Robot Exploration of High-Dimensional Sensory Spaces. arXiv preprint arXiv:2001.01982.</subfield>
  </datafield>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">visuomotor dataset</subfield>
  </datafield>
  <controlfield tag="005">20200220072054.0</controlfield>
  <datafield tag="500" ind1=" " ind2=" ">
    <subfield code="a">Python scripts for using this dataset can be found here:</subfield>
  </datafield>
  <controlfield tag="001">3552827</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Adaptive Systems Group, Humboldt-Universität zu Berlin</subfield>
    <subfield code="a">Antonio Pico Villalpando</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1797585944</subfield>
    <subfield code="z">md5:ff73f6c6d0e1beef9dd9b11a364b0b3c</subfield>
    <subfield code="u"></subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2020-02-19</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire_data</subfield>
    <subfield code="o"></subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">The BioRobotics Institute, Scuola Superiore Sant'Anna, Italy</subfield>
    <subfield code="0">(orcid)0000-0002-0975-1068</subfield>
    <subfield code="a">Guido Schillaci</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Visuo-motor dataset recorded from a micro-farming robot</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">773875</subfield>
    <subfield code="a">RObotics for MIcrofarms</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">838861</subfield>
    <subfield code="a">Predictive processes for intelligent behaviours, sensory enhancement and subjective experiences in robots</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u"></subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2"></subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;This is the accompanying dataset of the paper [1]&amp;nbsp;describing algorithms for intrinsic motivation and&amp;nbsp;episodic memory on the Sony LettuceThink microfarming robot.&lt;/p&gt;

&lt;p&gt;The LettuceThink microfarming robot developed by Sony Computer Science Laboratories consists of an aluminium frame with an X-Carve CNC machine mounted on it. The CNC machine is used to provide 3-axes movements to a depth camera (Sony DepthSense) mounted at the tip of the vertical z-axis (the end-effector camera). In the experiments presented in the&amp;nbsp;paper, the end-effector camera is facing top-down and only two motors are used (x and y).&lt;/p&gt;

&lt;p&gt;A simulator of the LettuceThink robot has been developed to ease the testing of different configurations of the learning system. The simulator generates sensorimotor data from requested trajectories of the end-effector camera. Knowing the initial position of the CNC machine and the target position, the simulator linearly interpolates the trajectory and returns the intermediate positions of the camera together with the images captured from each specific position. The sensorimotor data returned by the simulator have been prerecorded by performing a full scan of the (x,y) plane of the CNC machine using a resolution of 5mm. This resulted in 24,964 images, each mapped to an (x,y) position of the CNC machine. The dataset published here contains these images.&lt;/p&gt;

&lt;p&gt;In particular, the dataset consists of a set of images, each named with the corresponding positions of the two motors of the robot. A Python script for generating visuo-motor trajectories (sequences of data consisting of&amp;nbsp;[image, motor_x, motor_y])&amp;nbsp;from this dataset is available at the following&amp;nbsp;GitHub page: &lt;a href=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also provided with the dataset is a Python script that makes it easy to read the images and to generate trajectories (returning sequences of [image, motor_x, motor_y] samples).&lt;/p&gt;

&lt;p&gt;This work has been supported by the EU-H2020 ROMI project (grant agreement no.&amp;nbsp;773875) and by the EU-H2020 Marie Sklodowska-Curie project &amp;quot;Predictive Robots&amp;quot; (grant agreement no.&amp;nbsp;838861).&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;p&gt;[1]&amp;nbsp;Schillaci, G., Villalpando, A. P., Hafner, V. V., Hanappe, P., Colliaux, D., &amp;amp; Wintz, T. (2020). Intrinsic Motivation and Episodic Memories for Robot Exploration of High-Dimensional Sensory Spaces. arXiv preprint arXiv:2001.01982.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.3552826</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.3552827</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">dataset</subfield>
  </datafield>
</record>
                  All versions | This version
Views             146          | 146
Downloads         36           | 36
Data volume       64.7 GB      | 64.7 GB
Unique views      130          | 130
Unique downloads  27           | 27
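The abstract describes how the LettuceThink simulator linearly interpolates between the current and target (x, y) positions of the CNC machine and returns intermediate camera positions on the 5 mm scan grid. A minimal sketch of that interpolation, where the step count and grid-snapping details are assumptions not specified in the record:

```python
import numpy as np

GRID_STEP = 5  # mm, the (x, y) scan resolution reported in the dataset description


def interpolate_trajectory(start, target, grid_step=GRID_STEP):
    """Linearly interpolate (x, y) motor positions between start and target,
    snapping each intermediate point to the prerecorded 5 mm grid.

    Returns an array of integer (x, y) positions; each position can then be
    mapped to the image recorded at that motor configuration.
    """
    start = np.asarray(start, dtype=float)
    target = np.asarray(target, dtype=float)
    # Choose enough steps so consecutive points differ by at most one grid cell.
    n_steps = int(np.ceil(np.abs(target - start).max() / grid_step)) + 1
    points = np.linspace(start, target, n_steps)
    # Snap to the grid the images were recorded on.
    snapped = np.round(points / grid_step) * grid_step
    # Drop consecutive duplicates introduced by snapping.
    keep = np.ones(len(snapped), dtype=bool)
    keep[1:] = np.any(np.diff(snapped, axis=0) != 0, axis=1)
    return snapped[keep].astype(int)
```

Pairing each returned position with its prerecorded image then yields the [image, motor_x, motor_y] sequences that the accompanying Python script generates.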

