Published February 8, 2021 | Version 2.0.1
Dataset Open

Robot@Home2, a robotic dataset of home environments

  • 1. University of Málaga
  • 2. University of Málaga


The Robot-at-Home dataset (Robot@Home) is a collection of raw and processed data from five domestic settings, compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.

This dataset is unique in three aspects:

  • The provided data were captured with a rig of 4 RGB-D sensors with an overall field of view of 180° (horizontal) and 58° (vertical), and with a 2D laser scanner.
  • It comprises diverse and numerous data: sequences of RGB-D images and laser scans from the rooms of five apartments (87,000+ observations were collected), topological information about the connectivity of these rooms, and 3D reconstructions and 2D geometric maps of the visited rooms.
  • The provided ground truth is dense, including per-point annotations of the categories of the objects and rooms appearing in the reconstructed scenarios, and per-pixel annotations of each RGB-D image within the recorded sequences.

During the data collection, a total of 36 rooms were completely inspected, so the dataset is rich in contextual information about objects and rooms. This is a valuable feature, missing in most state-of-the-art datasets, which can be exploited by, for instance, semantic mapping systems that leverage relationships like "pillows are usually on beds" or "ovens are not in bathrooms".
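Such contextual relationships can be estimated directly from dense annotations by counting object-room co-occurrences. The sketch below is illustrative only: it assumes the annotations have already been flattened into (object label, room label) pairs, which is a simplification and not the dataset's actual format.

```python
from collections import Counter, defaultdict

def room_object_priors(annotations):
    """Estimate P(object | room) from (object_label, room_label) pairs."""
    counts = defaultdict(Counter)
    for obj, room in annotations:
        counts[room][obj] += 1
    # Normalize each room's object counts into a probability distribution.
    return {room: {obj: n / sum(c.values()) for obj, n in c.items()}
            for room, c in counts.items()}

# Toy annotations standing in for the dataset's dense ground truth.
pairs = [("pillow", "bedroom"), ("pillow", "bedroom"),
         ("oven", "kitchen"), ("towel", "bathroom")]
priors = room_object_priors(pairs)
# e.g. pillows always co-occur with bedrooms in this toy sample,
# while ovens never appear in bathrooms.
```

A semantic mapping system could use such priors to re-rank ambiguous object hypotheses according to the room the robot believes it is in.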

Robot@Home Toolbox

The dataset has a toolbox written in Python that facilitates queries to the database and the extraction of RGB-D images, 3D scenes, and scanner data, as well as the application of computer vision and machine learning algorithms, among other tasks.
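The toolbox's own API is not reproduced here. As a minimal, toolbox-free sketch, the v2.0.0 SQLite file can also be inspected with Python's standard sqlite3 module; the database filename and the `rooms` table used in the usage note are illustrative assumptions, not the actual schema.

```python
import sqlite3

def list_tables(db_path):
    """Return the names of all tables in a SQLite database file."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
        return [name for (name,) in rows]
    finally:
        conn.close()

def fetch_rows(db_path, table, limit=5):
    """Fetch up to `limit` rows from `table` as dictionaries."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # access columns by name
    try:
        # Table names cannot be bound as SQL parameters, so the name is
        # interpolated; only use trusted table names obtained from
        # list_tables() above.
        rows = conn.execute(f"SELECT * FROM {table} LIMIT ?", (limit,))
        return [dict(r) for r in rows]
    finally:
        conn.close()
```

For example, `list_tables("rh.db")` reveals the actual schema of the downloaded database, after which `fetch_rows("rh.db", "rooms")` (if such a table exists) previews a few records.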

Version history
v1.0.1 Fixed minor bugs.
v1.0.2 Fixed some inconsistencies in directory names. These fixes were necessary to automate the generation of the next version.
v2.0.0 SQL-based dataset. Robot@Home v1.0.2 has been packed into an SQLite database, along with the RGB-D and scene files, which have been assembled into a hierarchically structured directory free of redundancies. Path tables are also provided to reference files in both the v1.0.2 and v2.0.0 directory hierarchies. This version has been automatically generated from version 1.0.2 through the toolbox.
v2.0.1 Added a previously missing foreign key pair.


This work was supported by the Spanish projects "IRO: Improvement of the sensorial and autonomous capability of robots through olfaction" (2012-TEP-530), "PROMOVE: Advances in mobile robotics for promoting independent life of elders" (DPI2014-55826-R), and "WISER: Building and exploiting semantic maps by mobile robots" (DPI2017-84827-R), by the European project "MoveCare: Multiple-actors virtual empathic caregiver for the elder" (call: H2020-ICT-2016-1, contract number: 732158), and by a postdoc contract from the I-PPIT of the University of Málaga.


Files (12.6 GB)

Two downloadable files: one of 144.8 MB and one of 12.5 GB.

Additional details


Funding: MoveCare – Multiple-actOrs Virtual Empathic CARgiver for the Elder (European Commission, grant 732158)
