Published March 17, 2025 | Version v1
Dataset Open

[Dataset] Towards Robotic Mapping of a Honeybee Comb

  • 1. Faculty of Electrical Engineering, Czech Technical University in Prague
  • 2. University of Manchester
  • 3. Durham University, Computer Science Department
  • 4. Artificial Life Lab, Department of Zoology, Institute of Biology, University of Graz
  • 5. Durham University

Description

"Towards Robotic Mapping of a Honeybee Comb" Dataset

This dataset supports the analyses and experiments of the paper:

J. Janota et al., "Towards Robotic Mapping of a Honeybee Comb," 2024 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), Delft, Netherlands, 2024, doi: 10.1109/MARSS61851.2024.10612712.

Link to Paper   |  Link to Code Repository

Cell Detection

The celldet_2023 dataset contains a total of 260 images of the honeycomb (at a resolution of 67 µm per pixel), together with masks produced by the ViT-H Segment Anything Model (SAM) and annotations for these masks. The dataset is structured as follows:

celldet_2023
├── {image_name}.png
├── ...
├── masksH (folder with masks for each image)
├────{image_name}.json
├────...
├── annotations
├────annotated_masksH (folder with annotations for training images)
├──────{image_name in training part}.csv
├──────...
├────annotated_masksH_val  (folder with annotations for validation images)
├──────{image_name in validation part}.csv
├──────...
├────annotated_masksH_test  (folder with annotations for test images)
├──────{image_name in test part}.csv
├──────...
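As an illustrative sketch (directory names taken from the tree above; the data loader in the paper repository remains the reference), images, masks, and annotations for one split can be paired by file stem:

```python
from pathlib import Path

def index_split(root, ann_subdir="annotated_masksH_test"):
    """Pair each comb image with its SAM mask file and, when present,
    its annotation CSV for the given split."""
    root = Path(root)
    triples = []
    for img in sorted(root.glob("*.png")):
        mask = root / "masksH" / f"{img.stem}.json"
        ann = root / "annotations" / ann_subdir / f"{img.stem}.csv"
        if mask.exists():
            triples.append((img, mask, ann if ann.exists() else None))
    return triples
```

Images without a matching mask file are skipped; annotations are optional because only a subset of images belongs to each split.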

Masks

For each image, there is a .json file containing all the masks produced by SAM for that image. The masks are stored in COCO Run-Length Encoding (RLE) format.
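In practice, COCO RLE masks are usually decoded with pycocotools (`pycocotools.mask.decode`). For illustration only, here is a minimal pure-Python decoder for the uncompressed RLE variant, where `counts` is a list of run lengths in column-major order starting with the background (SAM typically emits the compressed string form, for which pycocotools is the practical choice):

```python
import numpy as np

def decode_uncompressed_rle(counts, height, width):
    """Decode an uncompressed COCO RLE mask (column-major run lengths,
    starting with a background run) into a binary (height, width) array."""
    flat = np.zeros(height * width, dtype=np.uint8)
    idx, value = 0, 0
    for run in counts:
        flat[idx:idx + run] = value
        idx += run
        value = 1 - value  # runs alternate background/foreground
    # COCO stores pixels in column-major (Fortran) order
    return flat.reshape((height, width), order="F")

# Tiny synthetic example: a 2x3 mask whose middle column is foreground
mask = decode_uncompressed_rle([2, 2, 2], 2, 3)
```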

Annotations

The annotation files are split into folders based on whether they were used for training, validation or testing. For each image (and thus also for each .json file with masks), there is a .csv file with two columns:

Column id   Description
0           order id of the mask in the corresponding .json file
1           mask label: 1 if fully visible cell, 2 if partially occluded cell, 0 otherwise
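A minimal sketch of reading one such annotation file, assuming the files have no header row (an assumption; adjust if they do):

```python
import csv
import io

def load_mask_labels(f):
    """Map mask order id (column 0) to its label (column 1)."""
    return {int(idx): int(label) for idx, label in csv.reader(f)}

# Example with in-memory CSV data; real files live under
# annotations/annotated_masksH*/ and would be opened from disk
labels = load_mask_labels(io.StringIO("0,1\n1,2\n2,0\n"))
fully_visible = sorted(i for i, lbl in labels.items() if lbl == 1)
```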

Loading the Dataset

For an example of loading the data, see the data loader in the paper repository:

python cell_datasetV2.py --img_dir <path/to/celldet_2023> --mask_dir <path/to/celldet_2023/masksH> --ann_dir <path/to/celldet_2023/annotations/annotated_masksH_test>


Image Stitching

The stitching_2023 dataset contains scans of the honeycomb (in the scans folder) and pairwise registrations for a subset of comb images, both with inaccurate odometry (in the IS1 folder) and with more accurate odometry (in the IS2 folder) collected using a newer, fixed setup.

Scans

We provide a total of 8 full scans covering both sides of the honeycomb (3 scans of side 0 and 5 scans of side 1).
The side-0 scans contain a total of 90 images of the comb; the side-1 scans contain 80 images. All images were taken at a resolution of 67 µm per pixel. Each scan folder also contains a camera_info.json file with the camera parameters, and a scan_info.csv file with the following columns:

Column name         Description
img_id              ID of the image in the comb scan
camera_position.x   position of the camera along the x-axis in the coordinate frame of the hive
camera_position.y   position of the camera along the y-axis in the coordinate frame of the hive
xy_position.x       motor position of the camera along the x-axis
xy_position.y       motor position of the camera along the y-axis
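Since the images have a resolution of 67 µm per pixel, camera positions can be converted to approximate pixel offsets for stitching. The sketch below assumes camera_position.* is given in millimetres, which is an assumption; the actual units should be checked against camera_info.json and the repository code. The sample CSV values are illustrative, not taken from the dataset:

```python
import csv
import io

UM_PER_PIXEL = 67.0  # image resolution stated above

def mm_to_px(position_mm):
    """Convert a camera position in millimetres (assumed unit) to pixels."""
    return position_mm * 1000.0 / UM_PER_PIXEL

# Illustrative scan_info.csv contents
sample = io.StringIO(
    "img_id,camera_position.x,camera_position.y,xy_position.x,xy_position.y\n"
    "0,0.0,0.0,0,0\n"
    "1,6.7,0.0,100,0\n"
)
rows = list(csv.DictReader(sample))
px_x = [mm_to_px(float(r["camera_position.x"])) for r in rows]
```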

Annotations

We provide annotations for 258 image pairs. The annotations are split into two folders based on the system setup that was used (IS1: inaccurate odometry, IS2: fixed odometry).
Each folder contains .csv files; each file holds the pairwise registration annotations for one of the scans, identifiable by its file name.
The annotation .csv files contain four columns:

Column name   Description
img1_id       ID of the first image in the comb scan
img2_id       ID of the second image in the comb scan
shift_y       translation along the y-axis between the images
shift_x       translation along the x-axis between the images
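Pairwise shifts like these can be chained into per-image positions in a common frame. A minimal sketch, assuming shift_y/shift_x give the translation of img2 relative to img1 (the sign convention should be verified against the evaluation script in the repository):

```python
from collections import deque

def global_offsets(pairwise, start=0):
    """Propagate pairwise (dy, dx) shifts into per-image offsets
    relative to a start image, via BFS over the registration graph."""
    adj = {}
    for (i, j), (dy, dx) in pairwise.items():
        adj.setdefault(i, []).append((j, dy, dx))
        adj.setdefault(j, []).append((i, -dy, -dx))  # inverse shift
    offsets = {start: (0, 0)}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        uy, ux = offsets[u]
        for v, dy, dx in adj.get(u, []):
            if v not in offsets:
                offsets[v] = (uy + dy, ux + dx)
                queue.append(v)
    return offsets

# Illustrative shifts (not real annotation values)
offsets = global_offsets({(0, 1): (10, 0), (1, 2): (0, 5)})
```

BFS keeps each image's offset consistent with the first registration path that reaches it; a full stitching pipeline would instead reconcile loop closures, e.g. by least-squares over all pairwise constraints.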

Loading the Dataset

For an example of loading and working with the data, see the evaluation script in the paper repository.

License and Citation

If you have any issues or concerns about the data, please contact: janotjir@fel.cvut.cz
If you find the data/code useful, please cite:

J. Janota et al., "Towards Robotic Mapping of a Honeybee Comb," 2024 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), Delft, Netherlands, 2024, doi: 10.1109/MARSS61851.2024.10612712.

Files (1.0 GB)

celldet_2023.zip
md5:621c8c9da7fa763b5d6b8b0fb7bc33ba (270.5 MB)
md5:39238a75dcb7f7ec15179dcc68f50940 (4.9 kB)
md5:1952148687e81a755f75038a459d12f1 (739.3 MB)

Additional details

Related works

Is supplement to
Conference paper: 10.1109/MARSS61851.2024.10612712 (DOI)

Funding

European Commission
RoboRoyale - ROBOtic Replicants for Optimizing the Yield by Augmenting Living Ecosystems 964492

Dates

Collected
2023-07/2023-10

Software