Published March 16, 2026 | Version v1
Dataset | Open Access

ChestnutDetDataset: a dataset for chestnut detection toward automated picking of on-ground chestnuts

  • Michigan State University

Description

Overview

The ChestnutDetDataset was created to support the development and evaluation of computer vision models for detecting on-ground chestnuts, with the ultimate goal of enabling automated chestnut-picking technologies in agricultural robotics.

The dataset contains high-resolution orchard ground images with manually annotated chestnut instances, enabling benchmarking of real-time object detection models. 

Dataset Contents

The dataset is organized into two folders:

ChestnutDetDataset/

├── Imagery/       # Raw RGB images (.JPG)
└── Annotations/   # Annotation files (.JSON)

  • Images: 319 high-resolution RGB images

  • Resolution: 4032 × 3024 pixels

  • Format: JPG

  • Annotations: JSON files, one per image, sharing the image's filename

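Because each image and its annotation file share the same filename stem, pairing them is straightforward. A minimal sketch (the filenames shown are hypothetical; the dataset's actual naming scheme may differ):

```python
from pathlib import Path

def pair_images_with_annotations(image_files, annotation_files):
    """Match each image to the annotation file sharing its filename stem."""
    ann_by_stem = {Path(a).stem: a for a in annotation_files}
    pairs = []
    for img in image_files:
        stem = Path(img).stem
        if stem in ann_by_stem:
            pairs.append((img, ann_by_stem[stem]))
    return pairs

# Hypothetical filenames for illustration only:
images = ["Imagery/IMG_0001.JPG", "Imagery/IMG_0002.JPG"]
annotations = ["Annotations/IMG_0001.JSON", "Annotations/IMG_0002.JSON"]
print(pair_images_with_annotations(images, annotations))
```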

Data Collection

The imagery was collected in a commercial chestnut orchard (Owosso, Michigan, USA) during the 2024 chestnut harvest season. Images were acquired using a handheld smartphone during a morning orchard visit. Data collection involved walking through the orchard and recording ground scenes, resulting in diverse visual conditions, including:

  • varying grass coverage
  • different soil backgrounds
  • naturally occurring lighting variations

All images follow a consistent and self-explanatory naming convention.

Annotations

Annotations were created using the VGG Image Annotator (VIA). Trained personnel manually labeled exposed chestnuts (excluding those hidden inside unopened burrs) in each image by drawing a bounding box around each chestnut instance. Bounding boxes follow the COCO format [x, y, width, height], where x and y are the coordinates of the top-left corner of the box, and width and height are its dimensions.
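Many evaluation tools expect corner coordinates rather than the COCO [x, y, width, height] convention; the conversion is a one-liner:

```python
def coco_to_corners(box):
    """Convert a COCO-style [x, y, width, height] box to (x_min, y_min, x_max, y_max)."""
    x, y, w, h = box
    return (x, y, x + w, y + h)

print(coco_to_corners([100, 50, 30, 40]))  # (100, 50, 130, 90)
```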

To ensure annotation quality, the initial annotations underwent a separate review and correction pass.

In total, the dataset contains:

  • 319 images
  • 6,524 annotated chestnut instances
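These counts can be sanity-checked after download by summing box entries across the annotation files. The `"boxes"` key below is an assumption; the actual VIA-exported JSON schema may nest regions under different keys, so adjust accordingly:

```python
import json
from pathlib import Path

def count_instances(annotations_dir, boxes_key="boxes"):
    """Sum annotated instances across all .JSON files in a directory.

    The key holding the per-image box list ("boxes" here) is an
    assumption; change it to match the dataset's actual JSON schema.
    """
    total = 0
    for ann_path in Path(annotations_dir).glob("*.JSON"):
        with open(ann_path) as f:
            data = json.load(f)
        total += len(data.get(boxes_key, []))
    return total
```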

Benchmark Study

A benchmark evaluation using this dataset was conducted on a range of real-time object detection models, including:

  • 14 models from the YOLO (v11–v13) family
  • 15 models from the RT-DETR (v1–v4) family

Multiple model scales were tested to compare detection accuracy, inference time, and model complexity.
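Detection-accuracy metrics such as mAP are built on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal IoU computation for two COCO-style [x, y, width, height] boxes, shown as a sketch rather than the benchmark's exact evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two COCO-style [x, y, w, h] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Corners of the intersection rectangle
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(iou([0, 0, 10, 10], [5, 0, 10, 10]))  # 50 / 150 = 1/3
```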

Full details of the dataset, modeling methodology, and experiments are described in the associated publication: Fang, K., Lu, Y., Mu, X., 2026. Detection of On-Ground Chestnuts Using Artificial Intelligence Toward Automated Picking. AgriEngineering. The software programs and models developed in this study are available at https://github.com/AgFood-Sensing-and-Intelligence-Lab/ChestnutDetection

Citation

If you use this dataset in published research, please consider citing the dataset and/or the associated journal article. 

Contact

We hope ChestnutDetDataset contributes to advancing research in agricultural robotics and automated harvesting systems. Please contact luyuzhen@msu.edu for questions, comments, and suggestions regarding the dataset.

Files

Files (2.4 GB)

  • ChestnutDetDataset.zip — 2.4 GB (md5:bd857196fc7c8bf485daf0511815cb0b)
  • 182.9 kB (md5:943646f05744835af25317ed0733af4b)
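The listed MD5 checksums can be used to verify the download's integrity; a short Python sketch using only the standard library:

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file, reading in chunks to handle large archives."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# After downloading, compare against the checksum listed above, e.g.:
# assert md5_of_file("ChestnutDetDataset.zip") == "bd857196fc7c8bf485daf0511815cb0b"
```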

Additional details

Related works

References
Preprint: 10.48550/arXiv.2602.14140 (DOI)

Dates

Collected
2024-10-05
In-orchard image collection