CrowdBot_v2: Pedestrian–Robot crowd navigation dataset with pedestrian tracking
Description
Overview
CrowdBot_v2 is a revised and extended release of the CrowdBot dataset originally introduced in Paez-Granados et al. (2021), “3D point cloud and RGBD of pedestrians in robot crowd navigation: detection and tracking” (DOI: 10.21227/ak77-d722). It contains synchronized multi-sensor recordings of the Qolo personal mobility robot navigating through real-world crowds in the city of Lausanne (Switzerland) during farmers' market and Christmas market days.
The dataset covers crowd densities from light flows of about 0.1 pedestrians per square meter (ppsm) up to around 1.0 ppsm. Qolo is equipped with two 3D LiDARs (front and rear Velodyne VLP-16, 20 Hz) and a forward-facing Intel RealSense D435 RGB-D camera. We provide robot state information (pose, velocity, controller state) together with approximately 250 k frames (≈200 min) of multi-sensor data.
Sensor data and structure
CrowdBot_v2 includes both the raw sensor data and processed outputs:
- Raw data in ROS bag format from the two VLP-16 LiDARs and the defaced forward-facing RGB-D camera.
- Synchronized and calibrated multi-sensor streams, eliminating the temporal and spatial misalignment present in the original release.
- A unified directory structure:
  - rosbags_<sequence>/ – raw rosbags (defaced_<stamp>.bag)
  - processed_<sequence>/ – processed data per sequence
    - alg_res/ – algorithm results
      - detections/ – merged 2D/3D pedestrian detections
      - tracks/ – pedestrian tracking results
    - lidars/, lidars_2d/ – LiDAR data
    - ped_data/ – pedestrian-level proximity and motion metrics
    - source_data/ – robot TF and timestamps (tf_robot/, timestamp/)
  - checkpoints/ – trained model checkpoints (*.pth) used in the released pipeline.
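Given the unified layout above, sequences can be discovered programmatically. The sketch below is an illustrative helper, not part of the released pipeline; it only assumes the processed_<sequence> naming described in this record.

```python
from pathlib import Path


def list_sequences(dataset_root):
    """Yield (sequence_name, processed_dir) pairs for every
    processed_<sequence> directory under the dataset root.

    Assumes the directory layout described in this record;
    the exact sequence naming may differ per download.
    """
    root = Path(dataset_root)
    for d in sorted(root.glob("processed_*")):
        if d.is_dir():
            yield d.name.removeprefix("processed_"), d
```

A caller would then, for example, look up `d / "ped_data"` or `d / "alg_res" / "tracks"` for each discovered sequence.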
Detection, tracking and behavioral metrics
Compared to the original CrowdBot dataset, CrowdBot_v2 provides refined people detection and tracking based on a merged 3D/2D LiDAR detection pipeline. Pedestrians are detected using a 3D Person-MinkUNet model on LiDAR point clouds, combined with 2D detections from DR-SPAAM on 2D LiDAR scans. These detections are fused and tracked in 3D using AB3DMOT, leading to substantially fewer false positives and smoother trajectories.
In addition to detections and tracks, we release pedestrian-centric behavioral metrics. For each tracked pedestrian we provide smoothed positions and velocities, as well as proxemic and motion descriptors such as minimum distance to the robot, intrusion counts into a robot comfort zone, angular velocity, jerk, and other indicators of interaction dynamics. Each pedestrian frame is labeled as interacting or non-interacting with the navigating robot, enabling studies of human–robot versus human–human interactions (HRI vs HHI).
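Two of the per-pedestrian metrics listed above can be re-derived directly from synchronized pedestrian and robot positions. The sketch below is illustrative only: the 0.45 m comfort-zone radius is a placeholder, not the threshold used in the released ped_data/ files.

```python
import math


def min_distance_and_intrusions(ped_xy, robot_xy, comfort_radius=0.45):
    """Illustrative re-computation of two released per-pedestrian metrics:
    the minimum pedestrian-robot distance over a track, and the number of
    frames in which the pedestrian intrudes into a robot comfort zone.

    ped_xy, robot_xy: equal-length sequences of (x, y) positions in meters,
    sampled at synchronized timestamps. comfort_radius is a placeholder value.
    """
    dists = [math.hypot(px - rx, py - ry)
             for (px, py), (rx, ry) in zip(ped_xy, robot_xy)]
    intrusions = sum(d < comfort_radius for d in dists)
    return min(dists), intrusions
```

Higher-order descriptors such as jerk follow the same pattern, using finite differences of the smoothed velocities over the synchronized timestamps.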
RGB-D availability
Unlike the first release of the dataset, CrowdBot_v2 makes both LiDAR and defaced forward-facing RGB-D camera data available in the refined rosbags.
Code and usage
The GitHub repository SCAI-Lab/crowd_analysis_public provides the analysis pipeline used in our accompanying publication, including scripts for dataset integration, pedestrian metric computation, and comparative experiments on human–human and human–robot interactions across the CrowdBot, JRDB, and SiT datasets. To connect a downloaded copy of CrowdBot_v2 with the analysis tools, specify the dataset root path in datasets_configs/data_path_Crowdbot.yaml within the repository.
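The record only names the config file, not its schema, so the fragment below is a guess at its shape: the key name and example path are illustrative, and the repository's own template should be consulted for the exact format.

```yaml
# datasets_configs/data_path_Crowdbot.yaml
# Key name and path are illustrative; check the repository's template
# for the exact schema expected by the analysis pipeline.
data_root: /data/crowdbot_v2   # folder containing rosbags_<sequence>/ and processed_<sequence>/
```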
Intended use
CrowdBot_v2 is intended for research on pedestrian behavior, crowd-robot interaction, detection and tracking in crowds, and evaluation of navigation and proxemic models. When using this dataset, please cite both the original CrowdBot data paper and the present Zenodo record.
Files (30.9 GB total)
| Name | Size | MD5 |
|---|---|---|
| rosbags_1203_shared_control_defaced_part2.zip | 30.9 GB | f3ac604a914e9d10e8a69aada1529ab7 |
Additional details
Additional titles
- Subtitle (English)
- Rosbags 12.03 Shared Control part 2
Related works
- Is new version of
- Dataset: 10.21227/ak77-d722 (DOI)
- Is part of
- Dataset: 10.5281/zenodo.17694140 (DOI)
- Is supplemented by
- Software: https://github.com/SCAI-Lab/crowd_analysis_public (URL)
Funding
- European Commission
  - CROWDBOT – Safe Robot Navigation in Dense Crowds (grant 779942)
- Innosuisse – Swiss Innovation Agency
  - Developing an AI-enabled Robotic Personal Vehicle for Reduced Mobility Population in Complex Environments (grant 103.421 IP-ICT)
- Japan Science and Technology Agency
  - JST Moonshot R&D program (grant JPMJMS2034-18)
Dates
- Created
  - 2021-01-27 – Original CrowdBot dataset
- Updated
  - 2025-11-27 – CrowdBot_v2 release
Software
- Repository URL
- https://github.com/SCAI-Lab/crowd_analysis_public
- Programming language
- Python
- Development Status
- Active
References
- Paez-Granados, D., He, Y., Gonon, D., Huber, L., & Billard, A. (2021). 3D point cloud and RGBD of pedestrians in robot crowd navigation: detection and tracking. IEEE Dataport. https://doi.org/10.21227/ak77-d722