Published April 16, 2026 | Version v1
Dataset (Open Access)

HARD-HAT: Socially-aware navigation for safer heavy-duty construction.

  • Ingeniarius, Lda

Description

Multi-Sensor Benchmark for Socially-Aware Navigation (CoHAN vs TEB)

Overview

This dataset provides a multi-session, multi-modal benchmark for socially-aware navigation in human-shared environments. It contains synchronized ROS bag recordings collected from a heavy-duty autonomous robot interacting with pedestrians.

The dataset enables a direct comparison between:

  • TEB (Timed Elastic Band) - a geometric navigation baseline; and
  • CoHAN (Cooperative Human-Aware Navigation) - a socially-aware planner.

Both planners are evaluated under identical conditions.

Key Contributions

  • Multi-sensor dataset (LiDAR, stereo vision, GNSS-RTK, IMU);
  • Real human–robot interaction scenario;
  • Quantitative benchmarking framework; and
  • Reproducible ROS-based logs.

Dataset Structure

  1. TEB_bags.zip (8 sessions, 8.6 GB)
  2. CoHAN_bags.zip (10 sessions, 9.9 GB)
  3. cohan_in_the_field.mp4 (video, 186 MB)

Data Collection

A structured data collection campaign was performed across the test sessions (8 with TEB, 10 with CoHAN), each including human presence and interaction. For each run, the following metrics were logged and post-processed (a minimal post-processing sketch follows the list):

  • Minimum pedestrian clearance distance (m);
  • Number of close-contact events; and
  • Collision occurrences.
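
As an illustration of this post-processing step, the sketch below derives all three metrics from a robot–pedestrian distance series such as the one logged on navigation_benchmark/distance. It is a minimal sketch: the 0.5 m close-contact threshold and the zero-distance collision criterion are illustrative assumptions, not the benchmark's configured values.

import numpy as np

def clearance_metrics(d, close_threshold=0.5):
    # d: robot-pedestrian separation distance series in metres.
    # close_threshold (0.5 m) is an illustrative assumption, not the
    # benchmark's configured value.
    d = np.asarray(d, dtype=float)
    min_clearance = float(d.min())
    # Count one close-contact event per excursion below the threshold
    # (falling-edge detection), not one per sample.
    below = d < close_threshold
    close_events = int(below[0]) + int(np.count_nonzero(below[1:] & ~below[:-1]))
    # Scalar proxy for collisions; the dataset defines collisions as
    # footprint overlap, which a single distance value cannot capture exactly.
    collisions = int(np.count_nonzero(d <= 0.0))
    return min_clearance, close_events, collisions

print(clearance_metrics([2.1, 1.4, 0.45, 0.6, 0.4, 1.2]))  # -> (0.4, 2, 0)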

The same protocol was applied for both planners (CoHAN and TEB), ensuring comparability across datasets. During the experimental evaluation, a comprehensive set of ROS topics was recorded to ensure full observability of perception, localisation, planning, and human–robot interaction dynamics.

Vision and Perception Sensors

These topics provide the intrinsic and extrinsic calibration parameters of the stereo camera setup, ensuring reproducibility and enabling accurate depth reconstruction and image-based perception.

  • zed_node/left/camera_info
  • zed_node/right/camera_info

Rectified and compressed RGB image streams from the stereo camera. These data provide visual context of the environment, including pedestrian appearance, posture, and relative motion.

  • zed_node/left/image_rect_color/compressed
  • zed_node/right/image_rect_color/compressed

Cropped and fused 3D point cloud derived from LiDAR sensing. This topic is the primary spatial perception input used for obstacle and pedestrian avoidance in the navigation stack; a short extraction sketch follows the topic below.

  • fused_point_cloud_cropped
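
As a minimal sketch of consuming these perception topics offline (assuming ROS 1 with the rosbag Python API and OpenCV installed, and assuming the fused cloud is a standard sensor_msgs/PointCloud2; the bag file name is a placeholder):

import cv2
import numpy as np
import rosbag
from sensor_msgs import point_cloud2

# "session_01.bag" is a placeholder for any bag from TEB_bags.zip or CoHAN_bags.zip.
with rosbag.Bag("session_01.bag") as bag:
    for topic, msg, stamp in bag.read_messages(
            topics=["zed_node/left/image_rect_color/compressed",
                    "fused_point_cloud_cropped"]):
        if topic.endswith("/compressed"):
            # sensor_msgs/CompressedImage: decode the JPEG/PNG payload.
            img = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
        else:
            # Assumed sensor_msgs/PointCloud2: pull out XYZ coordinates.
            xyz = np.array(list(point_cloud2.read_points(
                msg, field_names=("x", "y", "z"), skip_nans=True)))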

Localisation and State Estimation

Raw IMU measurements used for high-frequency motion estimation, including linear acceleration and angular velocity.

  • imu_rion

GNSS-RTK outputs providing global position, time synchronisation, and velocity estimates. These topics support global localisation.

  • /gps/fix
  • /gps/time
  • /gps/vel

Odometry output from the LIO-SAM framework, fusing LiDAR and IMU data. This serves as the primary locally consistent state estimate for navigation and trajectory execution; an offline consumption sketch follows the topic below.

  • /lio_sam/imupreintegration/odom
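
The localisation streams above can be consumed offline in the same way. A minimal sketch that dumps GNSS fixes and accumulates the LIO-SAM path length (assuming standard sensor_msgs/NavSatFix and nav_msgs/Odometry message types; the bag file name is a placeholder):

import math
import rosbag

path_length, prev = 0.0, None
with rosbag.Bag("session_01.bag") as bag:  # placeholder file name
    for topic, msg, stamp in bag.read_messages(
            topics=["/gps/fix", "/lio_sam/imupreintegration/odom"]):
        if topic == "/gps/fix":
            # sensor_msgs/NavSatFix: dump global fixes with bag time.
            print(stamp.to_sec(), msg.latitude, msg.longitude, msg.altitude)
        else:
            # nav_msgs/Odometry: accumulate distance between successive poses.
            p = msg.pose.pose.position
            if prev is not None:
                path_length += math.dist((p.x, p.y, p.z), (prev.x, prev.y, prev.z))
            prev = p
print("LIO-SAM path length [m]:", round(path_length, 2))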

Dynamic and static coordinate frame transformations. These topics define the kinematic relationships between sensors, robot base, and world frames, enabling consistent spatial reasoning across all modules.

  • /tf, /tf_static

Human “Ground Truth”

Despite the topic name, the pedestrian pose published here is a reference estimate produced by the tracking pipeline rather than an independently measured ground truth.

  • human1/base_pose_ground_truth

Navigation Planning and Control

The global reference path generated by the global planner. This represents the long-horizon navigation objective shared by both controllers.

  • move_base/GlobalPlanner/plan

Local trajectory generated by the HATEB (CoHAN-based) planner, explicitly incorporating human-aware constraints.

  • move_base/HATebLocalPlannerROS/local_plan

Local trajectory generated by the baseline TEB planner, used for comparative evaluation.

  • move_base/TebLocalPlannerROS/local_plan

Time-parameterised future poses of the robot predicted by the HATEB planner, used for forward simulation and human–robot interaction assessment.

  • move_base/HATebLocalPlannerROS/local_plan_fp_poses

Predicted future trajectories of surrounding agents (pedestrians) as modeled by the CoHAN framework.

  • move_base/HATebLocalPlannerROS/agents_local_plans_fp_poses

Discrete planner mode indicator (e.g., nominal navigation, human avoidance, cooperative passing), providing interpretability of planner decision-making; a tallying sketch follows the topic below.

  • move_base/HATebLocalPlannerROS/mode_text
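
For interpretability analyses, the mode stream can be tallied to see how often the planner entered each behaviour. A sketch, assuming mode_text is a string-typed message exposing a .data field (the actual CoHAN message type is not listed here):

from collections import Counter
import rosbag

mode_counts = Counter()
with rosbag.Bag("session_01.bag") as bag:  # placeholder file name
    for _, msg, _ in bag.read_messages(
            topics=["move_base/HATebLocalPlannerROS/mode_text"]):
        # Field name is an assumption: std_msgs/String would expose .data.
        mode_counts[getattr(msg, "data", str(msg))] += 1
for mode, count in mode_counts.most_common():
    print(f"{count:6d}  {mode}")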

Benchmarking and Evaluation Metrics

Online computation of robot–pedestrian separation distance, used to derive minimum clearance metrics.

  • navigation_benchmark/distance

Event-based indicator of close-contact situations, enabling quantitative comparison of social compliance between planners (an extraction sketch follows the topic below). All logs include /tf and GNSS time to support post-hoc alignment and reproducible replay.

  • navigation_benchmark/close_events
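
These two topics are the natural input to the post-processing sketched in the Data Collection section. A minimal extraction sketch (assuming the distance topic carries a scalar std_msgs-style message with a .data field; the actual message type is not listed here):

import rosbag

times, dists = [], []
with rosbag.Bag("session_01.bag") as bag:  # placeholder file name
    for _, msg, stamp in bag.read_messages(
            topics=["navigation_benchmark/distance"]):
        times.append(stamp.to_sec())
        dists.append(float(msg.data))  # .data assumes a std_msgs scalar type
# `dists` can now be passed to clearance_metrics() from the sketch above.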

Benchmark Metrics

Pedestrian Clearance Distance

Definition: minimum distance between the robot and the human.
Result: +56% improvement (CoHAN vs TEB).

Close-Contact Events

Definition: threshold-based proximity violations.
Result: 0 events for both planners.

Collision Detection

Definition: overlap of the robot and human footprints.
Result: 0 collisions.

Panic Cost (computed offline from the ROS bags with a MATLAB script)

Definition: interaction discomfort metric; lower is better.
Result: CoHAN lower than TEB.

Time-to-Collision (TTC) (computed offline from the ROS bags with a MATLAB script)

Definition: predictive safety metric; higher is safer.
Result: CoHAN higher than TEB. One possible formulation is sketched below.
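
The MATLAB scripts themselves are not reproduced here, so the following is only one plausible discrete-time formulation of TTC from a separation-distance series, under the assumption that TTC is defined as current distance divided by current closing speed:

import numpy as np

def time_to_collision(t, d):
    # One plausible formulation (an assumption, not the dataset's MATLAB
    # definition): separation distance divided by closing speed, defined
    # only while the robot and pedestrian are approaching each other.
    t, d = np.asarray(t, dtype=float), np.asarray(d, dtype=float)
    closing_speed = -np.gradient(d, t)   # > 0 when the gap is shrinking
    ttc = np.full_like(d, np.inf)
    approaching = closing_speed > 1e-6
    ttc[approaching] = d[approaching] / closing_speed[approaching]
    return ttc  # min(ttc) over a run is the usual headline safety value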

Usage

Each session can be replayed with standard ROS 1 tooling:

rosbag play <bag_file>
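
Beyond replaying, a bag's contents can be inspected programmatically. A short sketch using the ROS 1 rosbag Python API (the file name is a placeholder):

import rosbag

with rosbag.Bag("session_01.bag") as bag:  # placeholder file name
    info = bag.get_type_and_topic_info()
    for topic, meta in sorted(info.topics.items()):
        print(f"{topic}: {meta.msg_type} ({meta.message_count} msgs)")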

NLU dataset

For both test environments, ROS bag files were recorded containing the streams below (a sketch for locating the text-bearing topics follows the list):

  • Audio streams;
  • ASR transcripts;
  • NLU dialogue acts; and
  • Performance verification prompts and results.
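
Since the exact topic names of the audio, ASR, and NLU streams are not listed above, a type-driven scan is one way to locate the text-bearing topics. A sketch (assuming transcripts and dialogue acts are string-typed; custom message types would require adapting the filter):

import rosbag

with rosbag.Bag("nlu_session_01.bag") as bag:  # placeholder file name
    info = bag.get_type_and_topic_info()
    # Type filter is an assumption: transcripts and dialogue acts may use
    # custom message types rather than std_msgs/String.
    text_topics = [t for t, meta in info.topics.items()
                   if meta.msg_type == "std_msgs/String"]
    for topic, msg, stamp in bag.read_messages(topics=text_topics):
        print(round(stamp.to_sec(), 2), topic, msg.data)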

Dataset Structure

  1. NLU_bags.zip (4 sessions)

Test Methodology: Audio Playback

Test Inputs

A WAV file was generated containing:

  • 3 male speakers
  • 5 commands per speaker

Commands:

  • “STOP loader”
  • “MOVE forward”
  • “MOVE back”
  • “TAKE the pallet from storage”
  • “INCREASE velocity”

Total utterances: 15 (a simple scoring sketch follows the test environments below).

Test Environments

  • Indoor (No Noise)
    • Quiet room;
    • No machinery active; and
    • Baseline reference.
  • Outdoor (High Noise)
    • External environment;
    • Loader robot powered on; and
    • Diesel engine running continuously.
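
Given the fixed command inventory and the two environments above, per-environment recognition can be scored by matching ASR transcripts against the five expected commands. A sketch with illustrative data (the exact-match rule, the lower-casing normalisation, and the example transcripts are assumptions, not the dataset's verification protocol):

EXPECTED = {"stop loader", "move forward", "move back",
            "take the pallet from storage", "increase velocity"}

def recognition_rate(transcripts):
    # Exact match after lower-casing; the dataset's actual verification
    # protocol may use a more forgiving comparison.
    hits = sum(t.strip().lower() in EXPECTED for t in transcripts)
    return hits / len(transcripts)

# Illustrative transcripts only -- not results from the dataset.
print(recognition_rate(["STOP loader", "move forwards", "MOVE back"]))  # -> 2/3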

Files

  • md5:5ceb4219e0f04d704e0d25bd942513cd (4.4 GB)
  • md5:57eb04ac36954e984d29ca6bb6b019e0 (185.8 MB)
  • md5:204e7c7ec85c2bd8b9585cd20d047e25 (24.1 MB)
  • md5:222d762e9ce23144dabac04261fe9b67 (3.9 GB)

Additional details

Identifiers

Other
HARD-HAT

Funding

European Commission
euROBIN - European ROBotics and AI Network (grant agreement 101070596)

Dates

Submitted
2026-04-16