Published March 12, 2024 | Version 3.0.0
Dataset | Open Access

SubPipe: A Submarine Pipeline Inspection Dataset for Segmentation and Visual-inertial Localization

  • 1. Aarhus University
  • 2. EIVA a/s
  • 3. RWTH Aachen University
  • 4. OceanScan Marine Systems & Technology

Description

Abstract

This paper presents SubPipe, an underwater dataset for SLAM, object detection, and image segmentation. 

SubPipe has been recorded using a lightweight autonomous underwater vehicle (LAUV), operated by OceanScan MST, and carrying a sensor suite including two cameras, a side-scan sonar, and an inertial navigation system, among other sensors. The AUV has been deployed in a pipeline inspection environment with a submarine pipe partially covered by sand. The AUV's pose ground truth is estimated from the navigation sensors. The side-scan sonar and RGB images include object detection and segmentation annotations, respectively. State-of-the-art segmentation, object detection, and SLAM methods are benchmarked on SubPipe to demonstrate the dataset's challenges and opportunities for leveraging computer vision algorithms.
To the authors' knowledge, this is the first annotated underwater dataset providing a real pipeline inspection scenario. The dataset and experiments are publicly available online.

On Zenodo we provide three versions of SubPipe: the full dataset (SubPipe.zip, ~80 GB unzipped) and two subsamples, SubPipeMini.zip (~12 GB unzipped) and SubPipeMini2.zip (~16 GB unzipped). Both subsamples are subsets of the full dataset (SubPipe.zip). SubPipeMini contains the semantic segmentation data together with camera imagery of the underwater pipeline, whereas SubPipeMini2 focuses on underwater side-scan sonar images of the seabed, including ground-truth object detection bounding boxes of the pipeline.

When (re)using or publishing SubPipe, please include the following copyright text:

SubPipe is a public dataset of a submarine outfall pipeline, property of Oceanscan-MST. This dataset was acquired with a Light Autonomous Underwater Vehicle by Oceanscan-MST, within the scope of Challenge Camp 1 of the H2020 REMARO project.

More information about OceanScan-MST can be found on their website.

Cam0 — GoPro Hero 10

Camera parameters:

  • Resolution: 1520 × 2704 (height × width)
  • fx = 1612.36
  • fy = 1622.56
  • cx = 1365.43
  • cy = 741.27
  • Distortion coefficients (k1, k2, p1, p2) = [−0.247, 0.0869, −0.006, 0.001]
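
A minimal sketch of how these parameters could be plugged into a standard pinhole model is given below. It assumes the distortion coefficients follow OpenCV's (k1, k2, p1, p2) radial-tangential ordering and that the values are in pixels at full resolution; the image file name is a placeholder, not part of the dataset.

    import cv2
    import numpy as np

    # Cam0 (GoPro Hero 10) intrinsics as listed above, in pixels.
    K = np.array([[1612.36, 0.0, 1365.43],
                  [0.0, 1622.56, 741.27],
                  [0.0, 0.0, 1.0]])

    # Radial-tangential distortion (k1, k2, p1, p2), assuming OpenCV's ordering.
    dist = np.array([-0.247, 0.0869, -0.006, 0.001])

    # Undistort a single frame; "frame.png" is a placeholder file name.
    img = cv2.imread("frame.png")
    undistorted = cv2.undistort(img, K, dist)
    cv2.imwrite("frame_undistorted.png", undistorted)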

Side-scan Sonars

Each sonar image was created after 20 pings (i.e., after every 20 new scan lines), which corresponds to approximately one image per second.

Regarding the object detection annotations, we provide both COCO and YOLO formats. A single COCO annotation file is provided per chunk and per frequency (low frequency vs. high frequency), whereas YOLO annotations are provided per SSS image file.
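
For the YOLO files, a minimal parsing sketch is shown below. It assumes the standard YOLO convention of one "class x_center y_center width height" line per object, with coordinates normalized to the image size; the annotation file name is a placeholder, and the width/height ordering of the image sizes listed in the metadata below should be verified against the actual files.

    # Minimal sketch: convert one YOLO-format line to a pixel bounding box.
    # Assumes the standard YOLO convention "class x_center y_center w h",
    # with all coordinates normalized to [0, 1].
    def yolo_to_pixels(line, img_w, img_h):
        cls, xc, yc, w, h = line.split()
        xc, w = float(xc) * img_w, float(w) * img_w
        yc, h = float(yc) * img_h, float(h) * img_h
        return int(cls), (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)

    # "example.txt" is a placeholder; use the size of the matching SSS image
    # (here the LF size from the metadata below, assuming 2500 is the width).
    with open("example.txt") as f:
        for line in f:
            print(yolo_to_pixels(line, img_w=2500, img_h=500))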

Metadata about the side-scan sonar images contained in this dataset:

Images for object detection
  • Low Frequency (LF): 5000 images, 2500 × 500 pixels each
  • High Frequency (HF): 5030 images, 5000 × 500 pixels each
  • Total number of images: 10030

Annotations
  • Low Frequency (LF): 3163
  • High Frequency (HF): 3172
  • Total number of annotations: 6335

Files

SubPipe.zip

Files (38.7 GB)

  • md5:abf70702276c69c04c8774c7832f8550 (27.8 GB)
  • md5:8c63dac2e37dea0bbfda13fa6602b264 (5.9 GB)
  • md5:7e0d925f93a89bc0e8715e4f6f7caecb (4.9 GB)
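
To verify a download against the checksums listed above, a minimal sketch is shown below; the archive name and expected checksum are placeholders to be matched against the file listing.

    import hashlib

    # Compute the MD5 of a (multi-GB) archive in chunks.
    def md5sum(path, chunk_size=1 << 20):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Replace with the archive you downloaded and its checksum from the list above.
    print(md5sum("SubPipe.zip") == "abf70702276c69c04c8774c7832f8550")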

Additional details

Funding

REMARO – Reliable AI for Marine Robotics (grant agreement No. 956200)
European Commission

References

  • M. Aubard et al., "Real-time automatic wall detection and localization based on side scan sonar images," in 2022 IEEE/OES Autonomous Underwater Vehicles Symposium (AUV), pp. 1–6, IEEE, 2022.
  • M. Cordts et al., "The cityscapes dataset for semantic urban scene understanding," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213–3223, 2016.
  • P. Drews-Jr et al., "Underwater image segmentation in the wild using deep learning," Journal of the Brazilian Computer Society, vol. 27, no. 1, pp. 1–14, 2021.
  • M. J. Islam et al., "Semantic segmentation of underwater imagery: Dataset and benchmark," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1769–1776, IEEE, 2020.
  • M. Ferrera et al., "Aqualoc: An underwater dataset for visual-inertial-pressure localization," The International Journal of Robotics Research, vol. 38, no. 14, pp. 1549–1559, 2019.
  • O. Alvarez Tunon et al., "MIMIR-UW: A multipurpose synthetic dataset for underwater navigation and inspection," 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023 (In publication).
  • M. Bernardi et al., "Aurora, a multi-sensor dataset for robotic ocean exploration," The International Journal of Robotics Research, p. 02783649221078612, 2022.
  • J. Hong et al., "Trashcan: A semantically-segmented dataset towards visual detection of marine debris," arXiv:2007.08097, 07 2020.
  • A. Mallios et al., "Underwater caves sonar data set," The International Journal of Robotics Research, vol. 36, no. 12, pp. 1247–1251, 2017.
  • E. Xie et al., "Segformer: Simple and efficient design for semantic segmentation with transformers," Advances in Neural Information Processing Systems, vol. 34, pp. 12077–12090, 2021.
  • L.-C. Chen et al., "Rethinking atrous convolution for semantic image segmentation," arXiv preprint arXiv:1706.05587, 2017.
  • C. Campos et al., "Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam," IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874–1890, 2021.
  • O. Alvarez Tunon et al., "Loss it right: Euclidean and riemannian metrics in learning-based visual odometry," 2023 International Symposium on Robotics (ISR), (In publication) 2023.
  • G. J. Brostow, J. Fauqueur, and R. Cipolla, "Semantic object classes in video: A high-definition ground truth database," Pattern Recognition Letters, vol. 30, no. 2, pp. 88–97, 2009. Video-based Object and Event Analysis.
  • LSTS, Faculdade de Engenharia da Universidade do Porto, "IMC navigation messages - estimated state." https://lsts.pt/docs/imc/master/Navigation.html#estimated-state, accessed 2024-01-24.
  • K. Wada, "Labelme: Image Polygonal Annotation with Python." https://github.com/wkentaro/labelme, 2016.
  • A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? the kitti vision benchmark suite," in 2012 IEEE conference on computer vision and pattern recognition, pp. 3354–3361, IEEE, 2012.
  • W. Wang, D. Zhu, X. Wang, Y. Hu, Y. Qiu, C. Wang, Y. Hu, A. Kapoor, and S. Scherer, "Tartanair: A dataset to push the limits of visual slam," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4909–4916, IEEE, 2020.
  • M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. W. Achtelik, and R. Siegwart, "The euroc micro aerial vehicle datasets," The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157–1163, 2016.
  • J. Engel, V. Koltun, and D. Cremers, "Direct sparse odometry," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 611–625, 2018.
  • W. Wang, Y. Hu, and S. Scherer, "Tartanvo: A generalizable learning-based VO," in Proceedings of the 2020 Conference on Robot Learning (J. Kober, F. Ramos, and C. Tomlin, eds.), vol. 155 of Proceedings of Machine Learning Research, pp. 1761–1772, PMLR, 16–18 Nov 2021.
  • D. Sun, X. Yang, M.-Y. Liu, and J. Kautz, "Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8934–8943, 2018.
  • K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," CoRR, vol. abs/1512.03385, 2015.
  • O. Álvarez-Tuñón, Y. Brodskiy, and E. Kayacan, "Monocular visual simultaneous localization and mapping: (r)evolution from geometry to deep learning-based pipelines," IEEE Transactions on Artificial Intelligence, 2023.
  • J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A benchmark for the evaluation of rgb-d slam systems," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 573–580, IEEE, 2012.
  • Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "YOLOX: exceeding YOLO series in 2021," CoRR, vol. abs/2107.08430, 2021.