On the Use of Robots and Vision Technologies for the Inspection of Vessels: a Survey on Recent Advances

Vessels are widely used for transporting goods around the world. All cargo vessels are affected by two main defective situations, namely cracks and corrosion. To prevent major damage/accidents, intensive inspection schemes must be carried out periodically, identifying the affected plates for subsequent repair/replacement. These inspections are performed at great cost due to the arrangements needed to allow human inspectors to reach any point of the vessel structure while guaranteeing their physical integrity and respecting all the stipulated safety measures. Technological advances can provide alternatives to facilitate vessel inspection and reduce the associated cost. This paper surveys approaches which can contribute to the reengineering of vessel visual inspection, focusing on two main aspects: robotic platforms which can be used for the visual inspection of vessels, and computer vision algorithms for the detection of cracks and/or corrosion in images. The different approaches found in the literature are reviewed and classified according to their key features, which makes it possible to identify both the main trends applied so far and those which could improve current visual inspection practice.

The rest of this section provides a horizontal view of the different sensors and techniques applied to visual inspection using MAVs. We focus on those approaches that go beyond teleoperation and/or pure GPS-based positioning. The aim is not to be exhaustive, but to show the different trends.

A widely used sensor is the Light Detection and Ranging (LiDAR) device, also known as a laser scanner. The use of this sensor, inherited from ground robotics, allows MAVs to perform positioning (and sometimes mapping), while a camera is typically used for the inspection task. Combining LiDAR with GPS and IMU data, Serrano (2011) proposes using a MAV for culvert inspection. The idea is to operate the robot outdoors, taking off from a military vehicle, and positioning the MAV in front of the culvert entrance, making intensive use of GPS data. To perform the inspection inside the culvert, where the GPS signal is probably not received, the system estimates the MAV state by combining the data provided by the LiDAR sensor with IMU and GPS data within an EKF. Then, the operator can use a Pan-Tilt-Zoom (PTZ) camera to perform the inspection.

framework to test the distance and yaw controllers in simulation, prior to using them in real-world flights. One of the main drawbacks of using laser scanners in aerial robotics is their relatively heavy weight and ele-

inspection and 3D reconstruction of underground mines. In this approach, a pilot manually operates the MAV through the mine to record sensor data. This is post-processed in order to check the feasibility of flying autonomously with the proposed system and sensors. The experiments performed allow concluding that the vehicle has to be protected from dust and water to operate inside mines, which will increase the platform weight and decrease its autonomy.

Furthermore, due to the lack of a wireless communication system able to operate throughout an entire mine, the vehicle has to be autonomous, detecting problems and deciding by itself which solution it should follow.

Position-Based Visual Servoing (PBVS) using an EKF and an estimator-free Image-Based Visual Servoing (IBVS).
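To give a flavour of the estimator-free IBVS idea mentioned above, the following is a minimal single-point sketch: the camera velocity is computed directly from the image-space error through the interaction matrix, with no pose estimation. Feature coordinates, the depth Z and the gain are hypothetical illustration values; a real controller stacks several features and includes the rotational part of the interaction matrix.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Translational part of the interaction matrix of an image point
    # (x, y) in normalized coordinates, observed at depth Z.
    return np.array([[-1.0 / Z, 0.0, x / Z],
                     [0.0, -1.0 / Z, y / Z]])

s = np.array([0.2, 0.1])         # current feature position (normalized)
s_star = np.array([0.0, 0.0])    # desired feature position
e = s - s_star                   # image-space error
L = interaction_matrix(*s, Z=2.0)
lam = 0.5                        # control gain

# Classical least-squares IBVS law: v = -lam * pinv(L) @ e, which makes
# the feature error decay exponentially (e_dot = -lam * e).
v = -lam * np.linalg.pinv(L) @ e
```

Because no robot pose is estimated, the scheme avoids the EKF of the PBVS variant, at the cost of requiring the feature depth Z (or an approximation of it).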

An additional contribution is the use of shared autonomy to permit an unskilled operator to easily and safely perform the inspection using a MAV. The system, which makes use of monocular visual features (lines) and inertial data for pole-relative navigation, is in charge of maintaining a safe distance and rejecting environmental disturbances, such as wind gusts.

The authors discuss the potential of this setup for inspecting structures such as bridges from the underside. This approach presents the dynamic model of the entire system, the non-linear controller implemented, and the first flight experiments performed under a bridge, contacting its surface with a sensor head located at the arm. The authors also propose a contact-based inspection planner which computes the optimal route through the waypoints while avoiding any obstacles or other occupied zones on the environmental surface. The resulting MAV is able to perform complex contact-based tasks, e.g. "aerial writing" or interactions with non-planar surfaces. This approach has been validated using pose estimates from a motion capture system, while its performance using on-board sensors (such as cameras or LiDARs) has not been evaluated yet.
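The route-computation step of such a contact-based planner can be illustrated with a toy nearest-neighbour ordering of inspection waypoints. This is only a sketch of the idea: the waypoints are hypothetical, and the actual planner additionally handles obstacles and occupied surface zones, which this greedy heuristic ignores.

```python
import numpy as np

def order_waypoints(points, start=0):
    """Greedy nearest-neighbour visiting order over 2-D waypoints."""
    remaining = list(range(len(points)))
    route = [remaining.pop(start)]
    while remaining:
        last = points[route[-1]]
        # Always fly to the closest not-yet-visited waypoint.
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        route.append(nxt)
        remaining.remove(nxt)
    return route

# Hypothetical inspection waypoints on a planar surface (metres).
pts = np.array([[0.0, 0.0], [5.0, 5.0], [1.0, 0.0], [1.0, 1.0]])
route = order_waypoints(pts)
```

A greedy heuristic like this gives no optimality guarantee; planners that aim for the optimal route typically solve a travelling-salesman-style problem over the waypoint graph.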

Also related to contact-based inspection, Cacace et al. (2015) propose a high-level control system to allow a UAV to autonomously perform complex tasks in close, physical interaction with the environment. This system combines hierarchical task decomposition, mixed-initiative control and path planning techniques to allow reactivity and sliding autonomy. The approach is evaluated in a physical inspection task and in a visual inspection task, both performed under laboratory conditions.
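The hierarchical task decomposition idea can be sketched as a nested task tree whose leaves are primitive actions, executed depth-first. The task names and structure below are hypothetical, purely to illustrate the mechanism, and are not taken from Cacace et al. (2015).

```python
def execute(task, primitives):
    """Depth-first execution of a nested (name, children) task tree.

    A leaf runs its primitive action; an inner node succeeds only if
    all of its sub-tasks succeed, in order.
    """
    name, children = task
    if not children:
        return primitives[name]()
    return all(execute(child, primitives) for child in children)

# Hypothetical decomposition of an inspection task into primitives.
inspect = ("inspect_site", [
    ("approach", []),
    ("hold_position", []),
    ("capture_image", []),
])

log = []
primitives = {n: (lambda n=n: log.append(n) or True)
              for n in ["approach", "hold_position", "capture_image"]}
ok = execute(inspect, primitives)
```

Sliding autonomy then amounts to letting the operator take over (or confirm) individual sub-tasks while the rest of the tree keeps running autonomously.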

As with the last-mentioned approaches, some works focus on issues such as the control strategy, task

This is based on the use of the Supervised Autonomy paradigm, which allows the user/pilot to concentrate on the task at hand, issuing displacement commands using a gamepad or a joystick, while the platform is in charge of all the safety-related matters, such as obstacle avoidance. Within this framework, the control pipeline does not require the estimation of the robot position, which may be difficult to obtain accurately, but only the estimation of its velocity (in the three axes) and height. To estimate the vehicle speed, two optical flow sensors are employed, one looking downwards and the other looking forward, which respectively supply velocity estimations with regard to the floor and to the inspected wall. The flight height is estimated using a laser altimeter. Furthermore, the control architecture implements a set of robotic behaviours in charge of increasing the platform autonomy during the operation (e.g. the go-ahead behaviour makes the vehicle track the indicated speed until an obstacle is reached or the user issues a different displacement command). It is worth mentioning that images collected using such a device are later processed in search of corrosion.

through a ship to aid in fire control. To be precise, the vehicle is able to navigate in dark environments (potentially full of smoke) looking for fires, measuring heat by means of a thermal camera and locating any personnel along the way.

To do that, this approach combines an odometry estimation method using depth images provided by an RGB-D camera with inertial data from an IMU. The result is later introduced into a particle filter to perform real-time localization in a given 3D map. Furthermore, the authors discuss a motion-planning method for computing a collision-free trajectory for navigating in narrow and/or dynamic environments. The entire framework is tested both inside their laboratory and in a constrained shipboard environment.
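The predict/weight/resample cycle at the heart of such a particle filter can be sketched in one dimension. Everything here is a toy stand-in: a 1-D corridor with a single known landmark, hypothetical noise levels, and an odometry increment playing the role of the depth-image odometry; the surveyed system performs the same cycle against a full 3-D map.

```python
import numpy as np

rng = np.random.default_rng(42)

def pf_step(particles, odom, z, landmark, motion_std=0.1, meas_std=0.2):
    # 1. Predict: propagate every particle with the noisy odometry step.
    particles = particles + odom + rng.normal(0.0, motion_std, particles.size)
    # 2. Weight: likelihood of the measured signed offset to the landmark.
    w = np.exp(-0.5 * ((z - (landmark - particles)) / meas_std) ** 2)
    w /= w.sum()
    # 3. Resample particles in proportion to their weights.
    return particles[rng.choice(particles.size, size=particles.size, p=w)]

particles = rng.uniform(0.0, 10.0, 500)   # position initially unknown
true_pos = 2.0
for _ in range(20):
    true_pos += 0.1                        # the robot moves forward
    z = 5.0 - true_pos                     # offset to the landmark at x = 5
    particles = pf_step(particles, odom=0.1, z=z, landmark=5.0)
estimate = particles.mean()                # converges near the true position
```

The same structure scales to 6-DoF localization by replacing the scalar state with a pose and the landmark offset with a depth-image likelihood against the map.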

Vision-based Defect Detection Algorithms
Visual inspection is one of the predominant methods used in quality/integrity assessment procedures. It is a subjective process that relies on an inspector's experience and mental focus, making it highly prone to human error.

The development of automated inspection technology can overcome these shortcomings.

Previous approaches to automatic vision-based defect detection can be roughly classified into two broad categories.

On the one hand, there are many contributions on industrial inspection and quality control; that is to say, algorithms

The authors conclude that further work has to be done to reduce the image noise prior to using HOG.

Notice that the appearance of a crack (length, depth, shape, etc.) can be very different from one surface or material to another. For example, a crack that can be found inside a building after an earthquake is very different from the micro-fissures that sometimes arise in an aircraft wing. Furthermore, control of the camera-surface distance is crucial to know how big the cracks will appear in the images and, therefore, how to configure the algorithm

The authors propose using the Peak Signal-to-Noise Ratio (PSNR) to select among the different filters, so that the most suitable one is applied depending on the specific lighting conditions. The results show that the Bayer filter provides the highest PSNR value for the majority of the images.
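The PSNR criterion used above to rank the candidate filters is straightforward to compute: the filter whose output stays closest to the reference image (lowest mean squared error) scores highest. The images and noise level in this sketch are synthetic, for illustration only.

```python
import numpy as np

def psnr(reference, filtered, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    diff = np.asarray(reference, float) - np.asarray(filtered, float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32)).astype(float)     # reference image
noisy = np.clip(img + rng.normal(0.0, 5.0, img.shape), 0.0, 255.0)
value = psnr(img, noisy)
```

Selecting among filters then reduces to applying each candidate and keeping the one with the highest PSNR against the reference.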

Another method that does not use any machine learning process is provided by Roberts (2016). As seen in Section 2.2, this approach makes use of a UAV for corrosion detection. The detection algorithm consists of a simple colour-thresholding method in the HSV colour space. Nevertheless, the author provides only qualitative results and indicates that using some texture measure would probably improve the performance.

This paper reviews a number of contributions from fields related to the robotization of ship inspection. In the first part, we differentiate between robotic platforms for underwater inspection and those intended for inspecting above the water line (actually, these platforms can be used to inspect the entire vessel hull if it is situated in a dry-dock).
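A colour-threshold corrosion detector of the kind described for Roberts (2016) can be sketched as follows: convert RGB to HSV and keep reddish-brown, saturated, darkish pixels. The hue/saturation/value bounds here are hypothetical illustration values, not the thresholds of the original work.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB [0,1] -> (H, S, V) with H in [0,1)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(-1)
    c = v - img.min(-1)                               # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0)
    safe_c = np.where(c > 0, c, 1)                    # avoid div-by-zero
    h = np.zeros_like(v)
    h = np.where(v == r, (g - b) / safe_c % 6, h)
    h = np.where(v == g, (b - r) / safe_c + 2, h)
    h = np.where(v == b, (r - g) / safe_c + 4, h)
    h = np.where(c > 0, h / 6, 0)
    return h, s, v

def rust_mask(img, h_max=0.10, s_min=0.4, v_max=0.8):
    """True where a pixel looks reddish-brown and saturated (rust-like)."""
    h, s, v = rgb_to_hsv(img)
    return (h <= h_max) & (s >= s_min) & (v <= v_max)

# Toy 1x2 image: one rust-coloured pixel, one grey pixel.
img = np.array([[[0.5, 0.2, 0.1], [0.6, 0.6, 0.6]]])
mask = rust_mask(img)
```

As the author himself notes, pure colour thresholding is fragile (shadows and brown paint also pass), which is why adding a texture measure is suggested as future work.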