Enhanced Defect Detection in Airport Runway Infrastructure Using Image-Text Pairing
Description
Maintaining runway infrastructure is vital for air transport safety, as defects such as cracks and tire marks pose significant risks to take-off and landing. Researchers have proposed various methods for automatic detection of surface defects using computer vision and machine learning. However, these methods typically require explicitly annotated datasets, which demand substantial workload and field expertise. Additionally, the detection output usually follows the same low-level scheme as the training labels and requires post-processing to extract high-level semantic information, such as damage level estimation. In this work, we present a novel method for defect detection and damage severity estimation on runway surfaces, leveraging the Contrastive Language-Image Pre-training (CLIP) architecture for image-text pairing. Our model processes runway images and attaches text descriptions of the detected defects and their severity, identifying three defect types (crack, joint, and tire mark) and categorizing damage severity into three levels (low, medium, and high). Using natural language annotations simplifies the labeling process, eliminating the need for labor-intensive low-level image-based annotations. The model exploits the high-level natural language annotations for direct estimation of damage severity and delivers high-level semantic information to the end user as text, providing a comprehensive runway condition assessment tool. The proposed method demonstrates high performance across various test sets, offering a valuable human-centric approach to efficient defect detection and damage estimation on runway surfaces.
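The full method is described in the attached PDF. As a rough illustration of the image-text pairing idea, the sketch below scores a runway image against candidate defect/severity descriptions with a CLIP model. The prompt templates, the public openai/clip-vit-base-patch32 checkpoint, and the use of the Hugging Face transformers API are assumptions made for illustration, not the authors' exact setup.

```python
# Minimal sketch of CLIP image-text pairing for runway defect assessment.
# Assumption: a public CLIP checkpoint and hand-written prompt templates,
# not the model or annotations used in the paper.
from itertools import product

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate descriptions: every (defect type, severity) combination
# named in the abstract, phrased as a natural-language annotation.
defects = ["crack", "joint", "tire mark"]
severities = ["low", "medium", "high"]
prompts = [
    f"a runway surface with a {d} of {s} damage severity"
    for d, s in product(defects, severities)
]

image = Image.open("runway_patch.jpg")  # hypothetical input image
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text similarity for each prompt;
# the best-matching description doubles as the textual assessment.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
best = probs.argmax().item()
print(f"{prompts[best]} (score {probs[best]:.2f})")
```

The paper's model is trained with such natural-language annotations; the zero-shot scoring above only illustrates how image-text similarity turns a set of textual labels into a classifier that reports both defect type and severity in one step.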
Files
Name | Size | Checksum
---|---|---
CBMI_2024_Rev-2.pdf | 3.1 MB | md5:bcfe8ebbee64031e0b54b8fcd406f165
Additional details
Dates
- Accepted: 2024-07