Poisoning Object Detection Models for Surface Defect Inspection in Steel Manufacturing
Description
As Machine Learning (ML) becomes integral to automated quality assurance, the security of ML models emerges as a critical concern for manufacturing processes. Among the threats posed by adversarial machine learning attacks, data poisoning (the corruption of training data to introduce malicious behavior into ML models) represents the most concerning ML-related security risk to the industry.
This paper investigates the vulnerability of ML-based quality assurance systems to data poisoning attacks in manufacturing, with steel surface defect inspection as a use case. Using a popular object detection model trained on an industrial steel manufacturing image dataset, we evaluate two data poisoning approaches: 1) image poisoning and 2) label poisoning, targeting three adversarial objectives: a) misclassification of defect criticality, b) erroneous size estimation, and c) missed defect detection. Our experiments show that label poisoning is a serious threat to the accuracy of steel defect inspection, potentially leading to significant misestimation of defect size and defect criticality even when less than 12% of the training data is compromised. In contrast, we show that image poisoning has little impact on the accuracy of steel defect inspection even when more than half of the samples in a class are poisoned.
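The label-poisoning side of the attack surface described above can be illustrated with a minimal sketch. The snippet below assumes COCO-style annotations (dicts with a `category_id` and a `bbox` of `[x, y, w, h]`); all function names, parameters, and class IDs are illustrative, not taken from the paper. Two of the three adversarial objectives are shown: flipping criticality labels and shrinking bounding boxes to corrupt size estimation.

```python
import random

def label_poison(annotations, fraction, target_id, decoy_id, seed=0):
    """Flip a given fraction of `target_id` labels to `decoy_id`
    (the defect-criticality misclassification objective)."""
    rng = random.Random(seed)
    # Copy annotations so the clean training set is left untouched.
    poisoned = [dict(a) for a in annotations]
    targets = [a for a in poisoned if a["category_id"] == target_id]
    for a in rng.sample(targets, int(len(targets) * fraction)):
        a["category_id"] = decoy_id
    return poisoned

def box_poison(annotations, fraction, scale, seed=0):
    """Rescale a given fraction of bounding boxes by `scale`
    (the erroneous size-estimation objective)."""
    rng = random.Random(seed)
    poisoned = [dict(a) for a in annotations]
    for a in rng.sample(poisoned, int(len(poisoned) * fraction)):
        x, y, w, h = a["bbox"]
        a["bbox"] = [x, y, w * scale, h * scale]
    return poisoned
```

The third objective, missed defect detection, would amount to deleting a fraction of the annotations entirely. In all cases only the training labels change; the images themselves are untouched, which is what distinguishes label poisoning from the image-poisoning approach evaluated in the paper.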
Files

| Name | Size |
|---|---|
| root.pdf (md5:b81a6c505a3185acae36fcc326be4bdf) | 1.2 MB |