EXPLAINABLE AI FOR FRUIT QUALITY CLASSIFICATION: A COMPARATIVE STUDY OF DEEP LEARNING AND ENSEMBLE METHODS ON IMAGE-DERIVED FEATURES
Authors/Creators
- 1. Suzhou University of Science and Technology, Jiangsu Province, China
- 2. Suzhou University of Science and Technology, Jiangsu Province, China
Description
This study presents an in-depth evaluation of explainable artificial intelligence (XAI) approaches for fruit quality classification, comparing deep learning (DL) baselines against ensemble methods trained on image-derived features. Because postharvest fruit losses are rising due to inadequate quality assessment, automated grading systems that are both accurate and interpretable are increasingly important. We deploy convolutional neural networks (CNNs) for direct image analysis alongside ensemble methods such as Random Forest, XGBoost, and LightGBM applied to structured, engineered features. The approach integrates state-of-the-art explainability tools, SHAP and Grad-CAM, to expose the decision-making processes of the models. The findings show that while CNNs exceed 99% accuracy on raw image classification, ensemble models, particularly LightGBM, achieve mean accuracies above 99.29% using engineered features. This investigation bridges the gap between model accuracy and interpretability and provides actionable recommendations for industry stakeholders, boosting confidence in AI-based fruit quality assessment tools. The results point to the potential of hybrid XAI pipelines to improve agricultural practice, promoting both efficiency and transparency across food supply chains.
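The pipeline described above, an ensemble classifier over engineered image features plus a model-agnostic explanation step, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the feature names and synthetic data are hypothetical, scikit-learn's Random Forest stands in for LightGBM, and permutation importance stands in for SHAP (the `shap` package would be swapped in to reproduce the paper's explanations).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for image-derived fruit features (color, texture, shape).
feature_names = ["mean_hue", "mean_saturation", "contrast", "entropy",
                 "area", "eccentricity", "defect_ratio", "edge_density"]

# Synthetic binary grading data (e.g. fresh vs. defective) in place of a real dataset.
X, y = make_classification(n_samples=600, n_features=len(feature_names),
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ensemble model on structured features.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# Model-agnostic explanation: which engineered features drive the grade prediction.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])

print(f"test accuracy: {accuracy:.3f}")
for name, score in ranked[:3]:
    print(f"{name}: {score:.3f}")
```

The same explanation step applies unchanged to a LightGBM or XGBoost model, which is what makes the accuracy/interpretability comparison in the study possible across model families.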
Files
- v13i1001.pdf (3.2 MB), md5:16b6979656e3fd3757f537596354e058