Published August 25, 2022 | Version v1
Conference paper (Open Access)

When a CBR in Hand is Better than Twins in the Bush

  • 1. Mälardalen University, Västerås, Sweden
  • 2. Drexel University, Philadelphia, PA, 19802, USA


AI methods described as interpretable are often dismissed as inaccurate by proponents of a trade-off between interpretability and accuracy. In many problem contexts, however, this trade-off does not hold. This paper discusses a regression problem context, predicting flight take-off delays, in which the most accurate regression model was trained via the XGBoost implementation of gradient-boosted decision trees. After building an XGB-CBR twin and converting the XGBoost feature importances into global weights in the CBR model, the resulting CBR model alone provides the most accurate local predictions, retains the global importances to provide a global explanation of the model, and offers the most interpretable representation for local explanations. This CBR model thus becomes a benchmark of accuracy and interpretability for this problem context, and it is therefore used to evaluate SHAP and LIME, two additive feature attribution methods, as explanations of the XGBoost regression model. The results with respect to local accuracy and feature attribution point to potentially valuable future work.


