Does the evaluation stand up to evaluation? A first-principle approach to the evaluation of classifiers
Description
How can one meaningfully make a measurement if the meter does not conform to any standard and its scale expands or shrinks depending on what is measured? In the present work it is argued that current evaluation practices for machine-learning classifiers suffer from exactly this kind of problem, leading to negative consequences when classifiers are put to real use: consequences that could have been avoided. It is proposed that evaluation be grounded in Decision Theory, and the implications of such a foundation are explored. The main result is that every evaluation metric must be a linear combination of confusion-matrix elements, with coefficients ('utilities') that depend on the specific classification problem. For binary classification, the space of such possible metrics is effectively two-dimensional. It is shown that popular metrics such as precision, balanced accuracy, Matthews Correlation Coefficient, Fowlkes-Mallows index, F1-measure, and Area Under the Curve are never optimal: they always give rise to an in-principle avoidable fraction of incorrect evaluations. This fraction is even larger than the one that would be caused by using a decision-theoretic metric with moderately wrong coefficients.
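To make the central claim concrete, the following is a minimal sketch (not the paper's code) of a utility-based metric: a linear combination of the four confusion-matrix counts, averaged per classification, shown next to F1 for contrast. The utility coefficients and the toy labels below are hypothetical placeholders; in practice the coefficients must come from the specific classification problem.

```python
# Sketch: a decision-theoretic evaluation metric as a linear combination of
# confusion-matrix elements, with problem-specific utility coefficients.
# The utility values and example labels here are illustrative only.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels coded as 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def average_utility(y_true, y_pred, u_tp, u_fp, u_fn, u_tn):
    """Average utility per classification: a linear combination of the
    confusion-matrix counts with problem-specific coefficients."""
    tp, fp, fn, tn = confusion_counts(y_true, y_pred)
    n = tp + fp + fn + tn
    return (u_tp * tp + u_fp * fp + u_fn * fn + u_tn * tn) / n

def f1_score(y_true, y_pred):
    """F1, a non-linear function of the confusion matrix, for comparison."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Hypothetical utilities: e.g. a missed positive costs more than a false
# alarm. Different problems would call for different coefficients.
utils = dict(u_tp=1.0, u_fp=-0.2, u_fn=-1.0, u_tn=0.1)

y_true   = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred_a = [1, 0, 1, 0, 0, 1, 1, 0]
y_pred_b = [1, 1, 1, 1, 0, 0, 1, 0]

for name, y_pred in [("classifier A", y_pred_a), ("classifier B", y_pred_b)]:
    print(name,
          "average utility:", round(average_utility(y_true, y_pred, **utils), 3),
          "F1:", round(f1_score(y_true, y_pred), 3))
```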
Files
| Name | Size |
|---|---|
| dyrland_lundervold_portamana_2022-DecisionTheoryClassifiers.pdf (md5:223a0e1c542e996e8b9157e450172d04) | 2.3 MB |
Additional details
Related works
- Is continued by: Preprint 10.31219/osf.io/vct9y (DOI)
- Is supplemented by: Dataset 10.17605/osf.io/mfz5w (DOI)