Get model performance metrics
```r
evaluate(x, ...)

# S3 method for predicted_df
evaluate(x, ...)

# S3 method for model_list
evaluate(x, ...)
```
| Argument | Description |
|---|---|
| x | Object to be evaluated |
| ... | Not used |
This function gets model performance from a model_list object that comes from machine_learn, tune_models, or flash_models, or from a data frame of predictions from predict.model_list. For the latter, the data passed to predict.model_list must contain observed outcomes. If you have predictions and outcomes in a different format, see evaluate_classification or evaluate_regression instead.
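If your predictions and observed outcomes are plain vectors rather than a model_list or a predictions data frame, evaluate_classification and evaluate_regression score them directly. Below is a minimal sketch that assumes these functions take a vector of predictions and a vector of observed outcomes as their first two arguments, with classification outcomes coded "Y"/"N"; check each function's own documentation for the exact interface.

```r
library(healthcareai)

# Hypothetical predicted probabilities and observed outcomes held as plain
# vectors (not produced by predict.model_list). Argument order and the
# "Y"/"N" outcome coding are assumptions; see ?evaluate_classification.
predicted_probs <- c(0.85, 0.20, 0.65, 0.10, 0.90)
observed        <- c("Y", "N", "N", "N", "Y")
evaluate_classification(predicted_probs, observed)

# Regression analogue: continuous predictions scored against observed values
predicted_values <- c(110, 95, 132, 150, 88)
observed_values  <- c(118, 90, 140, 139, 95)
evaluate_regression(predicted_values, observed_values)
```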
You may notice that evaluate(models) and evaluate(predict(models)) return slightly different performance metrics, even though they are calculated on the same (out-of-fold) predictions. This is because metrics in training (returned from evaluate(models)) are calculated within each cross-validation fold and then averaged, while metrics calculated on the prediction data frame (evaluate(predict(models))) are calculated once on all observations.
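To see why those two aggregations can disagree, here is a self-contained sketch in base R (hypothetical data, not healthcareai internals) that computes AUROC within each fold and averages the fold-level values, then computes AUROC once on the pooled out-of-fold predictions.

```r
# Simulated out-of-fold predictions spread across five folds (hypothetical data)
set.seed(42)
d <- data.frame(
  fold     = rep(1:5, each = 20),
  observed = rbinom(100, 1, 0.4)
)
d$predicted <- ifelse(d$observed == 1, rnorm(100, 0.6, 0.2), rnorm(100, 0.4, 0.2))

# AUROC via the rank-sum (Mann-Whitney) formulation
auroc <- function(pred, obs) {
  r <- rank(pred)
  n_pos <- sum(obs == 1)
  n_neg <- sum(obs == 0)
  (sum(r[obs == 1]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
}

# Computed within each fold, then averaged -- analogous to evaluate(models)
mean(sapply(split(d, d$fold), function(f) auroc(f$predicted, f$observed)))

# Computed once on all observations -- analogous to evaluate(predict(models))
auroc(d$predicted, d$observed)
```

Both numbers summarize the same predictions; they differ only in when the averaging happens.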
```r
models <- machine_learn(pima_diabetes[1:40, ], patient_id, outcome = diabetes,
                        models = "rf", tune_depth = 3)
evaluate(models)
#>   AUPR  AUROC 
#> 0.5495 0.6650 

predictions <- predict(models, newdata = pima_diabetes[41:50, ])
evaluate(predictions)
#>      AUPR     AUROC 
#> 0.5694444 0.9523810 
```