Requires the future.apply package.
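Because the function parallelizes over permutations with future.apply, a parallel backend can be registered beforehand via the future package. A minimal sketch (the worker count is illustrative; without this, computation falls back to sequential evaluation):

```r
# Register a parallel backend so future.apply-based functions
# (such as this one) run permutations across multiple workers.
library(future)
plan(multisession, workers = 2)
```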

find_permuted_perf_metric(
  test_data,
  trained_model,
  outcome_colname,
  perf_metric_function,
  perf_metric_name,
  class_probs,
  feat,
  seed
)

Arguments

test_data

Held-out test data: data frame of the outcome and features.

trained_model

Trained model from caret (the output of caret::train()).

outcome_colname

Column name as a string of the outcome variable (default NULL; will be chosen automatically).

perf_metric_function

Function to calculate the performance metric to be used for cross-validation and test performance. Some functions are provided by caret (see defaultSummary). Defaults: binary classification = twoClassSummary, multi-class classification = multiClassSummary, regression = defaultSummary.

perf_metric_name

The column name from the output of the function provided to perf_metric_function that is to be used as the performance metric. Defaults: binary classification = "ROC", multi-class classification = "logLoss", regression = "RMSE".

class_probs

Whether to use class probabilities.

feat

Feature or group of correlated features to permute.

seed

Random seed (default: NA). Your results will be reproducible if you set a seed.
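A perf_metric_function follows caret's summary-function contract: it takes a data frame with obs (observed) and pred (predicted) columns, plus optional lev and model arguments, and returns a named numeric vector. A minimal custom example (the accuracy metric and data here are invented for illustration; in practice you would pass one of the caret defaults listed above):

```r
# A custom performance metric in caret's summary-function form.
# `data` must contain `obs` (truth) and `pred` (predictions).
acc_summary <- function(data, lev = NULL, model = NULL) {
  c(Accuracy = mean(data$obs == data$pred))
}

d <- data.frame(obs  = c("a", "b", "a", "b"),
                pred = c("a", "b", "b", "b"))
acc_summary(d)  # Accuracy = 0.75
```

The name of the returned element ("Accuracy" here) is what you would supply as perf_metric_name.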

Value

Vector of the mean permuted performance metric value and the mean difference between the test and permuted performance metric values.
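The underlying idea can be sketched in base R: permute the feature of interest, recompute the metric, and compare it to the unpermuted test performance. This is a minimal illustration of permutation importance, not mikropml's implementation; the data, stand-in model, and accuracy metric are invented for the example:

```r
set.seed(2019)

# Toy data: the outcome depends on x1 but not x2
n <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- ifelse(x1 + rnorm(n, sd = 0.5) > 0, "pos", "neg")
test_data <- data.frame(y = factor(y), x1 = x1, x2 = x2)

# Stand-in "trained model": predict "pos" whenever x1 > 0
predict_fn <- function(df) ifelse(df$x1 > 0, "pos", "neg")
accuracy   <- function(df) mean(predict_fn(df) == df$y)

test_perf <- accuracy(test_data)

# Permute one feature and recompute the performance metric
permute_perf <- function(df, feat) {
  df[[feat]] <- sample(df[[feat]])
  accuracy(df)
}

perms <- replicate(100, permute_perf(test_data, "x1"))
c(mean_perm_perf = mean(perms),
  mean_diff      = test_perf - mean(perms))
```

Permuting the informative feature x1 degrades accuracy substantially (a large mean difference), whereas permuting the uninformative x2 would leave it essentially unchanged.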

Author

Begüm Topçuoğlu, topcuoglu.begum@gmail.com

Zena Lapp, zenalapp@umich.edu