datascienceinc/Skater: Enable Interpretability via Rule Extraction (BRL)
Description
Until now, Skater has been an interpretation engine for post-hoc model evaluation and interpretation. With this PR, Skater begins its journey toward supporting natively interpretable models. Rule-list algorithms are popular in the interpretable-models space because the trained models are represented as simple decision lists. In this release, we add support for Bayesian Rule Lists (BRL). The probabilistic classifier (estimating P(Y=1|X) for each X) optimizes the posterior of a Bayesian hierarchical model over the pre-mined rules.
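To illustrate the idea (this is a minimal sketch, not Skater's implementation): a trained decision list is an ordered set of pre-mined rules, and P(Y=1|X) comes from the first rule whose condition matches, falling back to a default rule. The rule conditions and probabilities below are illustrative assumptions, not learned values.

```python
# Sketch of how a decision-list classifier yields P(Y=1 | X):
# rules are evaluated in order, and the first matching rule's
# probability estimate is returned; a default rule catches the rest.

def predict_proba(x, rule_list, default_p):
    """Walk the ordered rule list; return P(Y=1|x) from the first matching rule."""
    for condition, p in rule_list:
        if condition(x):
            return p
    return default_p

# Hypothetical pre-mined rules for a diabetes-style dataset
rules = [
    (lambda x: x["glucose"] > 150, 0.85),  # IF glucose > 150 THEN P(Y=1) = 0.85
    (lambda x: x["bmi"] > 35, 0.60),       # ELSE IF bmi > 35 THEN P(Y=1) = 0.60
]

print(predict_proba({"glucose": 160, "bmi": 30}, rules, 0.10))  # 0.85 (first rule fires)
print(predict_proba({"glucose": 120, "bmi": 30}, rules, 0.10))  # 0.10 (default rule)
```

The ordered IF/ELSE-IF structure is what makes the model directly readable: each prediction can be traced to exactly one rule.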
Usage Example:
```python
from skater.core.global_interpretation.interpretable_models.brlc import BRLC
import pandas as pd
from sklearn.datasets.mldata import fetch_mldata
from sklearn.model_selection import train_test_split

input_df = fetch_mldata("diabetes")
...
Xtrain, Xtest, ytrain, ytest = train_test_split(input_df, y, test_size=0.20, random_state=0)

sbrl_model = BRLC(min_rule_len=1, max_rule_len=10, iterations=10000, n_chains=20, drop_features=True)
# Train a model; the discretizer is enabled by default. If you wish to exclude
# features from discretization, list them in the undiscretize_feature_list parameter.
model = sbrl_model.fit(Xtrain, ytrain, bin_labels="default")
```
- Other minor bug fixes and documentation updates
Files

Name | Size | md5
---|---|---
datascienceinc/Skater-v1.1.0-b1.zip | 24.3 MB | c55cd7ca5a914124177d1f1be3097339
Additional details
Related works
- Is supplement to: https://github.com/datascienceinc/Skater/tree/v1.1.0-b1 (URL)