Software (Open Access)

iml: An R package for Interpretable Machine Learning

Molnar, Christoph

iml provides model-agnostic interpretability methods for analyzing the behavior and predictions of any machine learning model. The implemented methods are listed below, followed by a short usage sketch:

  • Feature importance, described by Fisher et al. (2018) <arXiv:1801.01489>
  • Partial dependence plots, described by Friedman (2001) <http://www.jstor.org/stable/2699986>
  • Individual conditional expectation (ICE) plots, described by Goldstein et al. (2015) <doi:10.1080/10618600.2014.907095>
  • Local surrogate models (a variant of LIME), described by Ribeiro et al. (2016) <arXiv:1602.04938>
  • Shapley values, described by Štrumbelj and Kononenko (2014) <doi:10.1007/s10115-013-0679-x>
  • Feature interactions, described by Friedman and Popescu (2008) <doi:10.1214/07-AOAS148>
  • Tree surrogate models
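
As a rough sketch of how these methods are used, the example below fits a random forest and applies a few of them through the package's Predictor wrapper. It assumes the current iml API (R6 classes such as Predictor, FeatureImp, FeatureEffect, and Shapley); constructor names have changed across package versions, so treat it as illustrative rather than the exact JOSS-era interface.

    # Minimal sketch, assuming the current iml API; names may differ by version.
    library("iml")
    library("randomForest")

    # Fit any model -- here a random forest on the Boston housing data.
    data("Boston", package = "MASS")
    rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

    # Wrap model and data in a Predictor object; all iml methods operate on it.
    X <- Boston[, names(Boston) != "medv"]
    predictor <- Predictor$new(rf, data = X, y = Boston$medv)

    # Permutation feature importance (Fisher et al. 2018).
    imp <- FeatureImp$new(predictor, loss = "mae")
    plot(imp)

    # Partial dependence plus ICE curves for a single feature.
    eff <- FeatureEffect$new(predictor, feature = "lstat", method = "pdp+ice")
    plot(eff)

    # Shapley values for a single observation (Strumbelj and Kononenko 2014).
    shap <- Shapley$new(predictor, x.interest = X[1, ])
    plot(shap)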

Files (537.5 kB)
  • iml-JOSS.tar.gz (537.5 kB), md5:de10f17f47d895bef3df78cfa62893ce
References

  • Biecek, Przemyslaw. 2018. DALEX: Descriptive mAchine Learning Explanations. https://CRAN.R-project.org/package=DALEX.
  • Choudhary, Pramit, Aaron Kramer, and the datascience.com team. 2018. "Skater: Model Interpretation Library." https://doi.org/10.5281/zenodo.1198885.
  • Fisher, Aaron, Cynthia Rudin, and Francesca Dominici. 2018. "Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the 'Rashomon' Perspective." http://arxiv.org/abs/1801.01489.
  • Friedman, Jerome H., and Bogdan E. Popescu. 2008. "Predictive Learning via Rule Ensembles." The Annals of Applied Statistics 2 (3): 916–54. https://doi.org/10.1214/07-AOAS148.
  • Friedman, Jerome H. 2001. "Greedy Function Approximation: A Gradient Boosting Machine." The Annals of Statistics 29 (5): 1189–1232. https://doi.org/10.1214/aos/1013203451.
  • Goldstein, Alex, Adam Kapelner, Justin Bleich, and Emil Pitkin. 2015. "Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation." Journal of Computational and Graphical Statistics 24 (1): 44–65. https://doi.org/10.1080/10618600.2014.907095.
  • Greenwell, Brandon M. 2017. "pdp: An R Package for Constructing Partial Dependence Plots." The R Journal 9 (1): 421–36. https://journal.r-project.org/archive/2017/RJ-2017-016/index.html.
  • Pedersen, Thomas Lin, and Michaël Benesty. 2017. lime: Local Interpretable Model-Agnostic Explanations. https://CRAN.R-project.org/package=lime.
  • R Core Team. 2016. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
  • Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. "'Why Should I Trust You?': Explaining the Predictions of Any Classifier." In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. ACM. https://doi.org/10.1145/2939672.2939778.
  • Štrumbelj, Erik, and Igor Kononenko. 2014. "Explaining Prediction Models and Individual Predictions with Feature Contributions." Knowledge and Information Systems 41 (3): 647–65. https://doi.org/10.1007/s10115-013-0679-x.
Statistics (all versions / this version)
  Views: 89 / 89
  Downloads: 3 / 3
  Data volume: 1.6 MB / 1.6 MB
  Unique views: 88 / 88
  Unique downloads: 3 / 3
