interpret: Fit Interpretable Machine Learning Models

Package for training interpretable machine learning models. Historically, the most interpretable machine learning models were not very accurate, and the most accurate models were not very interpretable. Microsoft Research has developed an algorithm called the Explainable Boosting Machine (EBM) that is both highly accurate and interpretable. EBM uses machine learning techniques like bagging and boosting to breathe new life into traditional GAMs (Generalized Additive Models). This makes them as accurate as random forests and gradient boosted trees, while also enhancing their intelligibility and editability. Details on the EBM algorithm can be found in the paper by Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad (2015, <doi:10.1145/2783258.2788613>).
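As a rough sketch of how the package might be used, the snippet below fits an EBM classifier on a binary target and inspects one learned shape function. The function names (`ebm_classify`, `ebm_predict_proba`, `ebm_show`) are assumptions based on the package's reference manual; consult interpret.pdf for the exact signatures and arguments.

```r
# A minimal sketch, assuming the interpret package exports ebm_classify,
# ebm_predict_proba, and ebm_show as listed in its reference manual.
library(interpret)

data(iris)
X <- iris[, c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width")]
y <- as.numeric(iris$Species == "setosa")   # binary 0/1 target

ebm <- ebm_classify(X, y)             # fit one shape function per feature (a GAM)
probs <- ebm_predict_proba(ebm, X)    # predicted class probabilities
ebm_show(ebm, "Petal.Length")         # plot the learned per-feature contribution
```

Because an EBM is additive, each feature's contribution can be plotted and, if needed, edited independently, which is what the description above means by intelligibility and editability.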

Version: 0.1.26
Depends: R (≥ 3.0.0)
Published: 2020-10-12
Author: Samuel Jenkins [aut], Harsha Nori [aut], Paul Koch [aut], Rich Caruana [aut, cre], Microsoft Corporation [cph]
Maintainer: Rich Caruana <interpretml at outlook.com>
BugReports: https://github.com/interpretml/interpret/issues
License: MIT + file LICENSE
URL: https://github.com/interpretml/interpret
NeedsCompilation: yes
SystemRequirements: C++11
CRAN checks: interpret results

Downloads:

Reference manual: interpret.pdf
Package source: interpret_0.1.26.tar.gz
Windows binaries: r-devel: interpret_0.1.26.zip, r-release: interpret_0.1.26.zip, r-oldrel: interpret_0.1.26.zip
macOS binaries: r-release: interpret_0.1.26.tgz, r-oldrel: interpret_0.1.26.tgz
Old sources: interpret archive

Linking:

Please use the canonical form https://CRAN.R-project.org/package=interpret to link to this page.