Christoph Molnar shows off a couple of R packages which help interpret ML models:

Machine learning models repeatedly outperform interpretable, parametric models like the linear regression model. The gains in performance have a price: the models operate as black boxes which are not interpretable.

Fortunately, there are many methods that can make machine learning models interpretable. The R package `iml` provides tools for analysing any black box machine learning model:

- Feature importance: Which were the most important features?
- Feature effects: How does a feature influence the prediction? (Partial dependence plots and individual conditional expectation curves)
- Explanations for single predictions: How did the feature values of a single data point affect its prediction? (LIME and Shapley value)
- Surrogate trees: Can we approximate the underlying black box model with a short decision tree?
The `iml` package works for any classification and regression machine learning model: random forests, linear models, neural networks, xgboost, etc.
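As a rough sketch of how these pieces fit together, the snippet below wraps a model in `iml`'s `Predictor` object and then applies each of the tools mentioned above. It assumes the `iml` and `randomForest` packages are installed; the Boston housing data (from `MASS`) and the chosen feature are illustrative, not from the original post.

```r
library(randomForest)
library(iml)

# Fit any black box model -- here, a random forest on the Boston housing data.
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap the model in a Predictor object; all iml tools work off this wrapper.
X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Feature importance: drop in performance when a feature is permuted.
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Feature effects: partial dependence / ICE curves for one feature.
eff <- FeatureEffect$new(predictor, feature = "lstat")
plot(eff)

# Shapley values: explain a single prediction.
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)

# Surrogate tree: approximate the forest with a shallow decision tree.
tree <- TreeSurrogate$new(predictor, maxdepth = 2)
plot(tree)
```

The same `predictor` object feeds every analysis, which is what lets `iml` stay model-agnostic.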

This is a must-read if you’re getting into model-building. H/T R-Bloggers
