Okay, not a perfect score but good enough for me – right now, I’m more interested in the explanations of the model’s predictions. For this, we need to run the `lime()` function and give it:
- the text input that was used to construct the model
- the trained model
- the preprocessing function
```r
explainer <- lime(clothing_reviews_train$text, xgb_model, preprocess = get_matrix)
```
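The `get_matrix` preprocessing function isn’t shown here; as a rough sketch (my assumption, not necessarily how it was implemented in the original post), it could tokenize the raw text with text2vec and return the same kind of document-term matrix the xgboost model was trained on:

```r
library(text2vec)

# Hypothetical sketch of the preprocessing step: turn raw character
# vectors into the document-term matrix format the xgboost model expects.
get_matrix <- function(text) {
  it <- itoken(text, progressbar = FALSE)          # tokenize the input strings
  create_dtm(it, vectorizer = hash_vectorizer())   # hashed document-term matrix
}
```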
With this, we can right away call the interactive explainer Shiny app, where we can type any text we want into the field on the left and see the explanation on the right: words highlighted in green support the classification, while words highlighted in red contradict it.
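If I remember the lime API correctly, launching that Shiny app is a one-liner using the explainer object created above:

```r
# Start the interactive text explainer Shiny app from the lime package
interactive_text_explanations(explainer)
```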
I hadn’t used LIME for this before, and it looks very interesting. H/T R-Bloggers