Interpretable Machine Learning

1. Permutation Importance

This concept of measuring how much each feature contributes to a model's predictions is called feature importance, and Permutation Importance is a widely used technique for calculating it.

It helps us to see when our model produces counterintuitive results, and it helps to show others when our model is working as we'd hope.

Permutation Importance works for many scikit-learn estimators.

The idea is simple: randomly permute (shuffle) a single column in the validation dataset, leaving all the other columns intact.

A feature is considered "important" if shuffling its values makes the model's accuracy drop a lot, i.e., causes a large increase in error.

On the other hand, a feature is considered "unimportant" if shuffling its values doesn't affect the model's accuracy.
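As a minimal sketch of this idea (the eli5 library used below does this for us; the helper function and its n_repeats parameter here are purely illustrative, and a fitted scikit-learn classifier with a pandas validation set val_X, val_y is assumed):

import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(model, X, y, n_repeats=5, seed=1):
    rng = np.random.RandomState(seed)
    # Accuracy on the untouched validation data
    baseline = accuracy_score(y, model.predict(X))
    importances = {}
    for col in X.columns:
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            # Shuffle a single column, leaving all the other columns intact
            X_shuffled[col] = rng.permutation(X_shuffled[col].values)
            drops.append(baseline - accuracy_score(y, model.predict(X_shuffled)))
        # Mean accuracy drop across reshuffles = importance of this feature
        importances[col] = np.mean(drops)
    return importances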

Working

Consider a model that predicts whether a soccer team will have a "Man of the Game" winner or not based on certain parameters.

The player who demonstrates the best play is awarded this title.

Permutation importance is calculated after a model has been fitted.

So, let's train a RandomForestClassifier model, denoted my_model, on the training data.
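A minimal sketch of that setup (the file name, column names, and split below are assumptions made for illustration; only my_model, val_X, val_y and feature_names are reused in the later snippets):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical file of per-match statistics with a "Man of the Match" column
data = pd.read_csv('match_statistics.csv')
y = (data['Man of the Match'] == 'Yes')                                  # assumed binary target
feature_names = [c for c in data.columns if data[c].dtype == 'int64']    # assumed numeric features
X = data[feature_names]

train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
my_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(train_X, train_y)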

Permutation Importance is calculated using the ELI5 library.

ELI5 is a Python library which allows you to visualize and debug various machine learning models using a unified API.

It has built-in support for several ML frameworks and provides a way to explain black-box models.

Calculating and displaying importance using the eli5 library (here val_X and val_y denote the validation features and labels respectively):

import eli5
from eli5.sklearn import PermutationImportance

perm = PermutationImportance(my_model, random_state=1).fit(val_X, val_y)
eli5.show_weights(perm, feature_names=val_X.columns.tolist())

Interpretation

The features at the top are most important and those at the bottom, the least.

For this example, goals scored was the most important feature.

The number after the ± measures how performance varied from one reshuffling to the next.

Some weights are negative.

This is because, in those cases, predictions on the shuffled data happened to be slightly more accurate than on the real data, which can occur by chance when a feature has little real importance.

Practice

And now, for the complete example and to test your understanding, go to the Kaggle page by clicking the link below.

2. Partial Dependence Plots

The partial dependence plot (PDP or PD plot for short) shows the marginal effect one or two features have on the predicted outcome of a machine learning model (J. H. Friedman, 2001).

PDPs show how a feature affects predictions.

A PDP can show the relationship between the target and the selected features via 1D or 2D plots.

Working

PDPs are also calculated after a model has been fit.

In the soccer problem that we discussed above, there were a lot of features like passes made, shots taken, goals scored etc.

We start by considering a single row.

Say the row represents a team that had the ball 50% of the time, made 100 passes, took 10 shots and scored 1 goal.

We proceed by fitting our model and calculating the probability of a team having a player that won the "Man of the Game" award, which is our target variable.

Next, we would choose a variable and repeatedly alter its value.

For instance, we will calculate the outcome if the team scored 1 goal, 2 goals, 3 goals and so on.

All these values are then plotted, and we get a graph of the predicted outcome vs. goals scored.
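A rough sketch of that procedure for a single row (the feature name 'Goal Scored' and the use of predict_proba are assumptions; the PDPbox code below performs the full calculation and averages over many rows):

import numpy as np

# Start from one validation row and repeatedly alter a single feature
row = val_X.iloc[[0]].copy()
goals_range = np.arange(0, 7)
predictions = []
for goals in goals_range:
    row['Goal Scored'] = goals
    # Probability of the team having the "Man of the Game" winner
    predictions.append(my_model.predict_proba(row)[0, 1])
# Plotting predictions against goals_range gives the predicted outcome vs. goals scored curve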

The library to be used for plotting PDPs is called the Python Partial Dependence Plot toolbox, or simply PDPbox.

from matplotlib import pyplot as plt
from pdpbox import pdp, get_dataset, info_plots

# Create the data that we will plot
pdp_goals = pdp.pdp_isolate(model=my_model, dataset=val_X, model_features=feature_names, feature='Goal Scored')

# Plot it
pdp.pdp_plot(pdp_goals, 'Goal Scored')
plt.show()

Interpretation

The Y-axis represents the change in prediction from what would be predicted at the baseline or leftmost value. The blue area denotes the confidence interval.

For the 'Goal Scored' graph, we observe that scoring a goal increases the probability of getting a 'Man of the Game' award, but after a while saturation sets in.

We can also visualize the partial dependence of two features at once using 2D partial dependence plots, as sketched below.
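A sketch of such a 2D plot with PDPbox (continuing from the snippet above; the pair of feature names is an illustrative assumption):

from matplotlib import pyplot as plt
from pdpbox import pdp

features_to_plot = ['Goal Scored', 'Ball Possession %']   # assumed column names
inter1 = pdp.pdp_interact(model=my_model, dataset=val_X, model_features=feature_names, features=features_to_plot)
pdp.pdp_interact_plot(pdp_interact_out=inter1, feature_names=features_to_plot, plot_type='contour')
plt.show()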

Practice

3. SHAP Values

SHAP, which stands for SHapley Additive exPlanations, helps to break down a prediction to show the impact of each feature.

It is based on Shapley values, a technique used in game theory to determine how much each player in a collaborative game has contributed to its success¹.

Normally, getting the trade-off between accuracy and interpretability just right can be a difficult balancing act, but SHAP values can deliver both.

Working

Again, we go with the soccer example, where we want to predict the probability of a team having a player that won the "Man of the Game" award.

SHAP values interpret the impact of having a certain value for a given feature in comparison to the prediction we’d make if that feature took some baseline value.

SHAP values are calculated using the shap library, which can be installed easily from PyPI or conda.

SHAP values show how much a given feature changed our prediction (compared to the prediction we would make at some baseline value of that feature).

Let's say we wanted to know what the prediction would be when the team scored 3 goals, instead of some fixed baseline number.

If we are able to answer this, we could perform the same steps for the other features, which gives the decomposition:

sum(SHAP values for all features) = pred_for_team - pred_for_baseline_values

Hence the prediction can be decomposed into a graph like this:
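A minimal sketch of how such a force plot might be generated with the shap library (assuming an older shap release where TreeExplainer.shap_values returns one array per class, and reusing my_model from earlier; data_for_prediction is a name introduced here for illustration):

import shap

# Take one row of the validation data to explain (illustrative choice)
data_for_prediction = val_X.iloc[[0]]

explainer = shap.TreeExplainer(my_model)
shap_values = explainer.shap_values(data_for_prediction)

# Index 1 selects the positive class of the binary classifier
shap.initjs()
shap.force_plot(explainer.expected_value[1], shap_values[1], data_for_prediction)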

Interpretation

The above explanation shows features each contributing to push the model output from the base value (the average model output over the training dataset we passed) to the model output. Features pushing the prediction higher are shown in red; those pushing the prediction lower are shown in blue.

The base_value here is 0.4979, while our predicted value is 0.7. Goal Scored = 2 has the biggest impact on increasing the prediction, while the ball possession feature has the biggest effect in decreasing the prediction.

Practice

SHAP values have a deeper theory than what I have explained here. Make sure to go through the link below to get a complete understanding.

4. Advanced Uses of SHAP Values

Aggregating many SHAP values can provide even more detailed insights into the model.

SHAP Summary Plots

To get an overview of which features are most important for a model, we can plot the SHAP values of every feature for every sample.

The summary plot tells which features are most important, and also their range of effects over the dataset.
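A rough sketch of how such a summary plot might be produced with the shap library (again assuming an older shap release and reusing my_model and val_X from earlier):

import shap

explainer = shap.TreeExplainer(my_model)
shap_values = explainer.shap_values(val_X)

# Plot the SHAP values of every feature for every validation sample;
# index 1 selects the positive class of the binary classifier
shap.summary_plot(shap_values[1], val_X)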

Summary Plot

For every dot:

- Vertical location shows what feature it is depicting.
- Color shows whether that feature was high or low for that row of the dataset.
- Horizontal location shows whether the effect of that value caused a higher or lower prediction.

The point in the upper left was for a team that scored few goals, reducing the prediction by 0.25.

SHAP Dependence Contribution Plots

While a SHAP summary plot gives a general overview of each feature, a SHAP dependence plot shows how the model output varies by a feature value.

SHAP dependence contribution plots provide a similar insight to PDPs, but they add a lot more detail.
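A sketch of how such a dependence contribution plot might be drawn with the shap library (the column names 'Ball Possession %' and 'Goal Scored' are illustrative assumptions):

import shap

explainer = shap.TreeExplainer(my_model)
shap_values = explainer.shap_values(val_X)

# Show how the SHAP value for ball possession varies with its value,
# colored by goals scored to reveal interactions
shap.dependence_plot('Ball Possession %', shap_values[1], val_X, interaction_index='Goal Scored')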

The above Dependence Contribution plots suggest that having the ball increases a team’s chance of having their player win the award.

But if they only score one goal, that trend reverses and the award judges may penalize them for having the ball so much if they score that little.

Practice

Conclusion

Machine Learning doesn't have to be a black box anymore.

What use is a good model if we cannot explain its results to others?

Interpretability is as important as creating a model.

To achieve wider acceptance among the population, it is crucial that machine learning systems are able to provide satisfactory explanations for their decisions.

As Albert Einstein said, "If you can't explain it simply, you don't understand it well enough."

References:

1. Christoph Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.
2. Machine Learning Explainability Micro-Course, Kaggle.
