Understanding how IME (Shapley Values) explains predictions

Finally, note that the two contributions sum to the initial difference between the prediction for this instance and the expected prediction when the feature values are unknown: φ₁ + φ₂ = 3/8 − 1/8 = 1/4 = 1 − 3/4 = f(1,0) − E[f(X₁,X₂)], which was guaranteed by axiom 1 (the efficiency property).

In summary, the explanation produced by IME for the prediction of the instance (1,0) is that the model's decision is influenced by both features, with the first feature being more decisive in favor of (correctly) predicting class 1, while the second feature's value worked against this decision.

This was a very simple example that we were able to compute analytically, but that won't be possible in real applications, where we will need the approximate solution given by the algorithm. In future posts I will explain how to compute explanations with Shapley Values using R and Python.
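The exact computation sketched above can be reproduced in a few lines of Python. The model below is a hypothetical reconstruction chosen because it matches all the numbers in the text: f(x₁, x₂) = x₁ OR x₂ with two independent, uniformly distributed binary features gives E[f] = 3/4, f(1,0) = 1, φ₁ = 3/8 and φ₂ = −1/8. With only two features, the exact Shapley value is the average marginal contribution over the two possible feature orderings:

```python
from itertools import product

# Hypothetical model reproducing the numbers in the text:
# logical OR of two independent uniform binary features.
def f(x1, x2):
    return max(x1, x2)

def v(fixed, instance):
    """Expected prediction when the features whose indices are in
    `fixed` are set to the instance's values and the remaining
    features are averaged over their (uniform) distribution."""
    total = 0.0
    for sample in product([0, 1], repeat=2):
        vals = [instance[i] if i in fixed else x
                for i, x in enumerate(sample)]
        total += f(*vals)
    return total / 4  # 4 equally likely feature combinations

instance = (1, 0)

# Exact Shapley values: average the marginal contribution of each
# feature over the two possible orderings ({1 then 2}, {2 then 1}).
phi1 = 0.5 * ((v({0}, instance) - v(set(), instance))
              + (v({0, 1}, instance) - v({1}, instance)))
phi2 = 0.5 * ((v({1}, instance) - v(set(), instance))
              + (v({0, 1}, instance) - v({0}, instance)))

print(phi1, phi2)  # 0.375 -0.125, i.e. 3/8 and -1/8
# Efficiency: the contributions sum to f(1,0) - E[f(X1, X2)] = 1 - 3/4.
print(phi1 + phi2 == v({0, 1}, instance) - v(set(), instance))  # True
```

All intermediate values here are exact binary fractions, so the equality check holds despite floating-point arithmetic; in general one would compare with a tolerance.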
