If it’s interpretable, it’s pretty much useless.

As always happens, once you notice something you start seeing it everywhere, and since then I keep running into blog posts about interpretability, explainability, and how these concepts are connected to machine learning and trust.

I disagree with pretty much all of them.

The big topic of interpretability in machine learning isn’t new, of course, and over the years a lot of people have tried to give it a simple yet complete definition.

The one I like the most comes from a 2017 paper and defines interpretability as the ability to explain, or to present in understandable terms, mainly to humans.

In machine learning, then, it is about providing textual or visual explanations of the relationship between an instance’s components and the model’s prediction.

Why someone thinks that a model should be interpretable is mainly related to trust: if you don’t understand the inner workings and the decisions behind a prediction, you won’t trust it.

If you don’t trust the predictions, of course, you don’t trust the model either.

The core reason behind this is the fear that the model, once in the wild, will start making strange decisions on unseen samples, and if you don’t know how it actually works you won’t know what to do with them.

If your model doesn’t show the same performance on the training set and in the live environment, that is not a matter of trust but a problem either in your dataset or in your testing framework.

Trust is built on performance, and performance on metrics: design the ones that work for your problem and stick to them.
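
As a minimal sketch of what that looks like in practice (assuming a scikit-learn-style binary classifier; `model`, `X_test` and `y_test` are placeholder names for your own objects):

```python
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate(model, X_test, y_test):
    """Compute the agreed-upon metrics on data the model has never seen.
    The names here are placeholders, not a reference to any specific project."""
    y_pred = model.predict(X_test)
    y_score = model.predict_proba(X_test)[:, 1]   # assumes a binary classifier
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "f1": f1_score(y_test, y_pred),
        "roc_auc": roc_auc_score(y_test, y_score),
    }
```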

If you’re looking for trust in interpretability, you’re just asking the model questions you already know the answers to, and you want those answers in the exact form you are expecting.

Do you need machine learning to build such a system? The need for ML arises when you know the questions and the answers but you don’t know an easy way to get from one to the other.

We need a technique to fake the process, and it might be that an easy explanation for it doesn’t even exist.

All human beings are able to distinguish between cats and dogs, for example, but it would be much more difficult (probably impossible) to explain precisely why.

We can provide thousands of examples, we can reproduce the process in our heads millions of times, but we need a way to fake this process and make it replicable in an automated way.

This is what machine learning is about.

Interpretability, then, is a concept attached to the process, not to the model that fakes it.

If a dataset itself is not interpretable, how can the model that learns from it be? Why some problems are inherently harder to interpret than others is very similar to the question of why one group of letters together has a meaning and another doesn’t.

Is distinguishing a cat from a dog just a matter of ears and tails? If that were the case, we wouldn’t even need machine learning in the first place: it would be a rule-based system.

I’m not a fanboy, and the more I learn about machine learning while trying to build real products out of it, the more I lose interest in this kind of discussion.

Probably the only useful thing about ML is its ability to replicate processes that aren’t easy to describe explicitly: you just need questions and answers, and the learning algorithm will do the rest.

Asking for interpretability as a condition for real-world use undermines the foundations of the whole field.

If the trained model has good performance and it’s not interpretable, we are probably on the right track; if it’s interpretable (and the explanation is understandable and replicable), why lose weeks and GPU power? Just write some if-else clauses.
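
To make the point concrete, here is a toy, purely hypothetical rule-based “classifier”: the features and thresholds below are invented for illustration, and the point is only that a fully explainable process collapses into plain conditionals that need no training at all.

```python
# Purely illustrative: if telling cats from dogs really reduced to a few
# interpretable rules, no learning would be needed. Every feature and
# threshold here is made up.
def classify(animal: dict) -> str:
    if animal["ears"] == "pointed" and animal["retractable_claws"]:
        return "cat"
    if animal["barks"] or animal["snout_length_cm"] > 5:
        return "dog"
    return "unknown"

print(classify({"ears": "pointed", "retractable_claws": True,
                "barks": False, "snout_length_cm": 3}))  # -> "cat"
```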

Machine learning is not black magic: we can follow the full path a sample takes from input to output in a deep neural network, and there’s nothing hidden or obscure there.
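
As a minimal sketch (the tiny architecture below is made up just for illustration), you can literally walk a sample through a network layer by layer and inspect every intermediate value along the way:

```python
import torch
import torch.nn as nn

# A tiny, made-up network: just a chain of simple, fully visible steps.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

x = torch.randn(1, 4)                # a random sample
activation = x
for layer in model:
    activation = layer(activation)   # follow the sample step by step
    print(layer, activation)         # every intermediate value is inspectable
```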

It’s a computable function, and like all computable functions it is not that interesting in itself.

Its power is all in its complexity.

Pretty much all the directions of research in interpretable machine learning show promising results in explaining the decisions of a model on subsets of the domain built around specific samples.

We can, for example, take the gradient of the output with respect to the input vector to highlight the regions that are most likely to influence the model’s decision.
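
A common instance of this idea is a saliency map; the sketch below assumes PyTorch and torchvision, with an untrained ResNet-18 as a stand-in model and a random tensor in place of a real image:

```python
import torch
from torchvision.models import resnet18

# Saliency-map sketch: the gradient of the top class score with respect to
# the input pixels shows which pixels influence the decision the most.
# The untrained model and the random "image" are placeholders.
model = resnet18().eval()
image = torch.randn(1, 3, 224, 224, requires_grad=True)

logits = model(image)                          # shape (1, 1000)
top_class = logits.argmax(dim=1).item()
logits[0, top_class].backward()                # d(score) / d(input)

saliency = image.grad.abs().max(dim=1).values  # (1, 224, 224) influence map
print(saliency.shape)
```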

Or we can compute similarities in the embedding space of Word2Vec models, looking for some semantics in that algebra.
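
For the embedding-space case, here is a sketch using gensim’s Word2Vec (4.x API) on a toy, made-up corpus; with so little data the “semantics” will be noise, but the mechanics are the same as on a real corpus:

```python
from gensim.models import Word2Vec

# Toy corpus, purely for illustration; semantic structure only emerges
# with far more data than this.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "man", "walks", "the", "dog"],
    ["the", "woman", "walks", "the", "cat"],
]
model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=200, seed=0)

# The classic probe: king - man + woman ≈ queen (on a real corpus).
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
print(model.wv.similarity("cat", "dog"))
```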

Both of them are local and very hard to generalise.

Most importantly, we can only check if these explanations are aligned with things we already know about the inner workings of these processes.

The inability of these techniques to put these explanations all together is exactly why machine learning is useful.

Again, syntax and semantics: why a particular set of characters has a meaning isn’t that hard to explain; why a whole vocabulary has a meaning, and how we are able to write to each other building sentences out of it, is a completely different problem.

If you need someone to trust, look for the guy who labeled the dataset, or check the environment that is providing the rewards.

Black boxes and reliable testing frameworks are all we need.

The higher the performance, the less likely we are to be able to understand the model.
