Are We Ready for Machine Learning Explainability?

Albert · Apr 8

The new European Union General Data Protection Regulation (GDPR) includes provisions on how Machine Learning may be used. These provisions aim to give users control over their personal data by introducing the Right to Explanation.
European Union HQ

The Right to Explanation is a demand from the European Union to make Artificial Intelligence more transparent and ethical. The regulation promotes building algorithms that can provide an explanation for every Machine Learning decision.
There is still no consensus on what an explanation should look like.
For instance, traditional ML models are usually limited to producing a binary output or a score (accuracy, F1-score, …). Given an ML model for granting credit, its binary output is YES (approve the credit) or NO (deny the credit). A more explainable output, on the other hand, would tell why the credit is approved or denied (Figure 1).

Figure 1: Example of an explainable output

The previous output can be generated easily using Decision Trees or Classification Rules.
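As a minimal sketch of the kind of self-explaining output Figure 1 describes, consider a rule-based credit decision written in plain Python. The feature names and thresholds below are hypothetical, chosen only for illustration:

```python
# Minimal sketch of a rule-based credit decision that explains itself.
# Feature names and thresholds are hypothetical, for illustration only.

def decide_credit(applicant):
    """Return (decision, explanation) for a credit application."""
    reasons = []
    if applicant["income"] < 20000:
        reasons.append("income below 20,000")
    if applicant["late_payments"] > 2:
        reasons.append("more than 2 late payments on record")
    if reasons:
        return "DENIED", "Credit denied because: " + "; ".join(reasons)
    return "APPROVED", "Income and payment history satisfy every rule"

decision, explanation = decide_credit({"income": 15000, "late_payments": 4})
print(decision)      # DENIED
print(explanation)   # lists both violated rules
```

Because each rule fires explicitly, the explanation falls out of the model for free; this is exactly why Decision Trees and Classification Rules are considered inherently explainable.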
But it is difficult to generate or extract such an explanation from high-accuracy ML algorithms like Neural Networks or Gradient Boosting classifiers (these algorithms are also referred to as black-box algorithms).
If we limit ML models to explainable ones, such as Decision Trees or Classification Rules, Europe will lose the Artificial Intelligence race. This lack of explainable systems translates into an urgent demand from the European Union for research on explainability: high prediction accuracy without sacrificing explainability.
Figure 2: Prediction accuracy versus explainability

Explainable Machine Learning is a very broad topic, and no formal definition exists yet.
A quick look at Google Scholar is enough to see different definitions of Explainability.
- Definition 1: the science of comprehending what a model did, or might have done [Leilani H. Gilpin et al., 2019]
- Definition 2: the ability to explain or to present understandable terms to a human [Finale Doshi-Velez and Been Kim, 2017]
- Definition 3: the use of machine-learning models for the extraction of relevant knowledge about domain relationships contained in data [W. James Murdoch et al., 2019]

Figure 3: Explainable Machine Learning as a post hoc analysis

Even with the little nuances among the definitions, it is easy to understand what Explainable Machine Learning is (at least from a theoretical point of view).
In recent years, algorithms like LIME and SHAP have appeared that try to provide explanations as a post hoc analysis (Figure 3). These two tools are a great starting point for working with explainability, but they are not enough to satisfy the GDPR framework.
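The core idea behind post hoc tools like LIME and SHAP can be sketched without either library: perturb the inputs of an already-trained black-box model and watch how its output moves. The scoring function and feature names below are hypothetical stand-ins, not a real black box or the actual LIME/SHAP algorithms:

```python
# Sketch of the post hoc intuition behind tools like LIME and SHAP:
# perturb each input of a trained black-box model and measure the change.
# The model and feature names below are hypothetical, for illustration only.

def black_box(features):
    # Pretend this is an opaque credit-scoring model we cannot inspect.
    income, debt, age = features
    return 0.00002 * income - 0.001 * debt + 0.002 * age

def perturbation_importance(model, features, delta=1.0):
    """Estimate each feature's local influence by nudging it by `delta`."""
    base = model(features)
    importances = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += delta
        importances.append(model(perturbed) - base)
    return importances

# Local sensitivities around one applicant: positive helps the score,
# negative hurts it.
scores = perturbation_importance(black_box, [30000.0, 5000.0, 40.0])
print(scores)
```

Real post hoc methods are far more sophisticated (LIME fits a local surrogate model; SHAP computes Shapley values), but both share this perturb-and-observe principle.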
Moreover, no formal definition of an explanation's output exists, nor a metric to compare explanations, making it impossible to affirm that explanation A is better than explanation B. A long road remains until a general framework for explainability exists.
Some properties that explainability frameworks need to satisfy are the following [Finale Doshi-Velez and Been Kim, 2017]:

- Fairness: protected groups are not discriminated against, explicitly or implicitly.
- Privacy: the method protects sensitive information; each prediction is independent of other observations.
- Reliability and robustness: the explanations must evolve according to input variation.
- Causality: the predicted change in output due to a perturbation will occur in the real system.
- Usable and trusted: the framework assists the user in accomplishing a task; trusted, on the other hand, refers to having the confidence of the user.
References

- Towards A Rigorous Science of Interpretable Machine Learning (Finale Doshi-Velez and Been Kim, 2017)
- Interpretable Machine Learning: Definitions, Methods, and Applications (W. James Murdoch et al., 2019)
- Explaining Explanations: An Overview of Interpretability of Machine Learning (Leilani H. Gilpin et al., 2019)
- A Unified Approach to Interpreting Model Predictions (Scott M. Lundberg and Su-In Lee, 2017)