X-SHAP: towards multiplicative explainability of Machine Learning
by Yannick Léo, Aimé Lachapelle and Luisa Bouneder
May 5, 2020

This paper introduces X-SHAP, a model-agnostic method that assesses multiplicative contributions of variables for both local and global predictions.

This method theoretically and operationally extends the so-called additive SHAP approach. It proves useful when the underlying interactions between factors are multiplicative, as typically arises in sectors where Generalized Linear Models are traditionally used, such as insurance or biology.

We test the method on various datasets and propose a set of techniques based on individual X-SHAP contributions to build aggregated multiplicative contributions and to capture multiplicative feature importance, which we compare to traditional techniques.
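To fix ideas, the decomposition at stake can be written as follows (our notation, chosen by analogy with the additive SHAP decomposition; the paper's exact symbols may differ):

```latex
% Additive SHAP: a base value plus one additive contribution per feature
f(x) = \phi_0 + \sum_{i=1}^{M} \phi_i(x)
% Multiplicative counterpart targeted by X-SHAP: a base value
% times one multiplicative factor per feature
f(x) = \phi_0^{\times} \prod_{i=1}^{M} \phi_i^{\times}(x)
```

A factor above 1 pushes the prediction up, a factor below 1 pulls it down, and 1 is neutral.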

At a glance

Interpreting a model's outputs can be as important as the prediction itself, e.g. in insurance pricing, credit acceptance or rejection, recommendations to decision makers, or medical diagnosis. Users need to understand the factors underlying a prediction, and model interpretability makes it possible to audit the robustness and fairness of predictive models. Simple models such as linear regressions or GLMs are reasonably accurate and easily interpretable.

On the contrary, the development of more complex models, such as machine learning ensembles or deep learning models, leads to highly accurate but harder-to-interpret models. The trade-off between building a more accurate model and keeping a simple, interpretable one is not an easy choice.

In many cases, the simple interpretable model is still preferred. To resolve this accuracy-interpretability trade-off, a large number of interpretability methods have been proposed. It is noteworthy that all of these methods compute additive contributions; none of them can assess multiplicative contributions. In this paper, we introduce X-SHAP, a model-agnostic interpretability method that provides multiplicative contributions for individual predictions. Our main contributions are summarized as follows:

  1. We extend the additive analytical solution to the model-agnostic multiplicative interpretability problem.
  2. We introduce X-SHAP, an algorithm that provides approximate multiplicative contributions at the individual level (see the sketch after this list).
  3. We propose the X-SHAP toolbox, a new set of techniques to understand global and segmented model structure by aggregating multiple local contributions.
  4. We empirically verify desirable properties and compare the X-SHAP approach to both the additive Kernel SHAP algorithm and well-known metrics on various supervised problems.
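The paper's exact algorithm is not reproduced here; the following is a minimal sketch of one natural route to approximate multiplicative contributions, assuming a strictly positive model output and the open-source shap library: explain log-predictions additively with Kernel SHAP, then exponentiate, so that the additive local-accuracy property becomes a multiplicative one.

```python
# Minimal sketch (not the paper's exact algorithm): for a strictly positive
# prediction f(x), additive SHAP values computed on log f(x) exponentiate
# into multiplicative factors whose product recovers the prediction.
import numpy as np
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor

X, y = fetch_california_housing(return_X_y=True)
model = GradientBoostingRegressor().fit(X, y)   # predictions are positive here

def log_predict(data):
    return np.log(model.predict(data))          # requires f(x) > 0

background = shap.sample(X, 100)                # background set for Kernel SHAP
explainer = shap.KernelExplainer(log_predict, background)

phi_log = explainer.shap_values(X[:1])          # additive contributions, log space
baseline = np.exp(explainer.expected_value)     # multiplicative base value
factors = np.exp(phi_log)                       # one factor per feature, neutral = 1

# Multiplicative local accuracy: baseline * prod(factors) ~ f(x)
print(baseline * np.prod(factors), model.predict(X[:1])[0])
```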

Impact

X-SHAP offers a robust and model-agnostic methodology to assess multiplicative contributions. This unique method strengthens the set of techniques and tools contributing to making machine learning more transparent, auditable and accessible.

This method is expected to prove useful when the modeled phenomenon has a multiplicative underlying structure, such as areas where modelers routinely apply log-GLMs (e.g. actuaries modeling claims, epidemic spread modeling, disease risk factor estimation, energy consumption forecasting). It is provided as a tool that can help these experts adopt machine learning models with an interpretability framework that fits their existing practices.
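As an illustration of the aggregation idea behind the X-SHAP toolbox (our own hypothetical construction, continuing the sketch above, not the paper's implementation), a global multiplicative feature importance can be derived from a matrix of local log-space contributions, mirroring the usual mean-|SHAP| importance:

```python
import numpy as np

def multiplicative_importance(phi_log):
    # phi_log: (n_samples, n_features) matrix of log-space contributions,
    # e.g. stacked explainer.shap_values(...) from the sketch above.
    # exp(mean |log factor|) is the multiplicative analogue of mean(|SHAP|):
    # a value of 1.25 reads as "this feature moves predictions ~25% on
    # average", and 1.0 means the feature is neutral.
    return np.exp(np.abs(phi_log).mean(axis=0))
```

Segmented importances (e.g. per insurance portfolio segment) would follow by applying the same aggregation to subsets of rows.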

Authors
Yannick Léo
Partner & Data Science Director
Aimé Lachapelle
Managing Partner
Luisa Bouneder
Co-Author