Conference paper, 2022

A comparative study of additive local explanation methods based on feature influences

Abstract

Local additive explanation methods are increasingly used to understand the predictions of complex Machine Learning (ML) models. The most widely used additive methods, SHAP and LIME, suffer from limitations that are rarely measured in the literature. This paper measures these limitations on a wide range of OpenML datasets (304 in total) and also evaluates emerging coalition-based methods designed to tackle the weaknesses of the other methods. We illustrate and validate the results on a specific medical dataset, SA-Heart. Our findings reveal that the approximations made by LIME and SHAP are particularly efficient in high dimensions and generate intelligible global explanations, but they lack precision in their local explanations. Coalition-based methods are computationally expensive in high dimensions but offer higher-quality local explanations. Finally, we present a roadmap summarizing our work, pointing to the most appropriate method depending on dataset dimensionality and the user's objectives.
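For readers unfamiliar with the additive property the abstract refers to, the sketch below illustrates it with the SHAP library: each feature of an instance receives an influence value, and the base value plus the sum of influences reconstructs the model's prediction. This is a minimal illustration on a stand-in scikit-learn dataset and model, not the authors' experimental code; the paper itself evaluates 304 OpenML datasets and SA-Heart, which are not loaded here.

```python
# Minimal sketch of an additive local explanation with SHAP (not the
# paper's experimental code). The diabetes dataset and random forest
# are stand-ins for the paper's 304 OpenML datasets and SA-Heart.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values exactly for tree ensembles.
explainer = shap.TreeExplainer(model)
phi = explainer.shap_values(X[:1])  # influences, shape (1, n_features)

# Additivity: base value + sum of feature influences = model prediction.
reconstruction = explainer.expected_value + phi[0].sum()
print(f"prediction      = {model.predict(X[:1])[0]:.4f}")
print(f"base + sum(phi) = {float(reconstruction):.4f}")
```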
Main file

paper4.pdf (3.69 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03687554, version 1 (03-06-2022)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-03687554, version 1

Cite

Emmanuel Doumard, Julien Aligon, Elodie Escriva, Jean-Baptiste Excoffier, Paul Monsarrat, et al.. A comparative study of additive local explanation methods based on feature influences. 24th International Workshop on Design, Optimization, Languages and Analytical Processing of Big Data (DOLAP 2022), Mar 2022, Edinburgh, United Kingdom. pp.31-40. ⟨hal-03687554⟩
