Certified Logic-Based Explainable AI - Assistance à la Certification d'Applications DIstribuées et Embarquées
Preprints, Working Papers (Working Paper) - Year: 2023

Certified Logic-Based Explainable AI

Abstract

The continued advances in artificial intelligence (AI), including those in machine learning (ML), raise concerns regarding their deployment in high-risk and safety-critical domains. Motivated by these concerns, there have been calls for the verification of AI systems, including the explanations they produce. Nevertheless, tools for the verification of AI systems are complex, and so themselves error-prone. This paper describes an initial effort towards the certification of logic-based explainability algorithms, focusing on monotonic classifiers. Concretely, the paper starts by using the Coq proof assistant to prove the correctness of recently proposed algorithms for explaining monotonic classifiers. Then, the paper proves that the algorithms devised for monotonic classifiers can be applied to the larger family of stable classifiers. Finally, certified code, extracted from the proofs of correctness, is used to compute explanations that are guaranteed to be correct. The experimental results included in the paper show the scalability of the proposed approach for certifying explanations.
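For intuition about what is being certified, the following Python sketch illustrates the standard linear-query scheme for computing one abductive explanation of a monotonic classifier, as proposed in the earlier work the abstract refers to. It is not the authors' Coq development nor its extracted code; the classifier `kappa`, the bounds, the instance, and the function name `axp` are all illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's certified code) of abductive
# explanation (AXp) computation for a monotonic classifier, using a
# linear number of classifier queries.

def axp(kappa, lower, upper, v):
    """Return a subset-minimal set of feature indices that, when fixed
    to their values in v, force kappa to its prediction on v.
    Assumes kappa is monotonically increasing in every feature."""
    c = kappa(v)
    # Start with every feature fixed: the box [vl, vu] is just the point v.
    vl, vu = list(v), list(v)
    explanation = []
    for i in range(len(v)):
        # Tentatively free feature i over its whole domain.
        vl[i], vu[i] = lower[i], upper[i]
        # By monotonicity, every point in the box [vl, vu] is classified
        # between kappa(vl) and kappa(vu); if both equal c, feature i is
        # not needed in the explanation.
        if not (kappa(vl) == c == kappa(vu)):
            # Freeing i could change the prediction: re-fix it and keep it.
            vl[i], vu[i] = v[i], v[i]
            explanation.append(i)
    return explanation

if __name__ == "__main__":
    # Hypothetical monotonic classifier: a thresholded weighted sum.
    kappa = lambda x: int(2 * x[0] + x[1] + x[2] >= 4)
    lower, upper = [0, 0, 0], [5, 5, 5]
    v = [3, 1, 0]                       # kappa(v) = 1
    print(axp(kappa, lower, upper, v))  # prints [0]
```

On this toy instance the sketch prints `[0]`: fixing only x0 = 3 keeps the weighted sum at or above the threshold over the entire box, so features 1 and 2 can be freed. The paper's contribution is to prove, in Coq, that this kind of procedure is correct, and to extract code from those proofs so that the computed explanations are guaranteed correct by construction.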
Main file: paper.pdf (713.87 KB)
Origin: Files produced by the author(s)
License: CC BY-NC-SA - Attribution - NonCommercial - ShareAlike

Dates and versions

hal-04031193, version 1 (15-03-2023)
hal-04031193, version 2 (27-03-2023)
hal-04031193, version 3 (05-12-2023)

Licence

Attribution - NonCommercial - ShareAlike

Identifiers

  • HAL Id: hal-04031193, version 1

Cite

Aurélie Hurault, Joao Marques-Silva. Certified Logic-Based Explainable AI: The Case of Monotonic Classifiers. 2023. ⟨hal-04031193v1⟩

Collections

IRIT-INPT
