We are pleased to invite you to the 5th seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab. This talk will be in French.
Generation of realistic and robust counterfactual explanations
Victor Guyomard, PhD Student at Orange Labs and Université de Rennes
Thursday June 22nd 2023, 14:00 CET
On-site
Registration required; seats are limited
Sorbonne Université – Campus Pierre et Marie Curie
Room 211, Corridor 55-65, Tower 55, 2nd floor
4 place Jussieu, 75005 Paris
Map: https://sciences.sorbonne-universite.fr/vie-de-campus-sciences/accueil-vie-pratique/plan-du-campus
Or Online
Zoom link: posted the day before the talk on the website: https://lfi.lip6.fr/seminaires/
Generation of realistic and robust counterfactual explanations
The aim of this talk is to explain individual decisions made by AI models, with a focus on counterfactual explanations. Two contributions will be presented:
- The development of VCnet, a self-explanatory model that combines a predictor and a counterfactual generator trained simultaneously. The architecture is based on a variational autoencoder conditioned on the output of the predictor, so that generated counterfactuals are realistic (close to the distribution of the target class). VCnet produces both predictions and counterfactual explanations without having to solve a separate minimization problem.
- The proposal of a new formalism, CROCO, for generating counterfactual explanations that are robust to perturbations of the counterfactual’s inputs. Achieving this kind of robustness requires a compromise between the counterfactual’s robustness and its proximity to the example being explained. CROCO generates robust counterfactuals while effectively managing this trade-off and guaranteeing the user a minimum level of robustness. Empirical evaluations on tabular datasets confirm the relevance and effectiveness of the proposed approach.
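As a rough illustration of the robustness/proximity trade-off mentioned above, the sketch below estimates, for a toy linear classifier, how often small perturbations of a counterfactual still receive the target class. Everything here (the classifier, the `robustness` function, the example points) is a hypothetical stand-in for intuition only, not the actual CROCO formalism.

```python
import numpy as np

# Toy linear classifier standing in for the black-box predictor
# (hypothetical; the methods in the talk apply to arbitrary models).
def predict(x):
    # Class 1 iff x1 + x2 > 1, class 0 otherwise.
    return int(x[0] + x[1] > 1.0)

def robustness(cf, target_class, sigma=0.1, n_samples=1000, rng=None):
    """Monte Carlo estimate of the probability that small Gaussian
    perturbations of the counterfactual keep the target class
    (a simple proxy for the robustness notion discussed in the talk)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n_samples, len(cf)))
    hits = sum(predict(cf + eps) == target_class for eps in noise)
    return hits / n_samples

x = np.array([0.3, 0.3])           # example to explain, predicted class 0
cf_close = np.array([0.52, 0.52])  # barely crosses the decision boundary
cf_robust = np.array([0.9, 0.9])   # deeper inside the target class

for cf in (cf_close, cf_robust):
    # The closer counterfactual has a smaller distance to x but a lower
    # robustness estimate: this is the trade-off CROCO manages.
    print(np.linalg.norm(cf - x), robustness(cf, target_class=1))
```

The nearer counterfactual wins on proximity but loses validity under many perturbations, while the farther one is almost always still classified as the target class; a robustness guarantee amounts to requiring a minimum value of this estimated probability.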
This seminar is co-organized with the IEEE Computational Intelligence Society.
About TRAIL, the joint research lab
The Trustworthy and Responsible Artificial Intelligence Lab (TRAIL) is a joint research lab of Sorbonne University and AXA. Its aim is to study and address challenges linked to the generalized use of AI systems, in particular the explainability, fairness, robustness, governance and regulation of AI. The lab is composed of researchers from Sorbonne University and AXA. It is based at the Campus Pierre et Marie Curie of Sorbonne University's Faculty of Science in Paris, within LIP6, a computer science laboratory operated jointly by Sorbonne University and the CNRS (French National Center for Scientific Research).