[PhD Defense] Clara Bove: Designing and evaluating explanation user interfaces for complex machine learning systems


We are pleased to announce that Clara Bove, a TRAIL researcher, will be defending her PhD thesis entitled:

Designing and evaluating explanation user interfaces for complex machine learning systems
Clara Bove, PhD student at Sorbonne Université and XAI Researcher at AXA 

Friday, September 15th, 2023, 14:00 CET


On-site only

Sorbonne Université – Campus Pierre et Marie Curie
Room 105, Corridor 25-26, Tower 55, 1st floor
4 place Jussieu, 75005 Paris

Map: https://sciences.sorbonne-universite.fr/vie-de-campus-sciences/accueil-vie-pratique/plan-du-campus



The thesis will be defended before the following jury:

  • Katrien VERBERT, KU Leuven
  • Nicolas LABROCHE, Université de Tours
  • Jean-Gabriel GANASCIA, Sorbonne Université
  • Theodoros EVGENIOU, INSEAD
  • Marie-Jeanne LESOT, Sorbonne Université (supervisor)
  • Charles TIJUS, Université Paris 8 (supervisor)
  • Marcin DETYNIECKI, AXA & Sorbonne Université/CNRS (supervisor)

Abstract:

This thesis is set in the field of human-centered eXplainable AI (XAI) and focuses on the intelligibility of explanations for non-expert users. The technical context is as follows: on the one hand, an opaque classifier or regressor produces a prediction and a post-hoc XAI approach generates information that acts as an explanation; on the other hand, the user receives both the prediction and this explanation. In this context, several issues can limit the quality of explanations. We focus on three of them: the lack of contextual information in explanations, the lack of guidance in designing interface features that let users explore the explanations, and the potential confusion caused by the amount of information presented.
We develop an experimental procedure for designing explanation user interfaces (XUIs) and evaluating their intelligibility with non-expert users. We propose several generic XAI principles and implement them in different XUIs, for explanations expressed both as attribute importance scores and as counterfactual examples. We evaluate users' objective understanding of and satisfaction with these interfaces through user studies. At a more fundamental level, we address the theoretical question of possible inconsistencies in these explanations, studying them and proposing a typology that structures the most common ones reported in the literature.
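
To make the setting concrete, here is a minimal sketch of the pipeline the abstract describes: an opaque model produces predictions, and a post-hoc method turns them into attribute importance scores that an XUI could display. The dataset, model, and permutation-importance method are illustrative assumptions, not the ones used in the thesis.

```python
# Illustrative sketch of a post-hoc XAI pipeline: an opaque model makes
# predictions, and a post-hoc method produces attribute importance scores.
# Dataset, model, and method are stand-ins, not those from the thesis.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# The "opaque" classifier: the end user only ever sees its predictions.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explanation: permutation importance scores each attribute by
# how much shuffling its values degrades the model's performance.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# What an explanation interface might surface: the top-ranked attributes.
ranking = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```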

About TRAIL, the joint research lab

The Trustworthy and Responsible Artificial Intelligence Lab (TRAIL) is a joint research lab between Sorbonne University and AXA. Its aim is to study and address the challenges raised by the widespread use of AI systems, in particular the explainability, fairness, robustness, governance, and regulation of AI. The lab brings together researchers from Sorbonne University and AXA. It is based on the Pierre & Marie Curie campus of the Faculté des Sciences de Sorbonne Université, within LIP6, a computer science laboratory of Sorbonne University and the CNRS (French National Centre for Scientific Research) in Paris.