Author Archives: Thibault Laugel

[TRAIL Seminar] Politique de l’IA | Talk from Bilel Benbouzid (in French)

We are pleased to invite you to the 7th seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab.
Exceptionally, this presentation will be in French.

Politique de l’IA

Bilel Benbouzid
Associate Professor (Maître de Conférences) at Université Gustave Eiffel

Thursday October 12th 2023, 17:00 – 18:00 CET
Sorbonne University – Faculté des Sciences
Campus Pierre et Marie Curie

4 place Jussieu, 75005 Paris
Tower 24 → 1st floor → Corridor 25-26 → Room #105 (at LIP6 lab)
See Campus Map

Registration required.


Politique de l’IA

Investigating the debates around the regulation of AI, we observed that definitional problems lie at the heart of normative conflicts over how to subject AI to "social control", be it technical, ethical, legal or political. Taking the varied meanings of AI as the guiding thread of the analysis, this article aims to contribute to the understanding of the normative tensions over its control. We propose a mapping of places, actors and approaches that shows how the debates around the control of AI are structured into four distinct normative arenas: transhumanist speculation on the dangers of a superintelligence and the problem of controlling its alignment with human values; the self-responsibilisation of researchers developing a science entirely devoted to the technical certification of machines; denunciations of the harmful effects of AI systems on fundamental rights and the control of power rebalancing; and, finally, the European regulation of the market through safety control of AI products and services. From this map, we propose a "political" analysis of what is at stake in regulation, offering a compass to help navigate the current debates.

Bilel Benbouzid is an Associate Professor (Maître de Conférences) at Université Gustave Eiffel. He is a researcher at LISIS, an interdisciplinary laboratory devoted to the study of science and innovation in society. His research falls within the sociology of science and technology, a field that studies the social and political dimensions of technoscience. After working on predictive policing, he is now turning his research towards a sociology of the politics of Artificial Intelligence. On this topic, he has notably published in Réseaux, with Yannick Meneceur and Nathalie Alisa Smuha, the article "Quatre nuances de régulation de l’intelligence artificielle. Une cartographie des conflits de définition" and, in the same journal with Dominique Cardon, the introductory article "Contrôler les IA". In this talk, Bilel Benbouzid will present his ongoing research through a writing project in progress: a book provisionally entitled "A la recherche du politique de l’intelligence artificielle : fiction, science, morale et droit".

[PhD Defense] Clara Bove: Designing and evaluating explanation user interfaces for complex machine learning systems


We are pleased to announce that Clara Bove, TRAIL researcher, will be defending her PhD thesis entitled:

Designing and evaluating explanation user interfaces for complex machine learning systems
Clara Bove, PhD student at Sorbonne Université and XAI Researcher at AXA 

Friday September 15th 2023, 14:00  CET


On-site only

Sorbonne University – Campus Pierre et Marie Curie
Room 105, Corridor 25-26, Tower 55, 1st floor
4 place Jussieu, 75005 Paris

Map: https://sciences.sorbonne-universite.fr/vie-de-campus-sciences/accueil-vie-pratique/plan-du-campus


Designing and evaluating explanation user interfaces for complex machine learning systems

The thesis will be defended before the following jury members:

  • Katrien VERBERT, KU Leuven University
  • Nicolas LABROCHE, Université de Tours
  • Jean-Gabriel GANASCIA, Sorbonne Université
  • Theodoros EVGENIOU, INSEAD
  • Marie-Jeanne LESOT, Sorbonne Université (supervisor)
  • Charles TIJUS, Université Paris 8 (supervisor)
  • Marcin DETYNIECKI, AXA & Sorbonne Université/CNRS (supervisor)

Abstract:

This thesis is set in the field of human-centered eXplainable AI (XAI), and more specifically addresses the intelligibility of explanations for non-expert users. The technical context is as follows: on the one hand, an opaque classifier or regressor provides a prediction and a post-hoc XAI approach generates information that acts as an explanation; on the other hand, the user receives both the prediction and these explanations. In this context, several issues can limit the quality of explanations. We focus on three of them: the lack of contextual information in explanations, the lack of guidance for designing features that let the user explore the explanation interface, and the potential confusion generated by the amount of information presented.
We develop an experimental procedure for designing explanation user interfaces (XUIs) and evaluating their intelligibility with non-expert users. We propose several generic XAI principles and their implementation in different XUIs, for explanations expressed as attribute importance scores as well as counterfactual examples. We evaluate objective understanding of and satisfaction with these interfaces through user studies. At a more fundamental level, we consider the theoretical question of possible inconsistencies in these explanations. We study these inconsistencies and propose a typology to structure the most common ones in the literature.
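
To make this setting concrete, here is a minimal Python sketch (not taken from the thesis) of the pipeline described above: an opaque classifier produces a prediction, a crude post-hoc ablation yields attribute-importance scores, and a bare-bones text "interface" presents them together with contextual information for a non-expert user. The dataset, model and feature names are hypothetical and chosen only for illustration.

```python
# Minimal sketch (not taken from the thesis): an opaque classifier makes a
# prediction, a crude post-hoc ablation produces attribute-importance scores,
# and a text "interface" adds contextual information for the user.
# Dataset, model and feature names are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["income", "debt_ratio", "age", "num_late_payments"]  # illustrative names

X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)      # opaque model

def occlusion_importance(model, X_ref, x):
    """Importance of each feature of x: drop in the predicted probability
    when that feature is replaced by its dataset mean (a crude ablation)."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = []
    for j in range(x.size):
        x_pert = x.copy()
        x_pert[j] = X_ref[:, j].mean()
        scores.append(base - model.predict_proba(x_pert.reshape(1, -1))[0, 1])
    return np.array(scores)

x = X[0]
prediction = black_box.predict_proba(x.reshape(1, -1))[0, 1]
scores = occlusion_importance(black_box, X, x)

# A bare-bones "explanation user interface": the prediction plus ranked
# attributes, each shown with context (the user's value vs. the average).
print(f"Predicted probability of the positive class: {prediction:.2f}")
for j in np.argsort(-np.abs(scores)):
    print(f"  {FEATURES[j]:>18}: contribution {scores[j]:+.2f} "
          f"(your value {x[j]:+.2f}, average {X[:, j].mean():+.2f})")
```

The mean-replacement ablation used here is only one rough way to obtain importance scores; the thesis is concerned with how such scores are presented to and explored by non-expert users, not with the attribution method itself.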

About TRAIL, the joint research lab

The Trustworthy and Responsible Artificial Intelligence Lab (TRAIL) is a Sorbonne University and AXA joint research lab. Its aim is to study and address challenges linked to a generalized use of AI systems, in particular the explainability, fairness, robustness, governance and regulation of AI. The lab is composed of researchers from Sorbonne University and AXA. It is based in the Pierre & Marie Curie Campus at the Faculté des Sciences de Sorbonne University, inside the LIP6, a Computer Science lab from Sorbonne University and the CNRS (French National Center for Scientific Research) in Paris.

[TRAIL Seminar] Generation of realistic and robust counterfactual explanations | Talk from Victor Guyomard (in French)

We are pleased to invite you to the 5th seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab. This talk will be in French.

Generation of realistic and robust counterfactual explanations
Victor Guyomard, PhD Student at Orange Labs and Université de Rennes 

Thursday June 22nd 2023, 14:00  CET


On-site
On registration, within the limit of available seats

Sorbonne University – Campus Pierre et Marie Curie
Room 211, Corridor 55-65, Tower 55, 2nd floor
4 place Jussieu, 75005 Paris

Map: https://sciences.sorbonne-universite.fr/vie-de-campus-sciences/accueil-vie-pratique/plan-du-campus

Or Online
Zoom link: posted the day before the talk on https://lfi.lip6.fr/seminaires/


Generation of realistic and robust counterfactual explanations

The aim of this talk is to explain individual decisions made by AI systems, with a focus on counterfactual explanations. Two contributions will be presented:

  1. The development of VCnet, a self-explanatory model that combines a predictor and a counterfactual generator that are learned simultaneously. The architecture is based on a variational autoencoder, conditioned on the output of the predictor to generate realistic counterfactuals (close to the distribution of the target class). VCnet is able to generate predictions as well as counterfactual explanations without having to solve another minimization problem.
  2. The proposal of a new formalism, CROCO, for generating counterfactual explanations that are robust to variations in the counterfactual’s inputs. This kind of robustness involves finding a compromise between counterfactual robustness and proximity to the example to be explained. CROCO generates robust counterfactuals while effectively managing this trade-off and guaranteeing the user a minimal level of robustness. Empirical evaluations on tabular datasets confirm the relevance and effectiveness of the proposed approach. (A toy sketch of this robustness/proximity trade-off follows after this list.)
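
As a rough illustration of the underlying mechanics (this is neither VCnet nor CROCO), the sketch below runs a plain gradient-based counterfactual search on a toy logistic model: the objective trades off reaching the target class against staying close to the instance to be explained, and a naive Monte-Carlo check estimates how often the resulting counterfactual survives random input perturbations. The model weights, hyperparameters and noise level are invented for the example.

```python
# Minimal sketch (neither VCnet nor CROCO): gradient-based counterfactual
# search on a toy logistic "black box", plus a naive robustness estimate.
# All weights and hyperparameters below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([1.5, -2.0, 0.8]), -0.2          # toy logistic model

def predict(x):
    """Probability of the positive (target) class."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.9, lam=0.05, lr=0.1, steps=500):
    """Minimise (f(x_cf) - target)^2 + lam * ||x_cf - x||^2 by gradient descent.
    lam controls the proximity/validity trade-off discussed above."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict(x_cf)
        grad_pred = 2 * (p - target) * p * (1 - p) * w   # chain rule through the sigmoid
        grad_prox = 2 * lam * (x_cf - x)
        x_cf -= lr * (grad_pred + grad_prox)
    return x_cf

x = np.array([-1.0, 1.0, 0.0])                   # factual instance, predicted class ~0
x_cf = counterfactual(x)

# Naive robustness estimate: fraction of perturbed copies of the
# counterfactual that still fall in the target class.
noise = rng.normal(scale=0.3, size=(1000, x.size))
robustness = np.mean(predict(x_cf + noise) > 0.5)

print(f"f(x)    = {predict(x):.2f}")
print(f"f(x_cf) = {predict(x_cf):.2f}, distance to x = {np.linalg.norm(x_cf - x):.2f}")
print(f"share of perturbed counterfactuals still in the target class: {robustness:.1%}")
```

Increasing lam keeps the counterfactual closer to the original instance but may prevent the class from flipping, while decreasing it produces counterfactuals that flip more decisively and tend to survive the perturbation check more often; managing this trade-off while guaranteeing a minimal level of robustness is precisely what CROCO addresses.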

This seminar is co-organized with the IEEE Computational Intelligence Society.

About TRAIL, the joint research lab

The Trustworthy and Responsible Artificial Intelligence Lab (TRAIL) is a Sorbonne University and AXA joint research lab. Its aim is to study and address challenges linked to a generalized use of AI systems, in particular the explainability, fairness, robustness, governance and regulation of AI. The lab is composed of researchers from Sorbonne University and AXA. It is based in the Pierre & Marie Curie Campus at the Faculté des Sciences de Sorbonne University, inside the LIP6, a Computer Science lab from Sorbonne University and the CNRS (French National Center for Scientific Research) in Paris.