Inauguration of the Trustworthy and Responsible AI Lab | Talks from Prof. Joanna Bryson & Prof. David Martens

We are pleased to invite you to the inauguration of the Trustworthy and Responsible AI Lab (TRAIL – https://trail.lip6.fr), a joint research lab between AXA and Sorbonne University.

Wednesday March 23rd 2022, 17:00 – 19:00 CET

Sorbonne University – Faculté des Sciences – Pierre & Marie Curie Campus

4 place Jussieu, 75005 Paris

Esclangon building – Amphi Astier

[Monday, March 21st, 16:00 CET] Due to the high number of registrations we have received and the limited capacity of the amphitheater, we can no longer accept new registrations for on-site attendance. If you did not register before Monday, March 21st, 16:00 CET, you can only join the meeting online.


To attend the online event: Click here to join the meeting  (MS Teams link – starts March 23rd 2022, 17:00 CET)

 


Please find the agenda below:

  • Opening words – Inauguration of the joint lab

Clémence Magnien, Co-Head of LIP6 (the Computer Science lab at Sorbonne University)

Roland Scharrer, AXA Group Chief Emerging Technology & Data Officer

Marcin Detyniecki, AXA Group Chief Data Scientist & Head of R&D

Christophe Marsala, Head of the LFI team at LIP6

  • Artificial Intelligence is Necessarily Irresponsible

Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany

Whether you think of Artificial Intelligence (AI) as an academic discipline or a software engineering toolset, it has been growing fairly steadily since its inception in the 1950s. Yet the hype surrounding how AI is conceived is cyclic. In this talk I will present a theory of moral and ethical agency which explains why no artefact can hold responsibility; it can only communicate responsibility between humans and human organisations. I will then argue that engineering AI systems for transparency and accountability is entirely plausible, and discuss forthcoming European legislation, the AI Regulation (or AI Act), and how it relates to these capacities. Finally, I will briefly discuss what the EU’s place in the AI “arms race” tells us about the necessary egalitarian foundations of both responsibility and justice.

Joanna Bryson is Professor of Ethics and Technology at the Hertie School. Her research focuses on the impact of technology on human cooperation, and on AI/ICT governance. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath. She has also been affiliated with the Department of Psychology at Harvard University, the Department of Anthropology at the University of Oxford, the School of Social Sciences at the University of Mannheim and the Princeton Center for Information Technology Policy. During her PhD she observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication, “Just Another Artifact”, in 1998. In 2010, she co-authored the first national-level AI ethics policy, the UK’s Principles of Robotics. She holds degrees in psychology and artificial intelligence from the University of Chicago (BA), the University of Edinburgh (MSc and MPhil), and the Massachusetts Institute of Technology (PhD). Since July 2020, Prof. Bryson has been one of nine experts nominated by Germany to the Global Partnership on Artificial Intelligence.

https://www.joannajbryson.org/

  • The Counterfactual Explanation: Yet More Algorithms as a Solution to Explain Complex Models?

David Martens, Professor of Data Science at the University of Antwerp, Belgium

The inability of many “black box” prediction models to explain the decisions they make has been widely acknowledged. Interestingly, the solution turns out to be the introduction of yet more AI algorithms that explain the decisions made by complex AI models. Explaining the predictions of such models has become an important ethical component and has gained increasing attention from the AI research community and even legislators, resulting in a new field termed “Explainable AI” (XAI). Counterfactual (CF) explanations have emerged as an important paradigm in this field, providing evidence on how a (1) prediction model came to a (2) decision for a (3) specific data instance. In this talk, I’ll first provide an introduction to the counterfactual explanation and compare it to other popular XAI approaches. Next, some counterfactual generating techniques are discussed for tabular, textual, behavioral and image data, accompanied by example applications demonstrating their value in a range of areas. (A minimal code sketch of the counterfactual idea follows this entry.)

David Martens is Professor of Data Science at the University of Antwerp, Belgium. He teaches data mining and data science ethics to postgraduate students studying business economics and business engineering. David’s work with Foster Provost on instance-based explanations is regarded as among the first to introduce counterfactual explanations to the AI domain. His research has been published in high-impact journals and has received several awards. David is also the author of the book “Data Science Ethics: Concepts, Techniques and Cautionary Tales”, published by Oxford University Press.

https://www.uantwerpen.be/en/staff/david-martens/
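
To make the counterfactual paradigm concrete, here is a minimal sketch in Python with scikit-learn, under our own assumptions rather than any technique from the talk: given (1) a trained model, (2) its decision, and (3) a specific instance, we greedily nudge one feature at a time until the decision flips, and the resulting change is the explanation. The toy data, the model choice and the greedy search are all illustrative.

# Illustrative sketch only: a naive greedy search for a counterfactual.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: two numeric features, binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(model, x, step=0.1, max_steps=200):
    """Nudge one feature at a time until the model's decision flips."""
    x_cf = x.copy()
    original = model.predict([x])[0]
    for _ in range(max_steps):
        if model.predict([x_cf])[0] != original:
            return x_cf  # the difference x_cf - x is the explanation
        # Among all single-feature nudges, keep the one that lowers the
        # predicted probability of the original class the most.
        candidates = [x_cf.copy() for _ in range(2 * len(x_cf))]
        for i in range(len(x_cf)):
            candidates[2 * i][i] += step
            candidates[2 * i + 1][i] -= step
        x_cf = min(candidates, key=lambda c: model.predict_proba([c])[0, original])
    return None  # no counterfactual found within the search budget

x = X[0]
print("instance:", x, "-> decision:", model.predict([x])[0])
print("counterfactual:", counterfactual(model, x))

Practical counterfactual generators go further than this sketch, for instance by constraining the search so the change stays sparse and plausible; the talk covers such techniques for tabular, textual, behavioral and image data.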

  • The seminar will be followed by a cocktail reception

 

Info & Contacts

On-site attendance is free, with mandatory registration by email before March 22nd (now closed; see the note above)

Contact: trail@listes.lip6.fr

Online event (MS Teams link), starting March 23rd at 17:00 CET: Click here to join the meeting

About TRAIL, the joint research lab

The Trustworthy and Responsible Artificial Intelligence Lab (TRAIL) is a joint research lab of Sorbonne University and AXA. Its aim is to study and address the challenges raised by the widespread use of AI systems, in particular the explainability, fairness, robustness, governance and regulation of AI. The lab brings together researchers from Sorbonne University and AXA. It is based on the Pierre & Marie Curie Campus of the Faculté des Sciences of Sorbonne University, within LIP6, a Computer Science lab of Sorbonne University and the CNRS (French National Center for Scientific Research), in Paris.