
[Open PhD Position] PhD Position on Trustworthy Large Language Models with Natural Language Instructions

As part of the TRAIL joint research lab between AXA and Sorbonne University, we are pleased to open a new PhD position on Trustworthy LLMs (Large Language Models).
The objective is to investigate trustworthy AI alignment, exploring novel methodologies that bridge the gap between the capabilities of LLMs and the ethical considerations and expert knowledge essential for their responsible deployment.

The position is open at the TRAIL joint research lab at Sorbonne University.

[Open PhD Position] PhD Position on the Evaluation of Reasoning in Large Language Models

As part of the TRAIL joint research lab between AXA and Sorbonne University, we are pleased to open a new PhD position on the evaluation and assessment of reasoning in LLMs (Large Language Models) and Foundation Models.
After developing adequate assessments of reasoning abilities, the PhD will study the reasoning abilities and limits of LLMs. This research will also explore the link with interpretability, aiming to better understand how LLMs reason, with contributions to the reliability and transparency of these models.

The position is open at the TRAIL joint research lab at Sorbonne University.

[TRAIL Seminar] PhD Panorama on Responsible AI: Three Talks on Regulation, Ethics, and Explainability | Talks from Mélanie Gornet, Milan Bhan & Thomas Souverain

We are pleased to invite you to the 8th seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab.
For this seminar, we are excited to introduce a new format where we invite doctoral students specializing in responsible AI to share their research.

PhD Panorama on Responsible AI:
Three Talks on Regulation, Ethics, and Explainability

Mélanie Gornet, PhD Student at Télécom Paris
Milan Bhan, PhD Student at Sorbonne University
Thomas Souverain, PhD Student at ENS Ulm

Thursday November 16th 2023, 16:00 – 18:00 CET
Sorbonne University – Faculté des Sciences
Campus Pierre et Marie Curie

4 place Jussieu, 75005 Paris
Tower 24 → 4th floor → Corridor 24-25 → Room #405 (at LIP6 lab)
See Campus Map


The European approach to regulating AI through technical standards, Mélanie Gornet

European institutions are currently finalizing the AI Act, a new regulation on artificial intelligence. The AI Act will require manufacturers of high-risk AI systems to affix a European Conformity (CE) mark on their products to show compliance with the AI Act and permit AI products to circulate freely in Europe. The issuance of a CE mark will depend on compliance with harmonized technical standards (HSs), developed by European standardization organizations. HSs and CE marking are long-established regulatory techniques in the EU to deal with product safety. To date, however, they have never been used to attest to compliance with fundamental rights, and yet part of the goal of the AI Act is to ensure that AI systems respect these rights.
What is the role of HSs and CE marking in the AI Act and how does it differ from other European legislation? What problems are posed by the use of HSs and CE marking to assess the protection of fundamental rights and can they be avoided?

Mélanie Gornet is a PhD student at Télécom Paris, Institut Polytechnique de Paris.
Her research focuses on the regulation of artificial intelligence, encompassing social, legal and technical aspects. Her current projects focus on the standardisation and operational implementation of ethical criteria for AI systems, such as explainability and fairness. She is particularly interested in the compliance of AI systems in the context of the proposed European AI Act regulation.
She previously studied at Sciences Po Paris and ISAE-SUPAERO, specializing in AI and data science, and supported the working group on facial and behavioural recognition of the Comité National Pilote d’Ethique du Numérique (CNPEN).

Generating textual counterfactual explanations, Milan Bhan

Counterfactual examples explain a prediction by highlighting the changes to an instance that flip the outcome of a classifier. This work proposes TIGTEC, an efficient and modular method for generating sparse, plausible and diverse counterfactual explanations for textual data. TIGTEC is a text-editing heuristic that targets and modifies words with a high contribution, using local feature importance. A new attention-based local feature importance is proposed. Counterfactual candidates are generated and assessed with a cost function integrating semantic distance, while the solution space is efficiently explored in a beam-search fashion. The experiments conducted show the relevance of TIGTEC in terms of success rate, sparsity, diversity and plausibility. The method can be used in either a model-specific or a model-agnostic way, which makes it very convenient for generating counterfactual explanations.
Even under a sparsity constraint, counterfactual generation can lead to numerous changes from the initial text, making the explanation hard to understand. We propose Counterfactual Feature Importance, a method to make non-sparse counterfactual explanations more intelligible. Counterfactual Feature Importance assesses the importance of token changes between an instance to explain and its counterfactual example. We develop two ways of computing Counterfactual Feature Importance, based respectively on classifier gradient computation and on the evolution of the TIGTEC loss during the counterfactual search. We then design a global version of Counterfactual Feature Importance, providing information about the semantic fields that globally impact classifier predictions. Counterfactual Feature Importance makes it possible to focus on the impactful parts of counterfactual explanations, making explanations that involve numerous changes more understandable.
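
To make the editing heuristic more concrete, here is a minimal, hypothetical sketch in the spirit of the approach described above. It is not the authors' TIGTEC implementation: the toy dataset and substitution lexicon are our own assumptions, leave-one-out scores stand in for attention-based importance, a hand-written lexicon stands in for a masked language model, and a greedy loop stands in for beam search with a semantic-distance cost.

```python
# Hypothetical sketch of counterfactual text editing: score tokens by a simple
# leave-one-out importance, then greedily substitute the most important tokens
# until the classifier's prediction flips. Illustrative only, not TIGTEC itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment classifier (assumption: any model exposing predict/predict_proba works).
texts = ["great movie, really enjoyable", "awful plot and terrible acting",
         "enjoyable and great soundtrack", "terrible, boring and awful"]
labels = [1, 0, 1, 0]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

# Hand-written substitution lexicon (assumption; TIGTEC instead queries a language model).
SUBSTITUTES = {"great": ["terrible"], "enjoyable": ["boring"],
               "awful": ["great"], "terrible": ["enjoyable"], "boring": ["enjoyable"]}

def token_importance(tokens, target_class):
    """Leave-one-out importance: drop in the predicted probability of the current
    class when a token is removed (a crude stand-in for attention-based importance)."""
    base = clf.predict_proba([" ".join(tokens)])[0][target_class]
    return [base - clf.predict_proba([" ".join(tokens[:i] + tokens[i + 1:])])[0][target_class]
            for i in range(len(tokens))]

def greedy_counterfactual(text, max_edits=3):
    tokens = text.split()
    original_class = int(clf.predict([text])[0])
    for _ in range(max_edits):
        scores = token_importance(tokens, original_class)
        edited_this_round = False
        # Try to substitute tokens in decreasing order of importance.
        for i in sorted(range(len(tokens)), key=lambda i: -scores[i]):
            candidates = SUBSTITUTES.get(tokens[i].strip(",."), [])
            if not candidates:
                continue
            tokens = tokens[:i] + [candidates[0]] + tokens[i + 1:]
            edited_this_round = True
            if int(clf.predict([" ".join(tokens)])[0]) != original_class:
                return " ".join(tokens)  # prediction flipped: counterfactual found
            break  # keep the edit, recompute importances, try another token
        if not edited_this_round:
            break  # no substitutable token left
    return None  # no counterfactual found within the edit budget

print(greedy_counterfactual("great movie, really enjoyable"))
```

On this toy example, the positive prediction should flip after one or two swaps of high-importance words, which is the behaviour the sparsity and plausibility criteria above are meant to control.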

Milan Bhan is a PhD student at Sorbonne University, in the LIP6 lab, and senior consultant at Ekimetrics.
His work focuses mainly on interpretability and natural language processing. He has developed a method for generating sparse textual counterfactual examples.

From Explainability to Fairness in AI – Insights of a PhD in Philosophy, Thomas Souverain

Recent Machine Learning (ML) models are commonly considered “black boxes”, mainly due to their ability to learn statistical patterns from data rather than human rules. Research in philosophy aims to take a step back from this new kind of Artificial Intelligence (AI), understood in the general sense of attempts to delegate intellective functions to automated computing systems. What is the impact of these opaque models on human understanding and ethical use?

We present here our PhD approach, which digs into the opacity of AI from both the fairness and the explainability sides, applied to the cases of loan lending and job offers. The first axis investigates the convergence between the techniques introduced for fairer AI and human ideas of justice. The second highlights the gap between explainable AI (xAI) techniques and their relevance as perceived by users, putting forward epistemological criteria to complement xAI and shape a full explanation. As explainability and fairness only represent a part of AI ethics, we finally broaden this pragmatic view to the other principles required to adapt ML to humans.

Thomas Souverain is currently finishing a Philosophy PhD on AI ethics at ENS Ulm (2020-2024), on the topic: “Can we Explain AI? Technical Solutions and Ethical Issues in algorithmic Loan Lending and Job Offering”.
To that end, the PhD is a CIFRE partnership between the ENS Ulm (Institut Jean Nicod, Paris Sciences et Lettres University) and companies providing Thomas with programming skills and field expertise. Since 2020, Thomas has studied the ethical challenges of AI in applied cases, namely loan lending (DreamQuark) and job offering (Pôle Emploi DSI).
Drawing on this rich data-science material, in particular work to correct AI models for fairness or to shape them to be better understood, his advisor Paul Egré (ENS Ulm, IJN, PSL) helps him determine to what extent algorithmic fairness is tied to ideas of justice, and to what extent explainable AI techniques can be translated into human categories of understanding.
Besides his specialty in explainability and fairness, Thomas works to make AI ethics more familiar and implementable. He teaches AI ethics to computer science students (Paris 13 University master’s degree). During his research stays in Oxford and Stanford, he organized meetings between experts in law, the social sciences, and computer science to bridge the gaps between institutions and disciplines. Since 2021, he has also led a working group of AI ethicists and experts (La Poste, Pôle Emploi, Gendarmerie…) within Hub France IA, where he develops and shares good practices to anticipate the trust requirements of the European AI Act.

[TRAIL Seminar] Large Language Models and Law | Talk from Prof. Harry Surden

We are pleased to invite you to the 6th seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab.

Large Language Models and Law

Harry Surden
Professor of Law at the University of Colorado Law School

Thursday June 29th 2023, 14:00 – 15:00 CET
Sorbonne University – Faculté des Sciences
Campus Pierre et Marie Curie

4 place Jussieu, 75005 Paris
Tower 24 → 4th floor → Corridor 24-25 → Room #405 (at LIP6 lab)
See Campus Map


Large Language Models and Law

Artificial Intelligence (AI) approaches based on Large Language Model (LLM) technologies, such as GPT-4, have achieved remarkable gains in capabilities over the past year. Such models are capable of fluently producing, and in some cases analyzing, text. Given that most legal systems use language and text as the central communication device, a natural question arises: to what extent might Large Language Models (LLMs) play a role in the practice of law?

In this talk, I will introduce large language model technology, briefly explain how these models work, and then explore their uses and limits within law. I will show some valuable use cases in producing drafts of US legal documents, such as contracts and patent applications. However, I will also caution about their limits, given current capabilities, and point to some directions for improvement.

Harry Surden is a Professor of Law at the University of Colorado, and Associate Director at Stanford University’s CodeX Center for Legal Informatics.

Professor Surden is one of the leading scholars of artificial intelligence and law and is the creator of Computable Contracts. His articles “Structural Rights in Privacy”, “Machine Learning and Law”, and “Artificial Intelligence and Law: An Overview” have been widely cited. Professor Surden has a background in both computer science and law. Prior to entering academia, he worked as a professional software engineer at Cisco Systems and Bloomberg L.P. He is a graduate of Stanford University and Cornell University, both with honors.

https://www.harrysurden.com/

[TRAIL Seminar] Human-centered AI: How Can We Support End-Users to Interact with AI? | Talks from Katrien Verbert & Jeroen Ooge

We are pleased to invite you to the 4th seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab.

Human-Centered AI: How Can We Support End-Users to Interact with AI?
Katrien Verbert, Professor at KU Leuven

Visual Explanations for AI Decisions: Fostering Trust in AI through Transparency and Control
Jeroen Ooge, PhD Candidate at KU Leuven

Friday April 7th 2023, 15:00 – 16:30 CET


On-site
Registration required, within the limit of available seats

Sorbonne University – Faculté des Sciences
Salle de conférences, SCAI, bâtiment Esclangon, 1er étage
Campus Pierre et Marie Curie, Sorbonne Université
4 place Jussieu, 75005 Paris

Or Online
MS Teams link upon registration

> Registration <


Agenda:

Human-Centered AI: How Can We Support End-Users to Interact with AI?

Katrien Verbert, Professor at the Augment research group of the Computer Science Department of KU Leuven

Despite the long history of work on explanations in the Machine Learning, AI and Recommender Systems literature, current efforts face unprecedented difficulties: contemporary models are more complex and less interpretable than ever. As such models are used in many day-to-day applications, justifying their decisions for non-expert users with little or no technical knowledge will only become more crucial. Although several explanation methods have been proposed, little work has been done to evaluate whether the proposed methods indeed enhance human interpretability. Many existing methods also require significant expertise and are static. Several researchers have voiced the need for interaction with explanations as a core requirement to support understanding. In this talk, I will present our work on explanation methods that are tailored to the needs of non-expert users in AI. In addition, I will present the results of several user studies that investigate how such explanations interact with different personal characteristics, such as expertise, need for cognition and visual working memory.

Katrien Verbert is Professor at the Augment research group of the Computer Science Department of KU Leuven. She obtained a doctoral degree in Computer Science in 2008 at KU Leuven, Belgium. She was a postdoctoral researcher of the Research Foundation – Flanders (FWO) at KU Leuven. She was an Assistant Professor at TU Eindhoven, the Netherlands (2013–2014) and Vrije Universiteit Brussel, Belgium (2014–2015). Her research interests include visualisation techniques, recommender systems, explainable AI, and visual analytics. She has been involved in several European and Flemish projects on these topics, including the EU ROLE, STELLAR, STELA, ABLE, LALA, PERSFO, Smart Tags and BigDataGrapes projects. She is also involved in the organisation of several conferences and workshops (general co-chair IUI 2021, program chair LAK 2020, general chair EC-TEL 2017, program chair EC-TEL 2016, workshop chair EDM 2015, program chair LAK 2013 and program co-chair of the EdRecSys, VISLA and XLA workshop series, DC chair IUI 2017, DC chair LAK 2019).

 

Visual Explanations for AI Decisions: Fostering Trust in AI through Transparency and Control

Jeroen Ooge, PhD Candidate in computer science at KU Leuven

Automated systems increasingly support decision-making with AI. While such automation often improves working efficiency, it also raises questions about the origin and validity of model outcomes. Explaining model outcomes is not trivial: AI models are black boxes to people unfamiliar with AI. A promising solution to realise explainable AI (XAI) is visualisation. Through interactive visualisations, people can better understand models’ behaviour and reasoning process, which helps them contextualise model outcomes. Importantly, different people and different contexts require different solutions. Thus, human-centred XAI methods are essential. In this talk, Jeroen will cover his XAI work on transparency and control, applied in healthcare and education. He will demonstrate some of the many visual interfaces he designed, and also present the user studies he conducted to study their impact on people’s behaviours, for example, their trust in AI decisions.

Jeroen Ooge holds two Master of Science degrees (fundamental mathematics and applied informatics) and is now finalising a PhD in computer science at KU Leuven in Belgium. His research focuses on explainable AI. In particular, Jeroen investigates how visualisations can help people better understand AI models, calibrate their trust in these models, and steer model outcomes with domain expertise. He has studied and designed numerous interactive visualisations tailored to specific target audiences and application contexts.


[TRAIL Seminar] Marcin Detyniecki – When responsible AI research needs to meet reality

We are pleased to invite you to the 3rd seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab.

Monday January 23rd 2023, 17:00 – 19:00 CET


On-site
Registration required, within the limit of available seats

Sorbonne University – Faculté des Sciences – Pierre & Marie Curie Campus (Jussieu)

4 place Jussieu, 75005 Paris

LIP6 – Jacques Pitrat Room
(Jussieu Campus – Tower 26 – 1st floor – Corridor 25-26 – Room 105)

Or Online
MS Teams link upon registration

> Registration <


Agenda:

When responsible AI research needs to meet reality

Marcin Detyniecki, Group Chief Data Scientist & Head of AI Research and Thought Leadership at the global insurance leader AXA

Responsible AI topics are now firmly established in the research community, with explainable AI and AI fairness as flagships. At the other end of the spectrum, these topics are now seriously attracting attention in organizations and companies, especially through the lens of AI governance, a discipline that aims to analyze and address the challenges arising from the widespread use of AI in practice, as AI regulations are around the corner.
This leads companies to focus on the compliance aspects of AI projects. In practice, data scientists and organizations are in a fog, missing adequate guidance and solutions from research to achieve responsible AI.

In this talk, we will discuss how large companies, like AXA, currently see the responsible AI topic and why the current research output only provides partially actionable methodologies and solutions. We will discuss and illustrate with some concrete examples how the research community could better address the scientific challenges of this new applied responsible AI practice.

Marcin Detyniecki is Group Chief Data Scientist & Head of AI Research and Thought Leadership at the global insurance leader AXA. He leverages his expertise to help AXA deliver value, overcome AI- and ML-related business challenges, and achieve its transformation into a tech-led company. He leads the artificial intelligence R&D activity at group level. His team works on setting up a framework enabling fair, safe and explainable ML to deliver value.

Marcin is also active in several think-and-do tanks, with roles including vice-president and board member of Impact AI, member of the Consultative Expert Group on Digital Ethics in Insurance at EIOPA, and technical expert at Institut Montaigne. He has held several academic positions, including Research Scientist at both CNRS and Sorbonne University. He holds a Ph.D. in Computer Science from Université Pierre et Marie Curie.


[Closed application] PhD Position in ML Explainability – Differential Explainability

As part of the TRAIL joint research lab between AXA and Sorbonne University, we are pleased to open a new PhD position in machine learning explainability.
The objective is to develop the novel idea of “differential interpretability” and to propose solutions for explaining the differences between two models, in particular after re-training or mechanisms such as transfer learning.
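
Purely as an illustration of what “explaining the differences between two models” could look like in code (our own naive assumption, not the approach to be developed in this PhD), one could fit an interpretable surrogate on the prediction differences between a model and its re-trained version:

```python
# Illustrative sketch of one naive take on differential explainability: explain
# where two versions of a model disagree by fitting a shallow decision tree on
# their prediction differences. Dataset and model choices are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# "Before": model trained on the first half of the data; "after": model re-trained
# on the full data (a stand-in for re-training or transfer learning).
model_before = RandomForestClassifier(random_state=0).fit(X[:1000], y[:1000])
model_after = RandomForestClassifier(random_state=0).fit(X, y)

# Difference in predicted probabilities on a reference set.
delta = model_after.predict_proba(X)[:, 1] - model_before.predict_proba(X)[:, 1]

# Shallow surrogate tree regressing the difference on the input features:
# its splits indicate which regions of the feature space changed the most.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, delta)
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(X.shape[1])]))
```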

The candidate should have a master’s degree in Computer Science, AI/ML, applied mathematics or an equivalent field, experience in research or R&D, and strong motivation.

The position is open at the TRAIL joint research lab at Sorbonne University.

  • Title: Defining Differential Explanations: Understanding the Dynamic of Changes in Machine Learning Models
  • PhD Proposal (PDF): TRAIL_Differential_XAI_PhD_Proposal
  • To apply: the application is closed

[TRAIL Seminar] Explainability & Fairness in practice: the Scikit-Learn ecosystem

We are pleased to invite you to the 2nd seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab.

Wednesday May 11th 2022, 17:00 – 19:00 CET


On-site
Registration required, within the limit of available seats

Sorbonne University – Faculté des Sciences – Pierre & Marie Curie Campus (Jussieu)

4 place Jussieu, 75005 Paris

LIP6 – Jacques Pitrat Room (Corridor 25-26/ Room 105)

Or Online
MS Teams link upon registration

> Registration <


Agenda:

  • Inspect and (try) to interpret your Scikit-Learn machine learning models
    Guillaume Lemaître is a Research Engineer at INRIA and a Scikit-Learn core developer. He holds a PhD in Computer Science.

    This presentation will be divided into three parts to provide an overview of the different solutions offered by Scikit-Learn and its ecosystem to interpret (Scikit-Learn) machine learning models. First, we will focus on the family of linear models and present the common pitfalls when interpreting the coefficients of such models. In addition to being used as predictive models, linear models are also used by interpretability methods such as LIME or KernelSHAP. Then, we will look at a broader set of models (e.g. gradient boosting) and apply the inspection techniques implemented in Scikit-Learn to such models. Finally, we will take a tour of other tools for interpreting models that are not currently available in Scikit-Learn but are widely used in practice. A short illustrative sketch of these inspection tools follows the agenda below.

  • Measurement and Fairness: Questions and Practices to Make Algorithmic Decision Making more Fair
    Adrin Jalali is an ML Engineer at Hugging Face and a Scikit-learn and Fairlearn core developer. He is an organizer of PyData Berlin and holds a PhD in Bioinformatics.

    Machine learning is almost always used in systems that automate or semi-automate decision-making processes. These decisions are used in recommender systems, fraud detection, healthcare recommendation systems, etc. Many systems, if not most, can induce harm by giving a less desirable outcome in cases where they should in fact give a more desirable one, e.g. reporting an insurance claim as fraud when it is not.

    In this talk we first go through the different sources of harm that can creep into a machine-learning-based system (historical harm, representation bias, measurement bias, aggregation bias, learning bias, evaluation bias, and deployment bias), and the types of harm such a system can induce (allocation harm and quality-of-service harm).

    Taking lessons from the social sciences, one can see the input and output values of automated systems as measurements of constructs, or proxy measurements of those constructs. We go through a set of questions one should ask before and while working on such systems. Some of these questions can be answered quantitatively, and others qualitatively. Academics in the social sciences use a different jargon than the data scientists and computer scientists implementing automated systems. To bridge the gap, we explore concepts such as measurement, construct, construct validity, and construct reliability. We then go through concepts such as face validity, content validity, convergent validity, discriminant validity, predictive validity, hypothesis validity, and computational validity. By the end of this talk, you will be able to apply these lessons from the social sciences in your daily data science projects to see whether you should intervene at any stage of your product’s life cycle to make it fairer.
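
As a small illustration of the Scikit-Learn inspection tools discussed in the first talk, the sketch below inspects a linear model’s coefficients after scaling and computes permutation importance for a gradient-boosting model; the dataset and model choices are illustrative assumptions, not material from the talk.

```python
# Minimal sketch of two Scikit-Learn inspection tools: coefficient inspection for a
# linear model and permutation importance for any fitted estimator.
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear model: coefficients are only comparable across features after scaling,
# one of the pitfalls discussed in the first part of the talk.
linear = make_pipeline(StandardScaler(), Ridge()).fit(X_train, y_train)
print(pd.Series(linear[-1].coef_, index=X.columns).sort_values())

# Model-agnostic inspection: permutation importance on held-out data measures the
# drop in score when a single feature is shuffled.
boosted = HistGradientBoostingRegressor(random_state=0).fit(X_train, y_train)
result = permutation_importance(boosted, X_test, y_test, n_repeats=10, random_state=0)
print(pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False))
```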

 

 

 

Inauguration of the Trustworthy and Responsible AI Lab | Talks from Prof. Joanna Bryson & Prof. David Martens

We are pleased to invite you to the inauguration of the Trustworthy and Responsible AI Lab (TRAIL – https://trail.lip6.fr), a joint research lab between AXA and Sorbonne University.

Wednesday March 23rd 2022, 17:00 – 19:00 CET

Sorbonne University – Faculté des Sciences – Pierre & Marie Curie Campus

4 place Jussieu, 75005 Paris

Esclangon building – Amphi Astier

[Monday, March 21st, 16:00 CET] Due to the high number of registrations we have received and the limited capacity of the amphitheater, it is no longer possible to accept new registrations to attend the meeting on site. If you did not register before Monday, March 21st, 16:00 CET, you can only join the meeting online.


To attend the online event: Click here to join the meeting  (MS Teams link – starts March 23rd 2022, 17:00 CET)

 


Please find the agenda below:

  • Opening words – Inauguration of the joint lab

Clémence Magnien, Co-Head of the LIP6 (CS Lab at Sorbonne University)

Roland Scharrer, AXA Group Chief Emerging Technology & Data Officer

Marcin Detyniecki, AXA Group Chief Data Scientist & Head of R&D

Christophe Marsala, Head of the LFI team at LIP6

  • Artificial Intelligence is Necessarily Irresponsible

Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany

Whether you think of Artificial Intelligence (AI) as an academic discipline or a software engineering toolset, it has been growing fairly steadily since its inception in the 1950s. Yet the hype surrounding how AI is conceived is cyclic. In this talk I will present a theory of moral and ethical agency, which explains why anything that is an artefact cannot hold responsibility, only communicate it between humans and human organisations. I will then explain the complete plausibility of the systems engineering of AI for transparency and accountability, and discuss forthcoming European legislation, the AI Regulation (or AI Act), and how it relates to these capacities. Finally, I will briefly discuss what the EU’s place in the AI “arms race” tells us about the necessary egalitarian foundations of responsibility and justice both.

Joanna Bryson is Professor of Ethics and Technology at the Hertie School. Her research focuses on the impact of technology on human cooperation, and AI/ICT governance. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath. She has also been affiliated with the Department of Psychology at Harvard University, the Department of Anthropology at the University of Oxford, the School of Social Sciences at the University of Mannheim and the Princeton Center for Information Technology Policy. During her PhD she observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication “Just Another Artifact” in 1998. In 2010, she co-authored the first national-level AI ethics policy, the UK’s Principles of Robotics. She holds degrees in psychology and artificial intelligence from the University of Chicago (BA), the University of Edinburgh (MSc and MPhil), and the Massachusetts Institute of Technology (PhD). Since July 2020, Prof. Bryson has been one of nine experts nominated by Germany to the Global Partnership for Artificial Intelligence.

https://www.joannajbryson.org/

  • The Counterfactual Explanation: Yet More Algorithms as a Solution to Explain Complex Models?

David Martens, Professor of Data Science at the University of Antwerp, Belgium

The inability of many “black box” prediction models to explain the decisions they make has been widely acknowledged. Interestingly, the solution turns out to be the introduction of yet more AI algorithms that explain the decisions made by complex AI models. Explaining the predictions of such models has become an important ethical component and has gained increasing attention from the AI research community and even legislators, resulting in a new field termed “Explainable AI” (XAI). The counterfactual (CF) explanation has emerged as an important paradigm in this field, providing evidence on how a (1) prediction model came to a (2) decision for a (3) specific data instance. In this talk, I’ll first provide an introduction to the counterfactual explanation and compare it to other popular XAI approaches. Next, some counterfactual-generation techniques are discussed for tabular, textual, behavioral and image data, accompanied by example applications demonstrating the value in a range of areas.
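
To give a flavour of the counterfactual paradigm on tabular data, here is a deliberately simplistic sketch under our own assumptions (the dataset, model and single-feature scan are illustrative choices, not one of the techniques presented in the talk): for one instance, it searches for the smallest change to a single feature that flips the model’s prediction.

```python
# Toy illustration of a counterfactual explanation on tabular data: find the
# smallest single-feature change that flips the prediction for one instance.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

def single_feature_counterfactual(instance, n_steps=50):
    """Scan each feature over its observed range (nearest values first) and return
    the feature change with the smallest standardized cost that flips the class."""
    original = int(model.predict(instance.to_frame().T)[0])
    best = None
    for feature in X.columns:
        grid = np.linspace(X[feature].min(), X[feature].max(), n_steps)
        for value in sorted(grid, key=lambda v: abs(v - instance[feature])):
            candidate = instance.copy()
            candidate[feature] = value
            if int(model.predict(candidate.to_frame().T)[0]) != original:
                cost = abs(value - instance[feature]) / (X[feature].std() + 1e-12)
                if best is None or cost < best[2]:
                    best = (feature, value, cost)
                break  # nearest flipping value for this feature found
    return best

cf = single_feature_counterfactual(X.iloc[0])
if cf is not None:
    print(f"Changing '{cf[0]}' to {cf[1]:.2f} flips the prediction for instance 0.")
```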

David Martens is Professor of Data Science at the University of Antwerp, Belgium. He teaches data mining and data science and ethics to postgraduate students studying business economics and business engineering. David’s work with Foster Provost on instance-based explanations is regarded as one of the first to have introduced counterfactual explanations to the AI domain. His research has been published in high-impact journals and has received several awards. David is also the author of the book “Data Science Ethics: Concepts, Techniques and Cautionary Tales”, published by Oxford University Press.

https://www.uantwerpen.be/en/staff/david-martens/

  • The seminar will be followed by a cocktail reception

 

Info & Contacts

Onsite attendance is free, with mandatory registration by email before March 22nd

Contact: trail@listes.lip6.fr

Online event (MS Teams link), starting March 23rd at 17:00 CET: Click here to join the meeting

About TRAIL, the joint research lab

The Trustworthy and Responsible Artificial Intelligence Lab (TRAIL) is a Sorbonne University and AXA joint research lab. Its aim is to study and address the challenges linked to the generalized use of AI systems, in particular the explainability, fairness, robustness, governance and regulation of AI. The lab is composed of researchers from Sorbonne University and AXA. It is based on the Pierre & Marie Curie campus of the Faculté des Sciences of Sorbonne University, within LIP6, a computer science lab of Sorbonne University and the CNRS (French National Center for Scientific Research) in Paris.