Research

The development of artificial intelligence and machine learning techniques has led to a significant increase in the complexity of the automated decision systems (i.e. models) used by companies and organizations. The lack of transparency of such systems can lead to several issues: a loss of control over, and understanding of, their decision-making processes; limited possibilities of recourse for those affected by the decisions; the reproduction and amplification of biases and inequities present in society; and difficulty understanding and controlling situations where the systems are not reliable or robust. These outcomes are a legitimate concern for the users of such systems: the organizations themselves and their internal auditors, but also regulators, legislators and society as a whole.

TRAIL’s mission is to conduct research that addresses these concerns and develops the responsible and trusted use of artificial intelligence, and of machine learning in particular. Without being exhaustive, it explores the following topics:


Interpretability and explainability (XAI)

Modern machine learning techniques produce models whose operation is opaque and whose decision processes are incomprehensible to humans. The joint lab aims to enable the use of state-of-the-art AI and machine learning techniques while fostering accountability and trustworthiness, by providing explanations that are faithful to the model and its decision process, but also relevant, contextually appropriate and understandable by humans.

Our research follows several streams. The first is a better methodology for developing explainability (XAI) and interpretability, including both a definition of user needs and a clearer formalization of the proposed techniques, so that they meet those information needs faithfully (ideally in a guaranteed or measurable way). The second is the development of explanation methods that meet these formalized needs while respecting the external constraints of production (computation cost, privacy, etc.). The last is the development of explanations and interfaces that optimize the user’s understanding; a minimal example of a model-agnostic explanation technique is sketched below.
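As an illustration of the kind of post-hoc, model-agnostic explanation method studied here, the sketch below computes permutation feature importance with scikit-learn: each feature is shuffled on held-out data, and the resulting drop in accuracy estimates how much the model’s decisions rely on that feature. The dataset, model and parameters are illustrative choices, not TRAIL’s actual experimental setup.

```python
# Sketch: post-hoc, model-agnostic explanation via permutation importance.
# Dataset and model are illustrative; any fitted estimator would do.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the accuracy drop:
# a large drop means the model's decisions depend on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Such scores are faithful in the sense that they are measured directly on the model’s behaviour, which is one of the properties the first research stream seeks to formalize and guarantee.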


Fairness

One of the main risks in machine learning is that the datasets on which models are trained sometimes include biases that exist in our society. These biases usually produce negative outcomes for minorities defined by criteria such as age, gender, ethnicity or religion. If nothing is done, there is no guarantee that the trained models will not reproduce these biases, or even amplify them. The use of complex modelling techniques and opaque models increases this risk, and the means to eliminate it are themselves complex. Our research focuses on understanding and measuring the biases present in data and models. It also focuses on developing methods for debiasing datasets and models, either during training or a posteriori, in a way that respects the external constraints of production; a minimal example of bias measurement is sketched below.
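As one concrete instance of measuring bias in a model’s outputs, the sketch below computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The data, group labels and injected bias are entirely synthetic, invented for illustration only.

```python
# Sketch: measuring demographic parity on model predictions.
# The data and the bias injected into it are synthetic, for illustration.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)            # protected attribute (0 or 1)
y_pred = rng.random(1000) < (0.4 + 0.2 * group)  # predictions biased by group

# Demographic parity difference: the gap in positive-prediction rates
# between groups; 0 means the criterion is satisfied exactly.
rates = [y_pred[group == g].mean() for g in (0, 1)]
print(f"positive rate per group: {rates[0]:.3f} vs {rates[1]:.3f}")
print(f"demographic parity difference: {abs(rates[0] - rates[1]):.3f}")
```

Demographic parity is only one of several, sometimes mutually incompatible, fairness criteria; choosing which criterion to measure and enforce is itself part of the research question.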


Robustness, safety and reliability

The most effective machine learning models generalize well beyond the dataset used for training, but their fidelity to reality is neither guaranteed nor measurable over the entire domain in which a model is used. This creates risks for the robustness and reliability of models put into production.

Our research on this subject focuses on developing means to mitigate robustness and reliability problems, either during the training of models or a posteriori, for example by anticipating the zones of a model’s domain of use that cause such problems; a minimal example of this idea is sketched below.
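One simple a posteriori approach in this spirit is selective prediction: flag the inputs on which the model is likely to be unreliable and abstain there. The sketch below uses the model’s own confidence as the flag; the dataset, model and 0.9 threshold are illustrative assumptions, not a recommended recipe.

```python
# Sketch: selective prediction as a simple reliability baseline.
# Dataset, model and the 0.9 threshold are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
confidence = model.predict_proba(X_test).max(axis=1)

# Abstain on low-confidence inputs: accuracy on the accepted region is
# typically higher than on the full test set, at the cost of coverage.
accept = confidence >= 0.9
pred = model.predict(X_test)
print(f"coverage:            {accept.mean():.2f}")
print(f"accuracy (accepted): {(pred[accept] == y_test[accept]).mean():.3f}")
print(f"accuracy (overall):  {(pred == y_test).mean():.3f}")
```

Raw confidence is a weak proxy for true reliability, which is precisely why anticipating the unreliable zones of a model’s domain of use is a research problem in its own right.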


Regulation and governance of AI

To inform the aforementioned research and guide the implementation of AI in organizations, it is necessary to follow, and contribute to, the evolving debate and publications on the regulation and governance of AI. Our research focuses on how to set up AI governance systems in organizations, covering the above-mentioned aspects. It also seeks to determine the technical and operational issues surrounding this governance (levels of interpretability, robustness of systems, etc.). Finally, it looks for ways to contribute to establishing regulation that strikes a fair balance between advanced technical and research challenges and societal constraints.