A Sorbonne Université & AXA joint-research laboratory
The Trustworthy and Responsible Artificial Intelligence Lab (TRAIL) is a joint research lab of Sorbonne Université and AXA. Its aim is to study and address the challenges raised by the generalized use of AI systems, with a particular focus on responsible and trustworthy AI. The lab is based in Paris, on the Pierre & Marie Curie campus of the Faculté des Sciences de Sorbonne Université, within LIP6, a computer science laboratory of Sorbonne Université and the CNRS (French National Center for Scientific Research). It brings together researchers from LIP6 and AI research scientists from AXA. The creation of the lab is the outcome of several years of research collaboration between Sorbonne Université and AXA on trustworthy and responsible AI.
The use of AI (Artificial Intelligence), and of ML (Machine Learning) in particular, has proven its value in many applications, from healthcare and transport to finance. The field is now moving towards a more mature practice, where organizations, companies, and society at large are becoming increasingly aware of the challenges and issues associated with a generalized use of AI systems.
Within the academic, legal, and industrial ecosystems, the TRAIL joint research lab aims to provide reliable analysis, solutions, and information that support a trustworthy and responsible deployment of AI in organizations while protecting individuals and society as a whole. Concretely, TRAIL fosters joint research projects, both theoretical and applied, with the objective of producing knowledge, methods, and tools that support the development of explainable, interpretable, fair, and trustworthy AI in companies, organizations, and society at large. TRAIL takes an open approach: publishing the knowledge it acquires, promoting it, and making the methods and tools it develops freely available. TRAIL is also committed to building relationships with the scientific and academic communities.
TRAIL's research topics cover the fairness assessment of AI systems and bias mitigation; the explainability and interpretability of these systems (XAI); their robustness and safety; and human-AI interaction (HCXAI). TRAIL also investigates AI governance in organizations.