[TRAIL Seminar] Explainability & Fairness in practice: the Scikit-Learn ecosystem

We are pleased to invite you to the 2nd seminar of the Trustworthy and Responsible AI Lab, an AXA-Sorbonne University joint research lab.

Wednesday, May 11th 2022, 17:00 – 19:00 CEST


On-site
Registration required, subject to seat availability

Sorbonne University – Faculté des Sciences – Pierre & Marie Curie Campus (Jussieu)

4 place Jussieu, 75005 Paris

LIP6 – Jacques Pitrat Room (Corridor 25-26 / Room 105)

Or Online
MS Teams link upon registration

> Registration <


Agenda:

  • Inspect and (try to) interpret your Scikit-Learn machine learning models
    Guillaume Lemaître is a Research Engineer at INRIA and a Scikit-Learn core developer. He holds a PhD in Computer Science.

    This presentation is divided into three parts and gives an overview of the different solutions offered by Scikit-Learn and its ecosystem for interpreting (Scikit-Learn) machine learning models. First, we will focus on the family of linear models and present the common pitfalls when interpreting the coefficients of such models. In addition to being used as predictive models, linear models are also used by interpretability methods such as LIME or KernelSHAP. Then, we will look at a broader set of models (e.g. gradient boosting) to discover and apply the inspection techniques implemented in Scikit-Learn. Finally, we will take a tour of other model interpretation tools that are not part of Scikit-Learn but are widely used in practice. (A small illustrative code sketch of these inspection tools follows the agenda below.)

  • Measurement and Fairness: Questions and Practices to Make Algorithmic Decision Making more Fair
    Adrin Jalali is an ML Engineer at Hugging Face and a Scikit-learn and Fairlearn core developer. He is an organizer of PyData Berlin and holds a PhD in Bioinformatics.

    Machine learning is almost always used in systems that automate or semi-automate decision-making processes: recommender systems, fraud detection, healthcare recommendation systems, and so on. Many of these systems, if not most, can induce harm by giving a less desirable outcome in cases where they should in fact give a more desirable one, e.g. flagging an insurance claim as fraudulent when it is not.

    In this talk, we first go through the different sources of harm that can creep into a machine-learning-based system (historical bias, representation bias, measurement bias, aggregation bias, learning bias, evaluation bias, and deployment bias) and the types of harm such a system can induce (allocation harm and quality-of-service harm).

    Taking lessons from the social sciences, one can see the input and output values of automated systems as measurements of constructs, or proxy measurements of those constructs. We go through a set of questions one should ask before and while working on such systems; some of these questions can be answered quantitatively, others qualitatively. Academics in the social sciences use different jargon than the data scientists and computer scientists implementing automated systems. To bridge the gap, we explore concepts such as measurement, construct, construct validity, and construct reliability, and then face validity, content validity, convergent validity, discriminant validity, predictive validity, hypothesis validity, and computational validity. By the end of this talk, you will be able to apply these lessons from the social sciences in your daily data science projects and decide whether you should intervene at any stage of your product's life cycle to make it more fair. (A second sketch below illustrates disaggregating metrics by group with Fairlearn.)
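
As a small, hypothetical illustration of the Scikit-Learn inspection tools mentioned in the first talk (this is not material from the talk itself), the sketch below reads the coefficients of a linear model fitted on standardized features, then applies permutation importance to a gradient-boosting model. The dataset and model choices are arbitrary and serve only the example.

# A hedged sketch, not the speaker's code: coefficients of a linear model
# fitted on scaled features, and permutation importance for a GBDT model.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pitfall: raw coefficients mix a feature's effect with its scale.
# Standardizing the features first makes the coefficients comparable.
linear = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X_train, y_train)
for name, coef in sorted(zip(X.columns, linear[-1].coef_), key=lambda t: -abs(t[1])):
    print(f"{name:>8}: {coef: .2f}")

# Model-agnostic inspection: permutation importance computed on held-out data,
# applied here to a gradient-boosting model.
gbdt = HistGradientBoostingRegressor(random_state=0).fit(X_train, y_train)
result = permutation_importance(gbdt, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>8}: {imp:.3f}")

Scaling the features before reading the coefficients is what makes them comparable with one another; interpreting the coefficients of a model fitted on features with very different scales is one of the classic pitfalls alluded to above.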
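
In the same spirit, here is a minimal, hypothetical sketch of one quantitative check related to the second talk: disaggregating a model's error rates by a sensitive attribute with Fairlearn's MetricFrame to look for quality-of-service harm. The data, the "group" column, and the random "model" predictions are invented purely for illustration.

# A hedged sketch, not the speaker's code: disaggregated metrics with Fairlearn.
import numpy as np
import pandas as pd
from fairlearn.metrics import MetricFrame, false_negative_rate, false_positive_rate
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),   # hypothetical sensitive attribute
    "y_true": rng.integers(0, 2, size=n),      # e.g. the claim really is fraudulent
    "y_pred": rng.integers(0, 2, size=n),      # the model's decision
})

mf = MetricFrame(
    metrics={
        "accuracy": accuracy_score,
        "false_positive_rate": false_positive_rate,  # honest claims flagged as fraud
        "false_negative_rate": false_negative_rate,
    },
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["group"],
)
print(mf.by_group)      # metric values per group
print(mf.difference())  # largest gap between groups, per metric

The per-group false positive rate corresponds to the fraud example in the abstract: honest claims wrongly flagged as fraudulent, and how unevenly that burden falls across groups.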