
AI Interest Group Talks Series – Explainability for Fair Machine Learning

Tom Begley, Faculty’s R&D Lead, will be speaking on fairness in machine learning at the National Physical Laboratory’s new event series for Artificial Intelligence.

Ensuring fairness in machine learning models is hard, in no small part because even simply determining what “unfairness” should mean in a given context is non-trivial: there are many competing definitions, and choosing between them often requires a deep understanding of the underlying task. It would be nice to use model explainability to better understand the reasons a model is making unfair predictions, but existing explainability tools do not reliably indicate whether a model is indeed fair.

Tom will recap some of the standard approaches to quantifying fairness, as well as the Shapley value paradigm for model explainability. He will then delve into how Shapley values can be used to attribute unfairness in a model to the individual features the model uses, and show how this setup motivates a new meta-algorithm for imposing fairness constraints on an existing model.
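To give a flavour of the idea, here is a minimal sketch of feature-level attribution of a fairness gap. It uses a fixed linear model on synthetic data, where Shapley values have a simple closed form (assuming independent features): the value of feature i at input x is w_i(x_i − E[x_i]). Averaging these per group and differencing decomposes the demographic parity gap across features. All names, weights, and data here are illustrative assumptions, not the method from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 is shifted by group membership, feature 1 is not.
n = 10_000
group = rng.integers(0, 2, size=n)                    # sensitive attribute A
x0 = rng.normal(loc=group.astype(float), scale=1.0)   # group-correlated
x1 = rng.normal(size=n)                               # group-independent
X = np.column_stack([x0, x1])

# A fixed linear model f(x) = w @ x + b (weights chosen for illustration).
w = np.array([0.8, 0.5])
b = 0.1
scores = X @ w + b

# Demographic parity gap: difference in mean model score between groups.
gap = scores[group == 1].mean() - scores[group == 0].mean()

# Closed-form Shapley values for a linear model with independent features:
# phi_i(x) = w_i * (x_i - E[x_i]). Differencing the group means of phi
# attributes the gap to each feature, and the attributions sum to the gap.
phi = w * (X - X.mean(axis=0))
attribution = phi[group == 1].mean(axis=0) - phi[group == 0].mean(axis=0)

print("gap:", gap)
print("per-feature attribution:", attribution)
```

In this toy setup, almost all of the gap is attributed to the group-correlated feature, which is the kind of diagnosis that motivates targeting specific features when constraining a model.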

This event has finished

date & time

Wed, 20 April 2022
13:00 – 14:00 BST



To find out more about what Faculty can do
for you and your organisation, get in touch.