Explainability plays an important role across the full lifecycle of an AI system, from inception, development, and validation to production and monitoring. It helps developers diagnose and correct failure modes and optimise performance. It is often a regulatory requirement, and it can be crucial to the operational use of AI systems when there is a human in the loop.

The Shapley framework for explainability is gaining widespread adoption among practitioners thanks to its general applicability and principled foundations. The method explains the output of an AI model in terms of its input features, placing all features on an equal footing. While this symmetry is sensible when one has only a rough understanding of the data, the basic framework cannot produce causal explanations when one has more precise knowledge of the data-generating process.
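As a concrete illustration, the sketch below computes standard (symmetric) Shapley-value explanations with the open-source `shap` package; the dataset and model are illustrative stand-ins chosen for the sketch, not examples from the webinar.

```python
# Minimal sketch: symmetric Shapley-value explanations via the `shap` package.
# The model and dataset are placeholders for illustration only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# The explainer treats every input feature symmetrically: each feature's
# attribution is its marginal contribution, averaged uniformly over feature
# orderings, with no knowledge of the data-generating process.
explainer = shap.Explainer(model, X.iloc[:100])  # small background sample
explanation = explainer(X.iloc[:5])
print(explanation.values.shape)  # attributions: samples x features (x outputs)
```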

Faculty’s R&D team has introduced a novel explainability technique, Asymmetric Shapley values (ASVs), to address this problem. The team presented this new approach to causal explainability at NeurIPS in December.
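In outline, the idea can be stated as a reweighting of the usual Shapley sum over feature orderings. The notation below is ours and is meant only as a sketch of the construction, not a substitute for the paper's definitions.

```latex
% Symmetric Shapley value of feature i: its marginal contribution averaged
% uniformly over all orderings \pi of the features, where pre^\pi(i) is the
% set of features preceding i in \pi and v is the value function that
% evaluates the model on a feature subset.
\phi_v(i) = \frac{1}{|\Pi|} \sum_{\pi \in \Pi}
  \Bigl[ v\bigl(\mathrm{pre}^\pi(i) \cup \{i\}\bigr)
       - v\bigl(\mathrm{pre}^\pi(i)\bigr) \Bigr]

% Asymmetric Shapley value: the same sum taken under a non-uniform
% distribution w over orderings, e.g. one supported only on orderings that
% place known causal ancestors before their descendants.
\phi_v^{w}(i) = \sum_{\pi \in \Pi} w(\pi)
  \Bigl[ v\bigl(\mathrm{pre}^\pi(i) \cup \{i\}\bigr)
       - v\bigl(\mathrm{pre}^\pi(i)\bigr) \Bigr]
```

Choosing the uniform distribution for w recovers the symmetric values; concentrating w on causally consistent orderings yields the asymmetric values discussed in the webinar.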

In this technical webinar, Faculty research scientist Christopher Frye will discuss the theoretical foundations of Asymmetric Shapley values and their practical applications, including:

  • How to incorporate the causal structure underlying a dataset into explanations of models trained on it.
  • How the method can be applied to fairness, time-series modelling, and feature selection.
  • How the technique is being used in real-world deployments.

No prior knowledge of explainability will be assumed.

Date & time

Thursday 11 February 2021
11.00-12.00 GMT

Location

Online
