
Tech talk: Explainable Anomaly Detection

Spotting irregularities in data plays a crucial role in processes that protect organisations from harm, from identifying financial crime to ensuring AI safety. But anomaly detection – the unsupervised search for events that break trends in an organisation’s data – can be very challenging, often producing an abundance of false positives. Explaining why a model has flagged an event as anomalous is therefore critical, both for triaging false positives and for confirming truly anomalous events.

In this webinar, Faculty’s Research Scientist Christopher Frye will discuss the techniques that we believe are most effective for detecting and explaining anomalies in data, including:

  • How modern probabilistic modelling, in particular variational autoencoders, can be used to detect anomalies with state-of-the-art precision.
  • Why most AI explainability techniques fail to correctly explain anomalous data.
  • Why Faculty’s approach to explainability behaves correctly on irregular data.
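The probabilistic idea behind the first point can be illustrated without a full variational autoencoder: fit a density model to "normal" data, then flag points that the model assigns low likelihood. The minimal sketch below uses a diagonal Gaussian in place of a VAE's learned latent-variable density; all data, names, and the 1% threshold are illustrative assumptions, not Faculty's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data: 2-D points from a standard Gaussian (illustrative).
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))

# Fit a simple diagonal-Gaussian density model; a VAE would instead learn
# a flexible density via an encoder/decoder pair.
mu = train.mean(axis=0)
sigma = train.std(axis=0)

def log_likelihood(x):
    """Log density of x under the fitted diagonal Gaussian."""
    z = (x - mu) / sigma
    return -0.5 * np.sum(z**2 + np.log(2 * np.pi * sigma**2), axis=-1)

# Score new points: low log-likelihood => anomalous.
# Threshold at the bottom 1% of training-data likelihoods (assumed cutoff).
threshold = np.percentile(log_likelihood(train), 1)

normal_point = np.array([[0.1, -0.2]])
odd_point = np.array([[6.0, 6.0]])

print(log_likelihood(normal_point) > threshold)  # typical point passes
print(log_likelihood(odd_point) > threshold)     # outlier is flagged
```

A VAE replaces the fixed Gaussian with a learned model, so the same low-likelihood criterion can detect anomalies in much more complex data.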

No prior knowledge of anomaly detection or explainability will be assumed.

This event has finished

date & time

Thursday 05 Nov 2020
11.00 - 12.00 GMT

location

Online

To find out more about what Faculty can do
for you and your organisation, get in touch.