Spotting irregularities in data plays a crucial role in processes that protect organisations from harm, from identifying financial crime to ensuring AI safety. But anomaly detection – the unsupervised search for events that break trends in an organisation’s data – can be very challenging and often produces an abundance of false positives. Explaining why a model has flagged an event as anomalous is thus critical for triaging false positives and confirming truly anomalous events.
In this webinar, Faculty’s Research Scientist Christopher Frye will discuss the techniques that we believe are most effective for detecting and explaining anomalies in data, including:
- How modern probabilistic modelling, in particular variational autoencoders, can be used to detect anomalies with state-of-the-art precision.
- Why most AI explainability techniques fail to correctly explain anomalous data.
- Why Faculty’s approach to explainability behaves correctly on irregular data.
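To give a flavour of the first point: probabilistic anomaly detection scores each event by how unlikely it is under a model fitted to normal data. The minimal sketch below illustrates this scoring idea only – it uses a diagonal Gaussian as a stand-in for the trained model (a real system would use a variational autoencoder, which is omitted here for brevity), and all data and names are illustrative assumptions, not Faculty's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inliers: a 2-D Gaussian blob standing in for "normal" business data.
normal_data = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Fit a simple density model to the normal data. In the webinar's setting
# this would be a variational autoencoder; a diagonal Gaussian is used
# here purely as a toy stand-in for the probabilistic model.
mu = normal_data.mean(axis=0)
var = normal_data.var(axis=0)

def anomaly_score(x):
    """Negative log-density under the fitted model: higher = more anomalous."""
    x = np.atleast_2d(x)
    log_density = -0.5 * np.sum(
        np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1
    )
    return -log_density

scores_normal = anomaly_score(normal_data)
score_outlier = anomaly_score(np.array([6.0, -6.0]))[0]

# The outlier scores far above the 99th percentile of the normal data,
# so a simple threshold on the score flags it as anomalous.
threshold = np.percentile(scores_normal, 99)
print(score_outlier > threshold)
```

Deep models such as variational autoencoders follow the same recipe but can capture far more complex notions of "normal" than a single Gaussian, which is what makes the precision gains discussed in the webinar possible.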
No prior knowledge of anomaly detection or explainability will be assumed.