AI can tackle real business challenges, but it can also raise problems of its own.

AI systems deployed without sufficient attention to safety and ethics cause real problems: bias and discrimination, invasions of privacy, and unexplainable outcomes. These threats jeopardise business performance and reputation.

We’ve been trusted to tackle some of society’s biggest challenges with AI – from helping police crack down on organised crime to supporting the NHS during the COVID-19 pandemic. Where the wrong decision can harm society, AI ethics and safety are essential.

Every AI system we deploy has been designed and built to our strict ethical principles and high standards of AI safety – reinforced by a dedicated R&D team and leading academic research.

Fair

Robust

Explainable

Private

Our academic research into AI Safety

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

A mathematically justified method for incorporating causality into Shapley values, overcoming a fundamental limitation of current explainability methods. Published in NeurIPS.
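For intuition, here is a minimal brute-force sketch of the idea (an illustration, not the paper's method): standard Shapley values average a feature's marginal contribution over all feature orderings, while asymmetric Shapley values average only over orderings consistent with known causal structure, so credit flows to causal ancestors. The coalition value function `value_fn` and the ancestor-pair set `precedes` are hypothetical inputs supplied by the caller.

```python
import itertools
import numpy as np

def asymmetric_shapley(value_fn, n_features, precedes):
    """Brute-force asymmetric Shapley values: average each feature's
    marginal contribution only over feature orderings consistent with
    a known causal partial order (ancestors before descendants).
    Exponential in n_features; for intuition only."""
    valid = [p for p in itertools.permutations(range(n_features))
             if all(p.index(i) < p.index(j) for i, j in precedes)]
    phi = np.zeros(n_features)
    for perm in valid:
        coalition = []
        for f in perm:
            before = value_fn(frozenset(coalition))
            coalition.append(f)
            phi[f] += value_fn(frozenset(coalition)) - before
    return phi / len(valid)

# Toy game: two perfectly correlated features, either alone explains
# the output. Symmetric Shapley splits credit 50/50; knowing that
# feature 0 causes feature 1 sends all credit to the ancestor.
v = lambda S: float(len(S) > 0)
print(asymmetric_shapley(v, 2, set()))      # [0.5, 0.5]
print(asymmetric_shapley(v, 2, {(0, 1)}))   # [1.0, 0.0]
```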

Shapley explainability on the data manifold

Shapley values are the most commonly used method for explaining black-box models, but standard approaches rest on the flawed assumption that features in the data are independent. This work develops the first general, scalable approach to calculating Shapley values correctly, without that assumption. Published in ICLR.
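The crux is the value function used to score a coalition of features. A rough sketch of the contrast, assuming `model` is a batch-prediction callable, `background` is a matrix of reference rows, and `coalition` is a list of feature indices (all hypothetical names); the nearest-neighbour conditional estimate here is a simple stand-in for the paper's learned approach:

```python
import numpy as np

def off_manifold_value(model, x, coalition, background):
    """Independence-assuming value: splice x's coalition features into
    background rows regardless of feature correlations, which can
    create impossible inputs far from the data manifold."""
    X = background.copy()
    X[:, coalition] = x[coalition]
    return model(X).mean()

def on_manifold_value(model, x, coalition, background, k=20):
    """Conditional value: average the model over the k background rows
    whose coalition features are closest to x's, a crude estimate of
    E[f(X) | X_S = x_S] that keeps evaluations on the data manifold."""
    d = np.linalg.norm(background[:, coalition] - x[coalition], axis=1)
    neighbours = background[np.argsort(d)[:k]]
    return model(neighbours).mean()
```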

Human-interpretable model explainability on high-dimensional data

A framework for explaining black-box models in terms of the high-level features (e.g. people’s characteristics, objects) that humans care about, rather than the low-level features (e.g. pixels) that the models use.
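As a loose illustration of the distinction (a simplified occlusion-style stand-in, not the paper's framework): attribute a prediction to labelled regions rather than raw pixels, assuming a hypothetical `segments` mapping from human-readable labels to pixel masks:

```python
import numpy as np

def concept_attributions(model, x, segments, baseline):
    """Occlusion-style attribution over human-readable segments
    (e.g. 'face', 'background') instead of individual pixels: score
    each concept by how much the prediction drops when its region is
    replaced with baseline content."""
    base = model(x[None])[0]
    return {label: base - model(np.where(mask, baseline, x)[None])[0]
            for label, mask in segments.items()}
```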

Explainability for fair machine learning

The first demonstration of directly explaining bias in AI models. Submitted.
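Roughly, the idea is to treat a fairness metric itself as the quantity being attributed, so that Shapley machinery like the above can apportion a model's bias across its input features. A minimal sketch with demographic parity as the metric; all names here are hypothetical:

```python
import numpy as np

def parity_gap(preds, group):
    """Demographic parity gap: difference in mean positive-prediction
    rate between the protected group and the rest."""
    return preds[group == 1].mean() - preds[group == 0].mean()

def gap_using_features(model, X, group, coalition, background_row):
    """Parity gap when only the `coalition` features carry real values
    and all others are fixed to a neutral background row. Feeding this
    coalition game to Shapley machinery apportions the model's total
    bias across its input features."""
    X_masked = np.tile(background_row, (len(X), 1))
    X_masked[:, coalition] = X[:, coalition]
    return parity_gap(model(X_masked), group)
```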

Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy

A method to share high-dimensional data with the powerful guarantees of Local Differential Privacy. Submitted.
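For context, under Local Differential Privacy each data owner noises their own record before sharing it, so no one ever holds the raw data. The sketch below shows the classical Laplace mechanism for a single bounded value; it is a baseline illustration, not the learned, high-dimensional mechanism the paper proposes:

```python
import numpy as np

def ldp_release(value, lower, upper, epsilon, rng=None):
    """Release one bounded scalar under epsilon-local differential
    privacy using the Laplace mechanism: the noise is added on the
    data owner's side, before anything is shared."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.clip(value, lower, upper)
    scale = (upper - lower) / epsilon  # sensitivity / epsilon
    return clipped + rng.laplace(0.0, scale)

# e.g. each user shares a privatised age rather than the true one:
# ldp_release(37, lower=0, upper=120, epsilon=1.0)
```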

Our perspective on AI Safety & ethics

AI Safety in the press

To find out more about AI Safety and ethics at Faculty, get in touch.