AI Ethics & Safety

Our approach to safe and ethical AI

We believe responsible AI is a force for good. How we apply it will determine the world in which we live.

Introduction

AI can tackle real business challenges – but it can raise problems, too.

AI systems deployed without sufficient safety and ethical safeguards can cause real harm: bias and discrimination, invasions of privacy, and unexplainable outcomes. These risks jeopardise business performance and reputation.

We’ve been trusted to tackle some of society’s biggest challenges with AI – from helping police crack down on organised crime to supporting the NHS during the COVID-19 pandemic. Where the wrong decision can harm society, AI ethics and safety are essential.

Every AI system we deploy has been designed and built to our strict ethical principles and high standards of AI safety – reinforced by a dedicated R&D team and leading academic research.

AI Ethics

Given how impactful our work is, all our staff are responsible for reflecting on and verifying the ethical basis of a project before we decide to take it on.

We do this by being very selective about our customers in the first place, applying our ethical principles to each project and – if necessary – by referring a potential project to our Ethics Panel.

Four Pillars of AI Safety

AI can help organisations make fairer decisions.

We have checks in place to ensure our technology processes information fairly across sensitive groups, so discriminatory models never make it to production.
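One common way to implement such a check is a demographic parity gap, which compares positive-prediction rates across sensitive groups. The sketch below is illustrative only – the function names and the 0.05 tolerance are assumptions, not the actual production checks described above.

```python
# Hypothetical sketch of a fairness gate based on demographic parity.
# The tolerance value is an illustrative assumption.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any
    two sensitive groups (0.0 means perfectly equal rates)."""
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (1 if pred else 0))
    group_rates = [p / t for t, p in rates.values()]
    return max(group_rates) - min(group_rates)

def passes_fairness_gate(predictions, groups, tolerance=0.05):
    """Block deployment if the gap exceeds a chosen tolerance."""
    return demographic_parity_gap(predictions, groups) <= tolerance
```

A gate like this can run automatically in a deployment pipeline, failing the build when the gap is too wide.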

Our approach to building ethical, safe AI systems
We ensure AI Safety is embedded within every stage of the development lifecycle.
Development

Our data science teams build explainability and robustness directly into the model, helping users understand what drives its predictions.
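Permutation importance is one simple, model-agnostic way to surface what drives predictions: shuffle one feature at a time and measure how much the score drops. This is a hedged sketch with a toy model and scorer, not the specific explainability method used by the teams above.

```python
# Illustrative sketch: permutation importance for feature attribution.
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's output matches the label."""
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, score, n_repeats=5, seed=0):
    """Drop in score when each feature column is shuffled: a larger
    drop means the feature drives predictions more."""
    rng = random.Random(seed)
    baseline = score(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:]
                      for i, row in enumerate(X)]
            drops.append(baseline - score(model, X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances
```

A feature the model ignores scores an importance of zero, which is one way to sanity-check that a prediction is driven by the inputs you expect.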

Validation

Built-in explainability fulfils regulatory requirements and flags biased models. We also ensure the system is robust enough to produce high-quality outcomes.
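One simple robustness check at validation time is to perturb inputs with small noise and measure how often predictions flip. The model, noise scale, and trial count below are illustrative assumptions for a sketch, not a specific validation suite.

```python
# Hedged sketch: prediction stability under small input perturbations.
import random

def prediction_stability(model, X, noise=0.01, n_trials=20, seed=0):
    """Fraction of perturbed inputs whose prediction matches the
    original; 1.0 means fully stable at this noise level."""
    rng = random.Random(seed)
    stable, total = 0, 0
    for row in X:
        base = model(row)
        for _ in range(n_trials):
            perturbed = [x + rng.uniform(-noise, noise) for x in row]
            stable += (model(perturbed) == base)
            total += 1
    return stable / total
```

A validation gate might require stability above a threshold (say 0.99) before signing the model off.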

Predictions

Users receive an explanation and a credibility score with every prediction made, so teams can judge how much to trust each prediction and act on it with confidence.
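A prediction payload of this kind might pair each output with its credibility score and a per-feature explanation. The field names and the 0.8 actionability threshold below are assumptions for illustration, not an actual API.

```python
# Illustrative sketch: a prediction bundled with its explanation
# and credibility score. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Prediction:
    value: float                # the model's output
    credibility: float          # 0.0 (no confidence) to 1.0
    explanation: dict = field(default_factory=dict)  # feature -> contribution

    def is_actionable(self, threshold=0.8):
        """Teams might only act on predictions above a credibility bar."""
        return self.credibility >= threshold

pred = Prediction(value=0.92, credibility=0.85,
                  explanation={"tenure": 0.4, "usage": 0.3})
```

Carrying the explanation alongside the score means a reviewer can see both how confident the model is and why it predicted what it did.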

Monitoring

Our teams maintain the model by monitoring explainability, robustness and potential bias, and by pre-empting data drift before it impacts prediction quality.
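One widely used drift check compares the training-time distribution of a feature with its live distribution via the Population Stability Index (PSI). The bin count and the conventional 0.2 alert threshold below are illustrative choices for a sketch, not a specific monitoring setup.

```python
# Hedged sketch: Population Stability Index (PSI) drift check.
import math

def psi(expected, actual, bins=10):
    """PSI between two samples of one feature; 0.0 means no shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def dist(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]
    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def drift_alert(expected, actual, threshold=0.2):
    """Conventional rule of thumb: PSI above ~0.2 signals real drift."""
    return psi(expected, actual) > threshold
```

Running a check like this on a schedule lets teams retrain or investigate before degraded inputs reach end users.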