

We’ve seen first-hand the opportunity AI can provide to tackle real business challenges, but it can raise problems, too.

AI systems deployed without sufficient safety and ethical considerations can generate a range of problems: bias and discrimination, invasions of privacy, and unexplainable and unreliable outcomes. These threats can leave significant value on the table and jeopardise business performance and reputation.

We’ve been trusted to tackle some of the biggest challenges of this generation with AI – from helping police tackle organised crime to supporting the NHS during the COVID-19 pandemic. Where the wrong decision can have serious consequences for society, AI ethics and safety are even more essential.

Every AI system we deploy has been designed and built to our strict ethical principles and high standards of AI Safety – all reinforced by a dedicated R&D team and leading academic research.

Our ethical AI principles

  • We never work on projects that are exploitative or cause harm
    We never take on projects which have the aim or effect of exploiting vulnerability.
  • We respect the laws and regulations for building AI systems
    We respect every letter of the law and regulation, from privacy to data protection.
  • We don’t do political work
    We have learnt that political work can be divisive, so we don’t do it. 
  • We will never risk our scientific credibility
    We are a company of scientists. We are led by science, research and innovation and will not take on projects with scientifically dubious goals.

Our AI Safety standards cover four areas:

  • Fair
  • Robust
  • Explainable
  • Private

Our academic research into AI Safety

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

A mathematically justified method for incorporating causality into Shapley values, overcoming a fundamental limitation of current-day explainability. Published in NeurIPS.

Shapley explainability on the data manifold

A demonstration of how to calculate Shapley values, the most commonly used method for explaining black-box models, without the flawed assumption that features in the data are independent. This yields the first general, scalable approach to correctly calculating Shapley values. Published in ICLR.
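
For context, Shapley values attribute a model’s prediction to its input features by averaging each feature’s marginal contribution over all feature coalitions. The sketch below is a minimal, illustrative implementation of the standard formulation (not Faculty’s published methods): it enumerates coalitions exactly for a toy model and uses the usual baseline-substitution step that implicitly assumes feature independence, which is precisely the limitation the research above addresses. The function and model names are hypothetical.

```python
from itertools import combinations
from math import comb

import numpy as np


def shapley_values(model, x, baseline):
    """Exact Shapley values for a single input x (toy-scale only).

    Features outside a coalition are replaced by `baseline` values,
    i.e. the independence-style approximation that the research above refines.
    """
    n = len(x)
    phi = np.zeros(n)

    def value(coalition):
        # Input where only coalition features take x's values.
        z = baseline.copy()
        for i in coalition:
            z[i] = x[i]
        return model(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for s in combinations(others, size):
                # Shapley weight |S|!(n-|S|-1)!/n! = 1 / (n * C(n-1, |S|))
                weight = 1.0 / (n * comb(n - 1, size))
                phi[i] += weight * (value(set(s) | {i}) - value(set(s)))
    return phi


# Toy usage: a simple model with an interaction term.
model = lambda z: 2.0 * z[0] + z[1] * z[2]
x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
print(shapley_values(model, x, baseline))  # attributions sum to f(x) - f(baseline)
```

Exact enumeration scales exponentially in the number of features, which is why practical explainers approximate it; the papers above tackle the separate problem of what the coalition “value” should be when features are correlated or causally related.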

Human-interpretable model explainability on high-dimensional data

A framework for explaining black-box models in terms of the high-level features (e.g. people’s characteristics, objects) that humans care about, rather than the low-level features (e.g. pixels) that the models use.

Explainability for fair machine learning

The first demonstration of directly explaining bias in AI models. Submitted.

Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy

A method to share high-dimensional data with the powerful guarantees of Local Differential Privacy. Submitted.
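
As background, local differential privacy (LDP) adds calibrated noise on each user’s side before any data is shared, so the collector never sees a raw record. The sketch below is a minimal illustration using the classic Laplace mechanism for a single bounded numeric value; it is a standard textbook mechanism, not the application-agnostic method described above, and the function name and parameters are assumptions for illustration.

```python
import numpy as np


def laplace_ldp(value, lower, upper, epsilon, rng=None):
    """Release one bounded value under epsilon-local differential privacy.

    Noise scale is sensitivity / epsilon, where the sensitivity of a single
    value clipped to [lower, upper] is (upper - lower). Each user perturbs
    locally, so only the noisy value ever leaves their device.
    """
    rng = rng or np.random.default_rng()
    value = float(np.clip(value, lower, upper))
    scale = (upper - lower) / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)


# Usage: many users each report a noisy age; the aggregate remains estimable.
rng = np.random.default_rng(0)
true_ages = rng.integers(18, 90, size=10_000)
noisy = [laplace_ldp(a, lower=18, upper=90, epsilon=1.0, rng=rng) for a in true_ages]
print(np.mean(true_ages), np.mean(noisy))  # population mean survives the noise
```

Per-record noise at this scale is large, which is the usual LDP trade-off: individual values are strongly protected, while population-level statistics can still be recovered from many noisy reports.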

Our perspective on AI Safety & ethics


AI Safety in the press


To find out more about AI Safety and ethics at Faculty, get in touch.