Our approach to safe and ethical AI
We believe responsible AI is a force for good in the world, and how we apply it in the next few years will determine
the world in which we live. At Faculty, we design, build and implement AI systems that help our customers make systematically
better decisions. We only deploy AI that is unbiased, robust, private, explainable and ethical.

We’ve seen first-hand the opportunities AI creates for tackling real business challenges, but it can raise problems too.
AI systems deployed without sufficient attention to safety and ethics can produce bias and discrimination, invade privacy, and deliver unexplainable or unreliable outcomes. These risks leave significant value on the table and jeopardise business performance and reputation.
We’ve been trusted to tackle some of the biggest challenges of this generation with AI – from helping police tackle organised crime to supporting the NHS during the COVID-19 pandemic. Where the wrong decision can have serious consequences for society, AI ethics and safety are even more essential.
Every AI system we deploy has been designed and built to our strict ethical principles and high standards of AI Safety – all reinforced by a dedicated R&D team and leading academic research.
AI ethics
We only take on projects that we can be sure will be used for ethical purposes. Our in-house ethics panel ensures
all of our work adheres to our principles by raising and exploring the ethical questions each project poses.
The panel makes sure the algorithms we produce are deployed ethically and fairly.
We realise that the world is dynamic and ever-changing. We have learnt a lot already and will continue to learn,
and will adapt our principles and our approach with humility and a commitment to delivering ethical AI for good, for everyone.

Our ethical AI principles
- We never work on projects that are exploitative or cause harm
We never take on projects which have the aim or effect of exploiting vulnerability.
- We respect the laws and regulations for building AI systems
We respect every letter of the law and regulation, from privacy to data protection.
- We don’t do political work
We have learnt that political work can be divisive, so we don’t do it.
- We will never risk our scientific credibility
We are a company of scientists. We are led by science, research and innovation, and will not take on projects with scientifically dubious goals.
AI Safety
We deploy AI systems supported by our four pillars of AI Safety.
- Fair
AI can help organisations make fairer decisions. We have checks in place to guarantee our technology processes information fairly across sensitive groups, so discriminatory models never make it to production (a minimal sketch of this kind of check follows this list).
- Robust
AI that helps make critical decisions must be secure and trustworthy. Our technology pre-emptively detects data drift and anomalies, and performance is continuously validated with in-built monitoring.
- Explainable
Explainable AI means you will always understand why the system produced a given prediction. With intuitive dashboards, we provide transparency even for the most complex predictions, enabling more informed decisions.
- Private
We always meet the standards of industry data ethics frameworks and uphold privacy and data rights, following rigorous data security practices: we pseudonymise private and sensitive datasets and ensure data is handled securely.
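To make the Fair pillar concrete, the sketch below shows one simple pre-deployment check of the kind referred to above: comparing a model's positive-prediction rate across groups defined by a sensitive attribute (demographic parity). The data, names and threshold are illustrative assumptions, not a description of Faculty's production tooling.

```python
# Minimal sketch of a group-fairness check before deployment: compare
# positive-prediction rates across a sensitive attribute (demographic parity).
# All data, names and the tolerance threshold here are illustrative.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between sensitive groups."""
    rates = [y_pred[sensitive == group].mean() for group in np.unique(sensitive)]
    return float(max(rates) - min(rates))

# Toy model decisions and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_gap(y_pred, groups)
if gap > 0.2:  # the acceptable gap is a policy decision, not a fixed standard
    print(f"Fairness check failed: demographic parity gap = {gap:.2f}")
```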


Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability
A mathematically justified method for incorporating causality into Shapley values, overcoming a fundamental limitation of current-day explainability. Published in NeurIPS.

Shapley explainability on the data manifold
A demonstration of how to calculate Shapley values, the most commonly used method for explaining black-box models, without the flawed assumption that features in the data are independent; this is the first general, scalable approach to calculating Shapley values correctly. Published in ICLR.
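For readers unfamiliar with Shapley values, the sketch below estimates them for a single prediction by Monte Carlo sampling over feature orderings, replacing "absent" features with values drawn from a background sample. It is a generic textbook approximation for illustration only; it implements neither the asymmetric nor the on-manifold methods described in the papers above.

```python
# Monte Carlo estimate of Shapley values for a single prediction.
# Illustrative only: it does not handle feature dependence the way the
# on-manifold or asymmetric methods above do.
import numpy as np

def shapley_values(predict, x, background, n_permutations=200, seed=0):
    """Estimate per-feature Shapley values for one instance `x`.

    predict    : callable mapping an (n, d) array to (n,) predictions
    x          : (d,) instance to explain
    background : (m, d) reference sample used for "absent" features
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_permutations):
        order = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()  # all features "absent"
        prev = predict(z[None, :])[0]
        for j in order:
            z[j] = x[j]                   # switch feature j "on"
            curr = predict(z[None, :])[0]
            phi[j] += curr - prev         # marginal contribution of feature j
            prev = curr
    return phi / n_permutations

# Usage with a toy linear model: contributions recover the weighted offsets.
weights = np.array([2.0, -1.0, 0.5])
predict = lambda X: X @ weights
background = np.zeros((10, 3))
print(shapley_values(predict, np.array([1.0, 2.0, 3.0]), background))
```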

Human-interpretable model explainability on high-dimensional data
A framework for explaining black-box models in terms of the high-level features (e.g. people’s characteristics, objects) that humans care about, rather than the low-level features (e.g. pixels) that the models use.

Explainability for fair machine learning
The first demonstration of directly explaining bias in AI models. Submitted.

Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy
A method to share high-dimensional data with the powerful guarantees of Local Differential Privacy. Submitted.
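As background to the local differential privacy guarantee mentioned above, the sketch below shows the classical Laplace mechanism: each user adds noise calibrated to the query's sensitivity and the privacy budget epsilon before their data is shared. It is a textbook illustration with made-up values, not the application-agnostic method from the paper.

```python
# Textbook Laplace mechanism for local differential privacy (illustrative only;
# not the application-agnostic data-sharing method described above).
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with epsilon-LDP by adding Laplace(sensitivity / epsilon) noise."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Each user perturbs their own record locally before sharing it.
rng = np.random.default_rng(0)
true_ages = np.array([23, 35, 41, 29, 52], dtype=float)   # assume ages bounded in [0, 100]
noisy_ages = [laplace_mechanism(a, sensitivity=100.0, epsilon=1.0, rng=rng) for a in true_ages]
print(noisy_ages)
```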
Our approach to building ethical, safe AI systems
We ensure AI Safety is embedded within every stage of the development lifecycle.
Our data science teams build explainability and robustness directly into the model, helping users understand what drives predictions.
Built-in explainability fulfils regulatory requirements and flags biased models. We also ensure the system is robust, so it continues to produce high-quality outcomes.
Users receive an explanation and a credibility score with every prediction, so teams can trust predictions and action them with confidence.
Our teams maintain the model by monitoring explainability, robustness and potential bias, and by pre-empting data drift before it impacts prediction quality.
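As a simplified illustration of the drift monitoring described above, the sketch below compares each feature's live distribution against its training distribution with a two-sample Kolmogorov-Smirnov test and flags features that have shifted. The data, threshold and function names are assumptions for illustration; a production monitoring system would combine this with many other checks.

```python
# Simplified data-drift check: compare each feature's live distribution against
# the training distribution with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> list:
    """Return the indices of features whose live distribution has drifted."""
    drifted = []
    for j in range(train.shape[1]):
        _, p_value = ks_2samp(train[:, j], live[:, j])
        if p_value < alpha:
            drifted.append(j)
    return drifted

# Toy example: feature 1 has shifted upwards in the live data.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 3))
live = rng.normal(size=(500, 3))
live[:, 1] += 1.5
print(detect_drift(train, live))   # expected output: [1]
```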
To find out more about AI Safety and ethics at Faculty, get in touch.