
Our approach to safe and ethical AI

We believe responsible AI is a force for good.
How we apply it will determine the world in which we live.

At Faculty, we design, build and implement AI systems that
help our customers make better decisions. We only deploy AI
that is unbiased, robust, private, explainable and ethical.


AI can tackle real business challenges – but it can raise problems, too.

AI systems deployed without sufficient safety and ethical considerations cause problems: bias and discrimination, invasions of privacy, and unexplainable outcomes. These threats jeopardise business performance and reputation.

We’ve been trusted to tackle some of society’s biggest challenges with AI – from helping police crack down on organised crime to supporting the NHS during the COVID-19 pandemic. Where the wrong decision impacts society, AI ethics and safety are essential.

Every AI system we deploy has been designed and built to our strict ethical principles and high standards of AI safety – reinforced by a dedicated R&D team and leading academic research.

AI Ethics

Given how impactful our work is, all our staff are
responsible for reflecting on and verifying the ethical
basis of a project before we decide to take it on.

We do this by being very selective about our
customers in the first place, applying our ethical
principles to each project and – if necessary –
by referring a potential project to our Ethics Panel.

Read about our ethical principles & panel


AI Safety

We deploy AI systems supported by our four pillars of AI Safety:

Fair

AI can help organisations make fairer decisions.

We have checks in place to guarantee our technology processes information fairly across sensitive groups, so discriminatory models never make it to production.
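As a minimal sketch of the general idea (not our production tooling), a fairness gate of this kind can compare positive-prediction rates across a sensitive attribute before release; the column names, data and threshold below are hypothetical.

    # Illustrative pre-deployment fairness gate (a sketch, not Faculty's tooling).
    # Assumes a pandas DataFrame with hypothetical columns: "prediction" (binary
    # model output) and "group" (a sensitive attribute).
    import pandas as pd

    def demographic_parity_gap(df, prediction_col="prediction", group_col="group"):
        """Largest difference in positive-prediction rates between sensitive groups."""
        rates = df.groupby(group_col)[prediction_col].mean()
        return float(rates.max() - rates.min())

    scored = pd.DataFrame({
        "prediction": [1, 0, 1, 0, 0, 1, 1, 0],
        "group":      ["a", "a", "a", "a", "b", "b", "b", "b"],
    })

    gap = demographic_parity_gap(scored)
    if gap > 0.1:  # release threshold chosen purely for illustration
        raise ValueError(f"Fairness gate failed: demographic parity gap = {gap:.2f}")
    print(f"Demographic parity gap: {gap:.2f}")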

Robust

AI that helps make critical decisions must be secure and trustworthy.

Our technology pre-emptively detects data drift and anomalies. Performance is continuously validated with in-built monitoring.
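For illustration only (not the in-built monitoring described above), a simple drift check can compare a live feature sample against a training-time reference with a two-sample Kolmogorov-Smirnov test; the data and 0.05 threshold below are synthetic assumptions.

    # Illustrative drift check with a two-sample Kolmogorov-Smirnov test.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(seed=0)
    reference = rng.normal(loc=0.0, scale=1.0, size=1_000)  # training-time feature sample
    live = rng.normal(loc=0.4, scale=1.0, size=1_000)       # shifted production sample

    statistic, p_value = ks_2samp(reference, live)
    if p_value < 0.05:
        print(f"Possible drift: KS statistic={statistic:.3f}, p-value={p_value:.2e}")
    else:
        print("No significant drift detected")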

Explainable

Explainable AI means you will always understand why the system produced a given prediction.

With intuitive dashboards, we provide transparency into even the most complex predictions – enabling more informed decisions.
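To give a flavour of per-prediction explanations (the dashboards above are ours; the open-source shap library, toy dataset and model below are stand-ins chosen for illustration), Shapley values attribute a single prediction to its input features:

    # Illustrative per-prediction explanation with the open-source `shap` library.
    # The toy dataset and model stand in for a real production system.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)
    contributions = explainer.shap_values(X.iloc[:1])[0]  # one value per feature

    # The contributions (plus the base value) sum to the model's prediction,
    # so each feature's influence on this single prediction is explicit.
    for feature, value in zip(X.columns, contributions):
        print(f"{feature:>6}: {value:+.2f}")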

Private

We always meet the standards of industry data ethics frameworks.

We always uphold privacy and data rights, following rigorous data security practices. We pseudonymise private and sensitive datasets and ensure all data is handled securely.
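As a minimal sketch of pseudonymisation (the column name, key handling and library choice are assumptions, not our data pipeline), direct identifiers can be replaced with keyed-hash tokens before a dataset is used for modelling:

    # Illustrative pseudonymisation of a direct identifier with a keyed hash
    # (HMAC). The "patient_id" column and in-code key are placeholders; a real
    # key would live in a secrets manager, separate from the data.
    import hashlib
    import hmac
    import pandas as pd

    SECRET_KEY = b"replace-with-a-managed-secret"

    def pseudonymise(value: str) -> str:
        """Deterministically map an identifier to an opaque token."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    records = pd.DataFrame({"patient_id": ["A123", "B456"], "age": [54, 61]})
    records["patient_id"] = records["patient_id"].map(pseudonymise)
    print(records)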


Our approach to building ethical, safe AI systems

We ensure AI Safety is embedded within every stage of the development lifecycle.

Development

Our data science teams build explainability and robustness directly into the model, helping users understand what drives predictions.

Validation

Built-in explainability fulfils regulatory requirements and flags biased models. We also ensure the system is robust enough to produce high-quality outcomes.

Predictions

Users receive an explanation and a credibility score with every prediction made, so teams can always trust predictions and act on them with confidence.
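As a rough sketch of serving a confidence measure alongside each prediction (the toy model and the use of the top class probability as a stand-in for a credibility score are assumptions, not our scoring method):

    # Illustrative: return a confidence measure alongside each prediction.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

    prediction = model.predict(X_test[:1])[0]
    confidence = model.predict_proba(X_test[:1])[0].max()  # stand-in "credibility score"
    print(f"Prediction: {prediction}, confidence: {confidence:.2f}")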

Monitoring

Our teams maintain the model by monitoring explainability, robustness and potential bias, as well as pre-empting data drift before it impacts prediction quality.

Our academic research into AI safety

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

A mathematically justified method for incorporating causality into Shapley values, overcoming a fundamental limitation of current explainability methods. Published in NeurIPS.

Shapley explainability on the data manifold

A demonstration of how to calculate Shapley values, the most commonly used method for explaining black-box models, without the flawed assumption that features in the data are independent. This is the first general, scalable approach to calculating Shapley values correctly. Published in ICLR.

Human-interpretable model explainability on high-dimensional data

A framework for explaining black-box models in terms of the high-level features (e.g. people’s characteristics, objects) that humans care about, rather than the low-level features (e.g. pixels) that the models use.

Explainability for fair machine learning

The first demonstration of directly explaining bias in AI models. Submitted.

Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy

A method to share high-dimensional data with the powerful guarantees of Local Differential Privacy. Submitted.


Our perspective on AI Safety & ethics

Safety from the start: Our journey to building some of the world’s best tooling to make AI safe
Parenting autonomous agents for AI Safety
What is AI Safety?
AI Safety: correcting 200,000 years of human error
AI Safety: The Business Case For Robustness
The business case for AI safety: explainability
Explainable AI: Data science needs a common language for explainability
NeurIPS: New techniques for building more explainable AI
New principles for AI: Generalising AI models

AI Safety in the press