Developed from cutting-edge research, Faculty Safety is an industry-leading technology suite for building trustworthy AI.
Understand and trust your AI models
Organisations need to understand and trust their AI. Not knowing how to act on a prediction, accidentally making a biased decision, leaking sensitive data, or simply not understanding how to optimise your model’s performance can leave significant value on the table, or even lead to serious business harm.
Faculty Safety allows you to solve your most important business challenges with the assurance that your AI models are always performing safely and to the best of their ability.
What makes a safe, high-performing AI model?
- Explainable: Can I understand my model?
- Fair: Will my model do the right thing?
- Robust: Will my model perform well in the real world?
- Private: Does my model keep data private?
Software and custom solutions for each stage of the machine learning lifecycle
License our AI Safety engines directly or work with us to build your own safe, high-performing and reliable custom systems.
A Python package providing an implementation of Faculty’s proprietary AI Safety technology.
Out of the box
Gain access to ready-built machine learning libraries and train your team to use our industry-leading AI Safety technology.
Dashboards, reports and custom-developed products integrated in the customer’s organisation and workflow.
Build a solution around your people, processes and unique business challenges. Create outputs in a language that fits your organisation, so that your teams are able to understand and act on the insights.
Faculty Safety helps you to
- Understand the weaknesses in your models and how to improve them
- Avoid reputational damage by making models compliant
- Gain confidence that your deployed models are always working as they should be
- Empower end users to make better and more impactful decisions by helping them understand and trust model outputs
Adapted to run in any modelling framework
Faculty Safety treats your machine learning model as a black box, meaning that it’s able to work with a wide variety of model types across many modelling frameworks.
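To illustrate the black-box principle (this is an illustrative sketch, not Faculty's implementation): a model-agnostic technique only ever needs a prediction function, so it works regardless of the framework behind it. Permutation importance is a simple example of this pattern:

```python
import random

def permutation_importance(predict, X, y, n_features, seed=0):
    """Estimate feature importance for any black-box prediction function.

    predict: callable mapping a list of rows to a list of predictions.
    X, y: evaluation rows and true labels.
    Works with any model type, because only predict() is ever called.
    """
    rng = random.Random(seed)

    def accuracy(rows):
        preds = predict(rows)
        return sum(p == t for p, t in zip(preds, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(n_features):
        # Scramble feature j while leaving every other feature intact.
        shuffled_col = [row[j] for row in X]
        rng.shuffle(shuffled_col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, shuffled_col)]
        # Importance = drop in accuracy when feature j is scrambled.
        importances.append(baseline - accuracy(X_perm))
    return importances

# Toy black-box model: predicts 1 whenever the first feature exceeds 0.5.
model = lambda rows: [int(r[0] > 0.5) for r in rows]
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, n_features=2))
```

Because the toy model ignores the second feature, its importance comes out as exactly zero; the point is that nothing in the procedure inspects the model's internals.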
Beyond academic or open-source tools and libraries
Our original research has shown that widely used open-source methods such as LIME and SHAP (and software solutions built on these techniques) often deliver inaccurate results, generating fictitious data that can lead to incorrect or unethical conclusions.
Our tooling goes beyond the open-source standard on the market, in both mathematical rigour and capability. Faculty Safety generates accurate explanations quickly, incorporates causality, and applies to complex model types (e.g. tabular data of any type, computer vision).
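One well-known source of such inaccuracy is off-manifold sampling: perturbation-based explainers that vary features independently can create input combinations that never occur in real data, and then base explanations on the model's behaviour at those impossible points. A hand-rolled sketch of the problem (illustrative only, with made-up features; not any particular library's internals):

```python
import random

rng = random.Random(42)

# Two correlated features: total rooms and bedrooms in a property,
# where bedrooms is always strictly less than total rooms in real data.
data = [(rooms, rooms - rng.randint(1, 2))
        for rooms in [rng.randint(3, 10) for _ in range(1000)]]

# The constraint holds on every real row.
assert all(bed < rooms for rooms, bed in data)

# Independent perturbation, as naive explainers do: shuffle each feature
# column separately, which destroys the correlation between them.
rooms_col = [r for r, _ in data]
beds_col = [b for _, b in data]
rng.shuffle(rooms_col)
rng.shuffle(beds_col)
perturbed = list(zip(rooms_col, beds_col))

# Many perturbed rows are now physically impossible (more bedrooms than rooms).
impossible = sum(bed >= rooms for rooms, bed in perturbed)
print(f"{impossible} of {len(perturbed)} perturbed rows violate bedrooms < rooms")
```

An explainer that queries a model on these impossible rows is summarising behaviour the model was never trained to exhibit, which is one way an explanation method can mislead.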
An incorrect explanation is worse than no explanation at all
Built for you by our world-class machine learning scientists
Our AI Safety technology is built by our world-class team of scientists, working at the frontier of machine learning research. We combine the latest research with our experience completing AI projects across a breadth of industries.
- PhD scientists and engineers have contributed to Faculty Safety
- Top PhD awards from Harvard and Cambridge
- Scientific papers presented, many of which are core developments in the field of AI Safety
- Applied AI projects delivered by an AI team of more than 50 PhDs