We believe responsible AI is a force for good.
How we apply it will determine the world in which we live.
At Faculty, we design, build and implement AI systems that help our customers make better decisions. We only deploy AI that is unbiased, robust, private, explainable and ethical.
AI can tackle real business challenges – but it can raise problems, too.
AI systems deployed without sufficient safety and ethical considerations cause problems: bias and discrimination, invasions of privacy, and unexplainable outcomes. These threats jeopardise business performance and reputation.
We’ve been trusted to tackle some of society’s biggest challenges with AI – from helping police crack down on organised crime to supporting the NHS during the COVID-19 pandemic. Where the wrong decision impacts society, AI ethics and safety are essential.
Every AI system we deploy has been designed and built to our strict ethical principles and high standards of AI safety – reinforced by a dedicated R&D team and leading academic research.
Given how impactful our work is, all our staff are responsible for reflecting on and verifying the ethical basis of a project before we decide to take it on. We do this by being very selective about our customers in the first place, applying our ethical principles to each project and – if necessary – by referring a potential project to our Ethics Panel.
We deploy AI systems supported by our four pillars of AI Safety:
We have checks in place to guarantee our technology processes information fairly across sensitive groups, so discriminatory models never make it to production; a minimal example of such a check follows these four pillars.
Our technology pre-emptively detects data drift and anomalies. Performance is continuously validated with in-built monitoring.
With intuitive dashboards, we provide transparency even in the most complex predictions – enabling more informed decisions.
We always uphold privacy and data rights, following rigorous practices of data security. We pseudonymise private and sensitive datasets and ensure data is handled securely.
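To make the fairness pillar concrete, the sketch below computes a demographic parity gap between groups defined by a sensitive attribute. It is a minimal, illustrative Python example – the predictions, group labels and any threshold are hypothetical – and is not a description of our production tooling.

```python
# Illustrative fairness check: demographic parity gap between groups
# defined by a sensitive attribute. Minimal sketch, not production tooling.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Largest difference in positive-prediction rates across groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == group].mean() for group in np.unique(sensitive)]
    return max(rates) - min(rates)

# Hypothetical binary predictions and group labels for a sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_gap(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")
# A deployment gate would block promotion if this gap exceeded an agreed threshold.
```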
We ensure AI Safety is embedded within every stage of the development lifecycle.
Our data science teams implement explainability and robustness directly into the model to help users understand what drives predictions.
Built-in explainability fulfils regulatory requirements and flags biased models.
We also ensure the system is robust enough to produce high-quality outcomes.
Users receive explanations and a credibility score with every prediction made, so teams can always trust predictions and confidently action them.
Our teams maintain the model by monitoring explainability, robustness and potential bias, as well as pre-empting data drift before it impacts prediction quality.
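As an illustration of the kind of drift monitoring described above, the sketch below compares a feature's recent production values against its training distribution with a two-sample Kolmogorov–Smirnov test. The data is synthetic and the alert threshold is an assumption tuned per feature in practice; this is not a description of our platform's internals.

```python
# Illustrative drift check: compare a live feature window against the
# training distribution with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # reference window
live_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)      # recent production window

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:  # alert threshold is an assumption, tuned per feature in practice
    print(f"Possible data drift detected (KS statistic={statistic:.3f}, p={p_value:.1e})")
else:
    print("No significant drift detected")
```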
A mathematically justified method for incorporating causality into Shapley values, overcoming a fundamental limitation of current-day explainability. Published in NeurIPS.
A demonstration of how to calculate Shapley values, the most commonly used method for explaining black-box models, without making the flawed assumption that features in the data are independent – the first general, scalable approach to calculating Shapley values correctly. Published in ICLR.
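For readers unfamiliar with Shapley values, the sketch below shows standard usage of the open-source `shap` package to attribute a model's predictions to its input features. It is purely illustrative: this off-the-shelf estimator still relies on the independence assumption discussed above and is not the corrected method from the research.

```python
# Illustrative use of the open-source `shap` package to explain a model.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # per-feature contribution to each prediction
shap.summary_plot(shap_values, X.iloc[:100])       # overall view of which features drive predictions
```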
A framework for explaining black-box models in terms of the high-level features (e.g. people’s characteristics, objects) that humans care about, rather than the low-level features (e.g. pixels) that the models use.
The first demonstration of directly explaining bias in AI models. Submitted.
A method to share high-dimensional data with the powerful guarantees of Local Differential Privacy. Submitted.
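As background on Local Differential Privacy, the sketch below implements the classic randomised-response mechanism for a single binary attribute and de-biases the aggregate estimate from the noisy reports. It is a textbook baseline shown for illustration only, not the high-dimensional method referenced above.

```python
# Randomised response: a textbook Local Differential Privacy baseline
# for a single binary attribute, with de-biased aggregate estimation.
import numpy as np

def randomised_response(true_bit: int, epsilon: float, rng) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return true_bit if rng.random() < p_truth else 1 - true_bit

rng = np.random.default_rng(0)
epsilon = 1.0
true_bits = rng.integers(0, 2, size=10_000)
noisy_bits = np.array([randomised_response(b, epsilon, rng) for b in true_bits])

# De-bias the estimate of the true proportion from the noisy reports.
p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
estimated = (noisy_bits.mean() - (1 - p_truth)) / (2 * p_truth - 1)
print(f"True proportion: {true_bits.mean():.3f}, private estimate: {estimated:.3f}")
```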