We believe responsible AI is a force for good.
How we apply it will determine the world in which we live.
At Faculty, we design, build and implement AI systems that
help our customers make better decisions. We only deploy AI
that is unbiased, robust, private, explainable and ethical.
AI can tackle real business challenges – but it can raise problems, too.
AI systems deployed without sufficient safety and ethical considerations cause problems: bias and discrimination, invasions of privacy, and unexplainable outcomes. These threats jeopardise business performance and reputation.
We’ve been trusted to tackle some of society’s biggest challenges with AI – from helping police crack down on organised crime to supporting the NHS during the COVID-19 pandemic. Where the wrong decision impacts society, AI ethics and safety are essential.
Every AI system we deploy has been designed and built to our strict ethical principles and high standards of AI safety – reinforced by a dedicated R&D team and leading academic research.
Given the impact of our work, all our staff are responsible for reflecting on and verifying the ethical basis of a project before we decide to take it on.
We do this by being very selective about our customers in the first place, applying our ethical principles to each project and – if necessary – by referring a potential project to our Ethics Panel.
We deploy AI systems supported by our four pillars of AI Safety.
We ensure AI Safety is embedded within every stage of the development lifecycle.