We were delighted to see the publication of the Centre for Data Ethics and Innovation’s ‘Review into bias in algorithmic decision-making’ last month.  

We hope this will be the beginning of a new era of AI safety governance, one that backs up its ethical recommendations with clear, technical guidelines for making safe AI a reality.

Faculty was appointed through a competitive tender process to support the Review with an underpinning technical report, which you can read in full here. This work included in-depth analysis of different bias mitigation strategies, detailed technical standards for data science practitioners, and a user-facing app showcasing those standards, all published by the CDEI. Our aim was to identify the best, most practical approaches that developers on the ground could use to create fairer, safer algorithms.

At Faculty, we’ve always firmly believed that AI can’t be adopted widely unless it can be implemented safely. Progress and protection must go hand in hand. That’s why we have dedicated significant resources to the field of AI safety, developing practical tools and approaches designed to protect the four key pillars of safe and ethical AI.

We firmly believe that, to ensure AI safety in the real world, practical tools and approaches need to be developed, not just sets of principles or frameworks (as important as those might be). We’ve seen more than 50 such sets of AI principles and tech regulations published worldwide in the last few years, but the hard truth is that these principles are often ignored by practitioners. Why? Because ‘principles’ aren’t enough to make safe, fair, ethical AI a reality for real data science teams. Indeed, the same is true of data legislation more broadly, such as the ‘explainability’ provisions in the GDPR. Data science teams need more than goals; they need guidance and support on how to achieve those goals.

The problem has always come down to two seemingly insurmountable difficulties. Firstly, regulators are often accused of lacking the technical expertise needed to provide more than a general direction for data scientists to follow. Secondly, private data science companies are likely to hold the necessary technical expertise, but should not be allowed to set standards independently for what is safe, ethical and fair.

Instead of restricting itself to general principles, our work with the CDEI takes the next step towards implementation by providing technical standards aimed at practitioners. These include a clear structure for considering different mathematical definitions of algorithmic bias, and a range of approaches that practitioners can use to detect and mitigate that bias throughout development and deployment.
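To give a flavour of what ‘defining bias mathematically’ can look like in practice, here is a minimal sketch of two commonly used fairness metrics: demographic parity difference and equal opportunity difference. The function names, toy data and binary setup are illustrative assumptions for this post, not code taken from the CDEI standards or our technical report.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary protected attribute (0/1).
    A value of 0 means both groups receive positive predictions at the same rate.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)  # actual positives in this group
        tprs.append(y_pred[positives].mean())
    return abs(tprs[0] - tprs[1])

# Illustrative usage with made-up labels, predictions and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))        # 0.0
print(equal_opportunity_difference(y_true, y_pred, group))  # ~0.33
```

Even this toy example shows why the conversation matters: the two metrics can disagree (here the model looks unbiased by demographic parity but not by equal opportunity), so choosing which definition to optimise is a policy decision as much as a technical one.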

Our work with the CDEI on bias mitigation shows the value of long-term, in-depth partnerships between government and industry. Such partnerships allow institutions like the CDEI to build a complete, realistic picture of the practical steps and decisions that data scientists would need to take in order to put these high-level principles into practice. And when concepts like fairness, which often appear in those principles, have to be defined precisely and mathematically, there is an opportunity for a fantastically rich conversation between policy makers, organisational leaders, and practitioners.

The CDEI’s Review is a huge step towards making safe, operational, impactful AI a reality for organisations worldwide. We hope it marks the beginning of greater practical collaboration between government, industry and regulators on what best applied practice for algorithmic fairness, and for AI safety more broadly, should look like. We look forward to seeing the public sector, and indeed the private sector too, take the next leap forward: putting these technical standards into action, improving upon them further, and so making safe AI a reality for the UK.
