At Faculty, we believe organisations shouldn’t wait for government regulation to employ safe AI practices. As well as ethical and reputational benefits, there are clear commercial reasons for building safe practices into your machine learning efforts from the outset. 

With the potential value of artificial intelligence to the UK economy estimated at £200 billion (or 10% of UK GDP) through 2030, it is clear that this technology will have a transformative impact on our society. What is not yet so clear is who will take responsibility for ensuring that this technological transformation is applied safely.

Explainability

One of the core challenges of making AI safe is making AI ‘explainable’.

Explainable AI (sometimes known as “XAI”) is a hot term in the field right now. It encompasses a range of approaches for providing “explanations” for model outputs that are understandable by humans, even for black-box modelling techniques.

You may be familiar with some explainability approaches already. If you are not – or if you are not yet embedding these approaches in your data science team’s processes – here are some of the reasons you should be acting right now.

You are leaving value on the table

Generally, we find that more complex, powerful models can achieve better performance on a given task. But these more complex models are also harder to interpret, leading to the trade-off between performance and interpretability illustrated below.

In many businesses, simple models are used intentionally for their interpretability. While choosing a simple model first is an excellent development practice, it can mean that predictive value is left on the table. When these models are deployed at scale, this can mean millions of pounds of opportunity cost.

By employing sensible explainability practices, we are able to have our cake and eat it too. We can train complex, black-box models to get the best predictive performance, then apply explainability techniques afterwards to provide interpretable explanations of individual predictions – and of the model as a whole.

When properly developed, these explanations can provide solid evidence and auditability of the underlying decision-making process for a stakeholder, customer or regulator, while reducing the need to rely on weaker-performing models.
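
To make this concrete, here is a minimal sketch of post-hoc explanation in code. It uses the open-source shap library on a synthetic data set as a stand-in for our own tooling; the model and data are purely illustrative.

    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real tabular data set.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A complex, "black-box" model chosen for predictive performance.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Post-hoc explanation: SHAP values attribute each individual prediction
    # to the input features, without constraining the choice of model.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)

    # Per-feature contributions for a single prediction (log-odds scale)...
    print(dict(enumerate(shap_values[0].round(3))))
    # ...and a global view of which features drive the model overall.
    shap.summary_plot(shap_values, X_test)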

Your models have unknown failure modes

Explainability can also be highly valuable in improving model development.

During training, models are optimised to perform best overall against a given metric, even though there may be specific areas of the input data space where the model performs badly – blind spots. These blind spots can carry significant risk, particularly as companies deploy portfolios of models that feed into one another.

An example of this is shown below. We have trained a model on a data set to predict how likely a driver is to have a collision, based on their driving history. We are using Faculty’s explainability tools to visualise one of the model’s features (miles driven per year) against the impact of that feature on each individual prediction.

Where we see clusters of incorrect predictions at the edges of the chart, we can identify that the model may be underperforming for extreme values of this input feature. New data with extremely high or low values of this feature is more likely to be predicted incorrectly.
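
The sketch below shows the kind of feature-versus-impact view described above, using shap’s dependence plot as an open-source stand-in for our tooling; the miles-per-year feature, the other features and the labels are invented for illustration.

    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    # Hypothetical driving-history features and collision labels.
    X = pd.DataFrame({
        "miles_per_year": rng.gamma(shape=2.0, scale=6000.0, size=2000),
        "years_licensed": rng.integers(1, 40, size=2000),
        "previous_claims": rng.poisson(0.3, size=2000),
    })
    y = (rng.random(2000) < 0.1 + X["miles_per_year"] / 200000).astype(int)

    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Feature value vs. its impact on each individual prediction: clusters of
    # large impacts at extreme mileages would flag a potential blind spot.
    shap.dependence_plot("miles_per_year", shap_values, X)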

This insight allows us to revisit our model development process to rectify the issue, or to build input-data validation into production to catch these instances.
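
A production input-data check prompted by that insight could be as simple as the following sketch; the feature name and thresholds are hypothetical.

    # Flag incoming rows whose features fall outside the range the model
    # was trained on, so they can be routed for manual review.
    TRAINED_RANGES = {"miles_per_year": (500.0, 40000.0)}

    def validate_inputs(row: dict) -> list:
        """Return warnings for features outside the range seen during training."""
        warnings = []
        for feature, (low, high) in TRAINED_RANGES.items():
            value = row.get(feature)
            if value is None or not (low <= value <= high):
                warnings.append(f"{feature}={value} outside trained range [{low}, {high}]")
        return warnings

    issues = validate_inputs({"miles_per_year": 85000.0})
    if issues:
        print("Route for manual review:", issues)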

Explainability provides better visibility of our models’ weak points, allowing us to proactively develop stronger models and machine learning pipelines.

Trust is as important as prediction

In deployed AI applications across many different fields, the output of a model is sent to a human, who uses the prediction to inform a decision. This is especially true in high-stakes and customer-facing applications.

In these instances, it is critical to ensure that human operators trust the model outputs, lest they ignore them and negate the effectiveness of the AI initiative.

Explainability meets this requirement by allowing you to surface the explanation for a given prediction alongside the prediction itself. This additional information makes the prediction more actionable for the operator and helps build trust in the AI system.

Revisiting our collision prediction example, you could envisage the outputs of this model being used by an insurance agent to make a policy decision. In addition to the collision-risk probability, the model could also serve up an explanation plot showing the agent the key features that drove the predicted risk. This additional information means the agent is more likely to trust the model and incorporate the prediction into their final decision.
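
As a rough sketch, the service behind such a screen might return the top feature contributions alongside the risk score. The model, explainer and feature names below are assumed from the earlier sketches rather than taken from a real system.

    import numpy as np
    import pandas as pd

    def predict_with_explanation(model, explainer, row: pd.DataFrame, top_n: int = 3) -> dict:
        """Return the collision-risk probability plus its top feature contributions."""
        probability = float(model.predict_proba(row)[0, 1])
        contributions = explainer.shap_values(row)[0]
        ranked = np.argsort(np.abs(contributions))[::-1][:top_n]
        return {
            "collision_risk": probability,
            "top_drivers": {row.columns[i]: float(contributions[i]) for i in ranked},
        }

    # The returned dictionary can be rendered in the agent's interface
    # alongside the application, so the score never arrives unexplained.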

Increasing human trust in – and adoption of – AI model outputs will be a key differentiator for companies that are able to employ AI at scale. Explainability will be a powerful tool to help achieve this goal.

You might just have to anyway

For some companies, providing explanations of model decisions may already be a legal requirement.

GDPR stipulates that data subjects have the right not to be subject to a decision based ‘solely on automated processing’ which produces ‘legal or similarly significant effects’, without ‘the right … to obtain an explanation of the decision reached after such assessment’.

Though there has been some debate about the exact implications of this legislation for companies, it seems increasingly clear that “protection by design” practices will become the new gold standard for companies deploying machine learning in their business.

Protection by design means making AI safety – and therefore explainability – a core part of your data science team’s efforts. We believe that companies should invest in good practices now to get ahead of the game.

Faculty’s approach to explainability

At Faculty, our unique combination of research and real-world expertise means we are perfectly placed to support you on the journey of implementing explainable AI.

Our work in AI safety and explainability is built on a foundation of better basics, guided by research and sound statistical practice. We have built on existing explainability libraries to make them more computationally tractable, and to incorporate often-overlooked but critical features such as causal structures, confidence intervals and semantic feature descriptions.

Our data science workbench – Faculty Platform – incorporates our AI safety tools and is constantly updated with the outputs of our R&D team, making it the best place to do safe AI.

Making AI real means making AI safe, and the best time to start doing AI safely is today.


If you’re interested in hearing more, or in applying explainability within your business or data science team, drop us a line.

