Bias in AI: how artificial intelligence can expose partiality in human decision making
In late 2021, the New York City Council passed a new measure into law, aiming to curb the gender and racial bias heavily associated with AI-enabled, automated recruitment tools by subjecting them to a ‘bias audit’. And it’s about time.
In 1988, a British medical school was found guilty of discrimination in its recruitment process: women and people with non-European names were far less likely to reach the interview stage when applications were screened by its computer programme. Yet some 30 years later, Amazon’s own AI-enabled recruitment tool made a similar mistake when hiring for software developers, directly penalising CVs containing the word “women’s”.
AI’s capabilities may have advanced in those three decades, but the risks have proved stubbornly persistent. As more business leaders use AI to support decision making across recruitment, operations and customer service, headlines have continued to call out its potential for discrimination. AI bias produces skewed, inaccurate results – and it enables bad decision making at scale.
Contrary to popular belief, however, AI can actually help expose and eradicate bias in decision making. So, what is really causing bias in AI systems? And how can AI be used to make decision making – and even entire organisations – fairer?
The problem isn’t the technology – it’s us
The swell of negative press tends to focus on the impacts of AI bias, but the causes get far less coverage. There are two main drivers of bias in AI: data bias and societal bias.
Data bias arises when algorithms are trained on biased data that is not a fair representation of the population. When Amazon admitted it was unable to build a fair recruitment tool, it was because the algorithm had been trained on CVs sent to the company over a 10-year period – the majority of which came from men.
Societal bias arises when AI behaves in a way that reinforces biases that already exist in society, from social intolerances to institutional discrimination. As recently as March 2021, a US regulator confirmed that Apple’s credit card wasn’t unfairly discriminating against women by giving them far lower credit limits – women have historically had lower credit limits than men, and the algorithm simply reflected that history.
So, your technology is no more biased than any member of your staff. The data we feed in and the way we train AI can introduce bias into our systems. But the difference is that, unlike with humans, we can detect, expose and correct these biases with explainable AI.
Explainability can help detect bias in AI and our decision making
By building explainability into an AI system, we can discover how its predictions are made. This allows us to detect bias within AI and understand which characteristics influence decisions, such as gender, ethnicity or location. But that’s as far as explainability can go – it can’t address bias directly. To mitigate bias, a cluster of ‘fairness’ techniques is required.
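To make this concrete, here is a minimal, purely illustrative sketch – not Faculty’s framework – of how one simple explainability technique, permutation feature importance, can reveal whether a protected characteristic such as gender is influencing a model’s decisions. The dataset, feature names and model are all hypothetical.

```python
# Illustrative sketch only: uses permutation importance from scikit-learn as a
# simple global explainability check on a hypothetical recruitment model.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000

# Hypothetical recruitment data: 'gender' is the protected characteristic.
X = pd.DataFrame({
    "years_experience": rng.integers(0, 20, n),
    "test_score": rng.normal(60, 15, n),
    "gender": rng.integers(0, 2, n),  # 0 or 1, purely synthetic
})
# Simulate historically biased hiring decisions that penalise gender == 1.
y = ((X["test_score"] + 2 * X["years_experience"]
      - 10 * X["gender"] + rng.normal(0, 5, n)) > 75).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
# A large importance for 'gender' is a red flag that the model relies on it.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>18}: {score:.3f}")
```

A check like this only surfaces the problem – deciding what to do about an influential protected characteristic is where the fairness techniques come in.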
Faculty recently proposed a new framework that combines these explainability and fairness techniques, and can both identify and reduce bias in AI.
Unlike AI bias, human bias is often unconscious and near-impossible to probe, mainly because we aren’t always aware of the factors affecting our decision making, and they certainly aren’t recorded in data for us to analyse. But when we work with AI to deliver fair outcomes, we can substantially lower the levels of bias.
We need to work together with AI to make fairer, better decisions
Removing bias from decision making requires humans and machines to collaborate more closely than ever before: we can monitor AI systems to ensure they produce fair outcomes, and in turn they enable us to make fairer decisions.
This kind of symbiotic relationship is called ‘human in the loop’: decision makers set ethical standards for fairness and monitor outcomes so that AI systems can be deployed safely. But ultimately, control lies with your team. Here’s how you can ensure your AI systems stay bias-free:
Set the standards of fairness for your AI systems: only decision makers can determine if an AI system is safe to use. A shared set of principles that defines “safe” is essential.
Prioritise building or adopting explainable AI: understanding why your technology has made a particular decision is key to determining whether your AI is “safe” and ready to use.
Regularly monitor your AI: run experimental tests on your data and AI systems to ensure they are not being affected by data or societal bias (see the sketch after this list).
Build more diverse teams: the way teams handle data can affect the algorithms they build. By emphasising diversity, you bring in new perspectives that are less likely to overlook bias and what might be causing it.
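As an illustration of the kind of experimental test mentioned above, the sketch below compares selection rates across groups on a batch of model predictions – a simple demographic parity check. The data, group labels and the 0.8 threshold are assumptions made for the example; in practice you would use whatever fairness metrics and thresholds your decision makers have agreed.

```python
# Minimal sketch of a routine fairness check: compare the rate of positive
# model decisions across groups (demographic parity). The data and the 0.8
# threshold (the 'four-fifths rule' heuristic) are illustrative only.
import pandas as pd

def selection_rates(decisions: pd.Series, groups: pd.Series) -> pd.Series:
    """Share of positive decisions within each group."""
    return decisions.groupby(groups).mean()

def demographic_parity_ok(decisions: pd.Series, groups: pd.Series,
                          threshold: float = 0.8) -> bool:
    """Flag the model if any group's selection rate falls below
    `threshold` times the highest group's rate."""
    rates = selection_rates(decisions, groups)
    return (rates.min() / rates.max()) >= threshold

# Hypothetical batch of recent predictions from a recruitment model.
batch = pd.DataFrame({
    "shortlisted": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1],
    "gender":      ["f", "f", "m", "m", "f", "m", "f", "f", "m", "f", "m", "m"],
})

print(selection_rates(batch["shortlisted"], batch["gender"]))
if not demographic_parity_ok(batch["shortlisted"], batch["gender"]):
    print("Warning: selection rates diverge across groups – investigate for bias.")
```

Checks like this are deliberately simple; the point is that fairness is something you can measure and monitor continuously, not a property you verify once at launch.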
AI systems need to be monitored regularly and maintained properly to enable fair and representative decision making. Done well, AI can build fairness directly into your business; done badly – without such considerations – it can easily be detrimental to wider society.
Get in touch to find out more about how Faculty builds AI systems reinforced with AI safety.