What is ethical AI?

Is it a robot straight out of Hollywood, thoughtful enough to understand the difference between right and wrong and then make an informed decision?

For many business leaders, this image isn’t too distant from reality: AI is typically used to automate low-risk processes, or to augment and support managers’ ability to make intelligent decisions in areas of greater risk.

As AI models are given increasing control, business leaders want to make sure they don’t lose theirs. But while we’re all familiar with the risks and threats of unethical and unsafe AI, most business leaders aren’t so sure how to mitigate them. The answer? Ethical AI.

Let’s start with a definition

Ethical AI means designing and deploying AI systems in ways that consider the harms those systems can cause and actively mitigate them. These harms include bias and discrimination, unsafe or unreliable outcomes, unexplainable outcomes and invasions of privacy.

Whether an AI system is ethical comes down to (a) the purpose of the system as intended by the humans designing, deploying and using it, and (b) whether the system operates in a way that doesn’t cause harm, i.e. fairly, explainably and transparently.

For instance, if an online gambling firm built a tool that identifies vulnerable people in order to increase its profits, the use of the technology would be unethical. Or, if a recruitment agency deployed software that unfairly progressed more male candidates than female ones, the model itself would be unethical.
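To make the recruitment example concrete, here’s a minimal sketch of the kind of disparity check that would surface this sort of unfairness. The candidate data is made up, and the 80% threshold (the ‘four-fifths rule’ used in US employment guidance) is just one common heuristic – a real audit would run on production data with thresholds set by your own principles:

```python
# Minimal sketch of a disparity check on a recruitment model's outcomes.
# All decisions below are hypothetical; a real audit would use production data.

def selection_rate(outcomes: list) -> float:
    """Fraction of candidates the model progressed to the next stage."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions (True = progressed), grouped by gender.
decisions = {
    "male":   [True, True, True, False, True, True, False, True],
    "female": [True, False, False, False, True, False, False, False],
}

rates = {group: selection_rate(o) for group, o in decisions.items()}
disadvantaged = min(rates, key=rates.get)
advantaged = max(rates, key=rates.get)
ratio = rates[disadvantaged] / rates[advantaged]

# Four-fifths rule: flag for review if one group's selection rate falls
# below 80% of another group's.
if ratio < 0.8:
    print(f"Potential adverse impact: the {disadvantaged} selection rate is "
          f"only {ratio:.0%} of the {advantaged} rate.")
```

A check like this is cheap to run on every retraining cycle, which is the point: unfairness is easiest to catch when you look for it routinely rather than after a complaint.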

A lot of organisations prefer to use the term ‘responsible AI’. This is when both the ethical and legal responsibilities of building and using AI systems are satisfied. Sitting underneath this broad umbrella term are a handful of other concepts:

• AI safety: the endeavour to ensure AI is deployed in ways that don’t harm humanity

• Data ethics: the evaluation of collecting, gathering, analysing and disseminating data in an ethical way

• AI ethics: the set of techniques, values and principles that guide moral conduct in the development, deployment and use of AI technologies (ethical AI is AI ethics put into practice)

• Machine ethics: the branch of AI ethics concerned with the moral behaviours of AI agents

• Roboethics: designing robotic and AI systems so they respond to real-world situations in an ethical way

Ethical AI is important because AI is now important

Sophisticated AI systems are being deployed to solve complex issues across business and society. But as AI expands into new use cases, it raises new questions, too.

Facial recognition technology can prove useful when deployed to track down suspects or identify missing persons. But what if it’s deployed for nefarious purposes by authoritarian governments? And how can developers respect data privacy if they’re collecting data (i.e. taking people’s pictures) without consent?

Ethical AI matters right now because AI is mainly used as a force for disruption. Its novel use cases mean ethical issues can’t always be foreseen or mitigated. And because AI performs processes and supports decisions at speed, it can produce negative outcomes, such as discrimination, at scale. Without sufficient monitoring, this can easily go unnoticed.

For organisations looking to break new ground in their industry with AI, taking ethical AI into consideration is essential. After all, this is where the greatest risks lie.

Disruptive organisations need ethical AI

AI that isn’t ethical can break regulations. It can harm users and infringe on their privacy. It can discriminate against your customers. It can lead to poor decision making. It can damage your business’ reputation. But, most importantly, it can have serious consequences for society.

These risks have been well documented by the press, with bias and data protection dominating the headlines on responsible AI. Consumers are now just as aware of the impact of unethical AI: it’s their data that’s mishandled and their customer experience that suffers.

The media coverage is not blown out of proportion: according to one Capgemini Research Institute survey, 90% of C-suite executives have reported at least one instance where their AI was unethical. And 60% of them have been subject to legal scrutiny as a result. But this brings us to the challenge of practising ethical AI.

‘Legal’ doesn’t mean ‘ethical’. It’s not illegal for marketers to create and share deepfake videos online to deliver a more powerful campaign, and it’s not illegal for social media platforms to use algorithms that disproportionately show young women vast quantities of content about dieting after liking a single post about fitness.

Regulations set a minimum threshold of what’s acceptable. But deciding what’s ethical comes down to you.

How organisations should practise ethical AI

Here’s the thing: ethical AI isn’t really about AI. It’s about ethics. Your organisation probably already has its own ethics, framed in principles, practices and codes of conduct – all of which ensure you and your team refrain from doing things that could have a negative impact on people.

By that logic, the AI you use in your organisation should already be ethical. The challenge is adding the specific considerations needed to address the unique risks AI poses. There are three:

  1. You need to define a clear set of ethical AI principles: C-suite executives and IT leaders need to address the risks specific to their sector, e.g. industrial manufacturers need to focus on safety and reliability, whereas for consumer-facing organisations, equity and fairness take priority.

  2. You need to educate your employees on managing AI: Training your employees to monitor and manage their AI ensures it operates safely and produces ethical outcomes, e.g. HR needs to focus on bias in recruitment screening tools, whereas marketing needs to pay attention to data privacy rules and risks.

  3. You need to nail down AI safety: By ensuring all deployed AI systems are fair, explainable and follow rigorous privacy standards, potential ethical issues can be prevented before they arise. It’s like putting technical guardrails in place – see the sketch after this list for one simple example.
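As an illustration of what one such guardrail might look like, here’s a minimal sketch of a prediction wrapper that keeps protected attributes out of a model’s inputs and logs every decision for later audit. The attribute list, feature names and model interface are all assumptions made for the example – and note that removing protected attributes alone doesn’t stop proxy discrimination, so treat this as one layer of defence, not a complete solution:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

# Assumed organisational policy: attributes the model must never see directly.
PROTECTED_ATTRIBUTES = {"gender", "ethnicity", "age"}

def guarded_predict(model, candidate: dict) -> bool:
    """Score a candidate while enforcing two simple guardrails."""
    # Guardrail 1: strip protected attributes before they reach the model.
    # (Necessary but not sufficient: proxies such as postcode can still leak bias.)
    features = {k: v for k, v in candidate.items()
                if k not in PROTECTED_ATTRIBUTES}
    decision = model.predict(features)
    # Guardrail 2: log an audit trail so every outcome can be reviewed later.
    logger.info("decision=%s at=%s features=%s", decision,
                datetime.now(timezone.utc).isoformat(), sorted(features))
    return decision

# Hypothetical stand-in for a real screening model.
class DummyModel:
    def predict(self, features: dict) -> bool:
        return features.get("years_experience", 0) >= 3

print(guarded_predict(DummyModel(), {
    "years_experience": 5, "gender": "female", "skills_score": 0.8,
}))
```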

But there is another option: partner with an AI provider, like Faculty. We only build responsible AI that follows a rigorous set of ethical AI principles and satisfies our high standards of AI safety. And as your partner, we provide additional support and education on top of the AI models we build and deploy, building out your data science capability so your team can manage your AI ethically and independently in the future.

To find out more about our approach to safe and ethical AI, get in touch with our team.