The what, the why, and the how of responsible AI are the fundamental questions behind our ethical approach. We believe AI has the power to change everything, so it’s vital that everyone, from the engineers who build it to the end users who rely on it, can trust the AI to behave responsibly.
Aiming for responsible AI is also central to helping companies build more responsible decision making processes.
As I discussed at the TechEx 2021 AI & Big Data conference, our principles at Faculty of fairness, explainability, privacy, and robustness underpin our approach to responsible and ethical AI. It’s useful to distinguish between the possibilities of artificial general intelligence, which often provokes concern, and AI’s current capabilities. Right now, what organisations really need to have answers to is the fairness, explainability, privacy, and robustness of their AI. The reality of AI today is that it improves efficiency and decision making, from helping banks make more accurate risk calculations to helping hospitals forecast capacity needs.
“What organisations really need to have answers to is fairness, explainability, privacy and robustness.”
The need for fairness, explainability, and privacy is clear and well documented, yet robustness is often the least understood of the four, owing to confusion around its meaning. Robustness is the endeavour to know when you can trust your algorithm, when you can have confidence in it. It asks: where is it likely to go wrong?
There are many different ways AI can go wrong; a common one is a shift in the data distribution that leaves the model no longer representative of the world it is making predictions about. It’s failure modes such as this that highlight the necessity of robustness and why it is integral to our ethics.
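To make the idea concrete, distribution shift can be checked statistically. The sketch below is purely illustrative, not a Faculty tool: it uses a two-sample Kolmogorov–Smirnov test to flag when live data has drifted away from the data a model was trained on (the function name, threshold, and synthetic data are my own assumptions).

```python
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_feature, live_feature, alpha=0.05):
    """Flag drift when a two-sample Kolmogorov-Smirnov test says the
    live distribution likely differs from the training distribution."""
    stat, p_value = ks_2samp(train_feature, live_feature)
    return bool(p_value < alpha)  # True -> distributions likely differ

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)  # data the model was trained on
live = rng.normal(0.5, 1.0, 5000)   # incoming data: the mean has shifted
print(check_drift(train, live))     # a clear shift should be flagged
```

A check like this would typically run per feature on a schedule, with a drift alert triggering a review of whether the model needs retraining.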
The path to AI ethics
The lifecycle of AI provides a useful framework for understanding the path to responsible AI, from the scientist to the user. Success depends on everyone involved being able to understand and trust the process, given that it will inform decision making: developing the system and ensuring the model is fair and explainable, validating it, and then taking it into production with the trust that this step demands.
Take a prediction tool: it needs to be trustworthy and explainable to overcome concerns about the reliability and reasoning of the algorithm. The user needs the nuance, not just the prediction, because the tool will inform decision making. This is why the monitoring process that follows production is key: it assesses how robust the AI is in use and how the model could be further improved.
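As a sketch of what such post-production monitoring might look like in practice (an illustrative example under my own assumptions, not a description of any particular product), a deployed model can track its rolling accuracy against known outcomes and flag when performance drops below an acceptable level:

```python
from collections import deque

class AccuracyMonitor:
    """Rolling accuracy tracker for a deployed model.

    Flags the model for review when accuracy over a sliding window of
    recent predictions falls below a chosen threshold, which may signal
    drift or a need for retraining."""

    def __init__(self, window=1000, threshold=0.9):
        self.outcomes = deque(maxlen=window)  # recent True/False results
        self.threshold = threshold

    def record(self, prediction, actual):
        # Store whether the prediction matched the observed outcome.
        self.outcomes.append(prediction == actual)

    @property
    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self):
        # Only alert once a full window of outcomes has been observed.
        full = len(self.outcomes) == self.outcomes.maxlen
        return full and self.accuracy < self.threshold
```

In use, each live prediction is recorded once its true outcome becomes known, and `needs_review()` is polled to decide when humans should step back in, keeping people in the loop rather than out of it.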
AI’s robustness is often only as good as its explainability. Consider the prospect of human error undermining the AI simply because the user does not understand the application. AI is too important to be left only to technologists: end users need an understanding of the AI if it is to be trusted and used correctly in its real-world applications.
The real-world need for ethical AI
The real-world applications of AI underscore the need for stringent ethical consideration. Big investment banks carry a huge compliance cost, usually whole divisions of people doing manual tasks such as checking risk and using credit information to approve customers. Bank employees often update risk scores only every three years or so because of the time it takes. AI can change this by using data efficiently to assess customer risk perpetually. This improves the compliance operation, offers continuously accurate information at a rate not realistically possible through human work alone, and can remove bias when fairness is properly accounted for.
“Going after responsible AI gives you, likely, net more responsible decision making processes than what is humanly possible.”
Responsible AI, built and deployed well, can absolutely address and overcome many of the problems being faced today. A bank can now use an algorithm to run these processes more accurately, and more responsibly, than it currently does. It’s also key to recognise that humans often make biased decisions without any stringent moderation in place to catch them. AI offers the chance to go beyond the level of human decision making: from overcoming bias to improving accuracy, AI can offer more responsible decision making processes than is humanly possible.