In April, the EU became one of the few governmental bodies to propose regulation for artificial intelligence. The draft included controls on high-risk uses of the technology, such as systems that evaluate creditworthiness, administer justice or screen job applicants. 

The demand for this kind of regulation is not unfounded. There are many reasons executives may be hesitant to adopt AI technologies for particular applications. For those that are the most sensitive, carry a greater risk of producing the wrong outcome, or are open to accusations of bias, many have deemed the risk too high. Executives have seen other companies go down the automation route in these types of applications and get badly burned. A lack of trust in the robustness of the technology, and the exacting standards required for accurate decision-making in these instances, have become barriers to innovation in this area. But as well as ensuring that you have the very highest safety standards built into the AI technology, there is a not-so-new way to take the ‘risk’ out of ‘high risk’. It’s called ‘human-in-the-loop’ (HITL).

Human-in-the-loop is a branch of AI that brings together machine and human intelligence to create machine learning models. Humans are involved in setting up the system, tuning and testing the model so that its decision-making improves, and then actioning the decisions it suggests. 

The tuning and testing stage is crucial. It’s what makes AI systems smarter, more robust and more accurate. 
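
As a rough illustration of that cycle, here’s a minimal sketch in Python using scikit-learn and synthetic data. The stand-in ‘human judgement’ function and the data are purely illustrative assumptions: a model is set up on historical decisions, a human checks and corrects its suggestions, and the corrections are folded back into the training set.

```python
# A minimal sketch of the set-up / tune-and-test / action cycle described above.
# Data and the human_override stub are illustrative, not a real workflow or API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] > 0).astype(int)            # stand-in historical decisions
model = LogisticRegression().fit(X_train, y_train)   # 1. set up the system

X_new = rng.normal(size=(20, 4))
suggestions = model.predict(X_new)                    # the model suggests decisions

def human_override(features, suggestion):
    """2. Tune and test: a human checks each suggestion and corrects it if wrong."""
    return int(features[0] > 0)                       # stand-in for human judgement

corrections = np.array([human_override(x, s) for x, s in zip(X_new, suggestions)])

# 3. Action the reviewed decisions, and fold the corrections back in so the
#    model's decision-making improves over time.
X_train = np.vstack([X_train, X_new])
y_train = np.concatenate([y_train, corrections])
model = LogisticRegression().fit(X_train, y_train)
```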

Human-in-the-loop also refers to the need to have a certain degree of human supervision in fields where errors can cost much more than just profits. Intelligent automation (where AI and automation join forces) best serves simple situations, like reminding the staff of an online retailer when inventory levels are low. But prioritising hospital beds for the most at-risk patients, for example, is a more complex decision that requires a lot more input from a human decision-maker. 

With human-in-the-loop machine learning, businesses can enhance and expand their capabilities with trustworthy AI systems whilst humans set and control the level of automation. So, why has HITL become so important? And how can businesses find the right balance between humans and machines?

AI has become essential across industries 

Machine learning has become an increasingly useful tool for businesses across a range of sectors. From modern fintech and eCommerce startups to established processes in healthcare and supply chain management, decision-making is increasingly being influenced by machines. 

In fact, 75% of executives fear their business will go under if they don’t invest in AI soon. 

As more processes are automated and enhanced with machine learning, AI has moved on from being a competitive advantage. Now it’s a necessity. That means to retain a competitive edge, the AI your business adopts needs to be better, faster and more accurate than everyone else’s in your industry, all while being able to evolve with disruption in the wider technology landscape. 

The problem is, machine learning takes time to reach a given level of accuracy. It needs to process lots of training data to learn how to make decisions, potentially delaying businesses that are adopting it for the first time. Human-in-the-loop machine learning gives your AI system a way to shortcut that process. With human supervision, the model can learn from human intelligence and deliver more accurate results despite a lack of data.

That means human-in-the-loop ML ensures your AI system learns and improves its results faster, so you can make use of it soon after adoption. 
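
One hedged illustration of how that shortcut can work is uncertainty sampling: ask the human to label the examples the model is least sure about first, rather than labelling at random. The sketch below uses synthetic data and arbitrary batch sizes purely to show the idea, not a production-ready labelling pipeline.

```python
# Illustrative uncertainty sampling: focus scarce human labelling effort on the
# examples closest to the model's decision boundary. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(1000, 10))
y_pool = (X_pool @ rng.normal(size=10) > 0).astype(int)  # the labels a human would give

labelled = list(range(20))          # start with only 20 labelled examples
for _ in range(5):                  # five rounds of asking a human for 10 labels
    model = LogisticRegression().fit(X_pool[labelled], y_pool[labelled])
    probs = model.predict_proba(X_pool)[:, 1]
    # Pick the unlabelled examples the model is least confident about.
    candidates = [i for i in np.argsort(np.abs(probs - 0.5)) if i not in labelled]
    labelled += candidates[:10]     # the human labels these next

print(f"Model trained with only {len(labelled)} human labels out of {len(X_pool)}")
```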

Poor AI decision-making can be costly for businesses 

Making the wrong decision doesn’t end well in any industry. That’s one reason so many companies turn to AI to support their decision-making. But when they rush the AI onboarding process and are overloaded with inaccurate results, businesses don’t just miss out on opportunities to leverage data to their advantage.

They sacrifice profits, too.

For example, if a forecasting tool made an inaccurate prediction of minimum inventory for an online retailer, the retailer could be unable to fulfil orders. Obviously, this would damage their revenue and reputation and risk the trust of their customers. 

On the other hand, a human-in-the-loop can spot problems in the technology before it is deployed at scale. Bias is one example of a problem that can go on to cause long-term damage with amplified negative impacts. Take the infamous Apple credit card: back in 2019, the tech company was swimming in negative press after bias in its machine learning models gave women much lower credit limits than men – even where women earned more money.
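
One illustrative check a human reviewer might run before a model like this is scaled up is a simple disparate-impact comparison across groups. The column names, the toy data and the 80% rule of thumb below are assumptions for the sake of the example, not a complete fairness audit.

```python
# A simple pre-deployment fairness check of the kind a human reviewer might run:
# compare average approved limits across groups before scaling up the model.
import pandas as pd

# Toy data standing in for real model decisions.
decisions = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved_limit": [3000, 9000, 4000, 8000, 3500, 9500, 5000, 7000],
})

group_means = decisions.groupby("gender")["approved_limit"].mean()
ratio = group_means.min() / group_means.max()

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8
# for human investigation before the model is deployed at scale.
if ratio < 0.8:
    print(f"Potential bias detected: group ratio {ratio:.2f} - escalate to human review")
```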

AI tools are normally employed to take over an existing function and therefore must meet certain benchmarks. But for high-risk use cases and industries, balancing human and machine collaboration is a bit trickier.

So, how much should a human be in the loop?

AI is really good at taking care of the simple stuff, like recommending a new Spotify playlist or acting as your personal assistant (e.g. Siri or Alexa). Humans always execute the final outcome the system recommends, so they are always in the loop. But that’s not the end of the discussion. Intelligent automation – where AI makes the decision and executes it – will still require some element of human oversight. 

This brings up the real conundrum: how much should the human be in the loop? 

There’s a range of factors that help determine the answer:

How complex the decision is

How much damage the wrong decision could cause

How much human domain knowledge (i.e. specialist knowledge) is required in the field

All of these factors help define the level of risk associated with the decision. If AI were in complete control of automating a high-risk process without a sufficient level of accuracy, the potential threat to human wellbeing and business security would make that automation irresponsible. 
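
One hedged way to picture how those factors might translate into practice is a simple routing rule: score each factor and let the highest score decide how much of the loop the human owns. The scales, thresholds and routing labels below are illustrative assumptions, not a recommended policy.

```python
# An illustrative way to turn the factors above into a routing decision.
# Thresholds and labels are assumptions for the sake of the sketch.
from dataclasses import dataclass

@dataclass
class Decision:
    complexity: float        # 0 (simple) to 1 (highly complex)
    potential_damage: float  # 0 (negligible) to 1 (severe)
    domain_knowledge: float  # 0 (none needed) to 1 (specialist judgement required)

def route(decision: Decision) -> str:
    """Decide how much of the loop the human should own for this decision."""
    risk = max(decision.complexity, decision.potential_damage, decision.domain_knowledge)
    if risk < 0.3:
        return "automate"          # AI decides and acts
    if risk < 0.7:
        return "human approves"    # AI recommends, a human signs off
    return "human decides"         # AI assists, a human makes the call

print(route(Decision(complexity=0.2, potential_damage=0.1, domain_knowledge=0.1)))  # automate
print(route(Decision(complexity=0.8, potential_damage=0.9, domain_knowledge=0.9)))  # human decides
```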

And of course, certain industries, such as healthcare, will always require human domain knowledge no matter how exceptional the technology. 

For example, AI systems can aid the allocation of hospital beds, but a human-led review of the decisions they make would still be essential. Technology can analyse prioritisation and diagnoses, but given the risk of bias and a model’s inability to effectively decide who is “more ill”, the final call is best left to human doctors. 

Other industries and use cases that fit within this framework include finance, insurance and security. 

The diagram below illustrates the challenge of balancing complexity with the kind of knowledge required. When it comes to verifying documents, for example, a human-in-the-loop isn’t just needed to provide the intensive training data for the tool, but also to regularly monitor that the intelligent automation is correctly verifying and rejecting documents. If the documents being verified are sensitive, such as immigration documents, much more human oversight is required.
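
As a sketch of what that oversight can look like in practice, a verification model’s confidence can decide whether a document is auto-accepted, auto-rejected, or routed to a human reviewer. The verify() stub and the thresholds here are assumptions for illustration only; sensitive document types would typically warrant lower automation thresholds.

```python
# Confidence-based routing for document verification: confident results are
# handled automatically, everything else goes to a human reviewer.
def verify(document: dict) -> float:
    """Stand-in for a real model; returns the probability the document is genuine."""
    return document.get("model_score", 0.5)

def route_document(document: dict, accept_at: float = 0.98, reject_at: float = 0.02) -> str:
    score = verify(document)
    if score >= accept_at:
        return "auto-accept"
    if score <= reject_at:
        return "auto-reject"
    return "human review"  # uncertain or sensitive cases stay with a person

print(route_document({"model_score": 0.99}))  # auto-accept
print(route_document({"model_score": 0.60}))  # human review
```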

This is not to say AI isn’t valuable, or shouldn’t be trusted, with the more complex tasks. Rather, it suggests that AI can still give us a helping hand with these problems. It can take care of the simpler processes so we can apply our human knowledge faster, more effectively and more accurately – the same improvements we see in AI when we introduce a human-in-the-loop.

This symbiotic relationship can generate the continuous improvement that drives future innovation. And this fulfils what many people think of as artificial intelligence’s destiny: to be a natural extension of human intelligence, co-existing to help humans and organisations make wiser decisions.


To find out more about how Faculty builds robust and fair AI systems, get in contact with our team. 

