When we consider the role of AI, many think of a tool that delivers one-way insight: given a fixed dataset, it generates an outcome. Yet sales teams, planners, and business leaders will, at times, recognise that their AI has no sight of a recent business deal or a wider macro shift (supply chain disruption, Covid-19) that has affected the accuracy of its output.

This can lead to a ‘human knows best’ mentality, in which machine insights are disregarded in favour of the human view. The issue is that neither a human nor a machine is ever 100% correct. By disregarding an AI output entirely, the human gains no new (and informed) perspective on the decision they are making, and may miss useful insight the model has to offer.

A two-way relationship

Instead, when we consider the role of AI, we should think of a two-way relationship: a continuous feedback loop between humans and the machine. A machine can generate accurate insights, as it has been built to do, the vast majority of the time, and a human can iterate and improve on them with additional contextual information as it emerges. In the remaining few instances where the outcome is incorrect, it is often because new qualitative information has yet to be accounted for.

Importantly, this isn’t a case of building a better model in the development stage. There will always be new and changing influences that affect the accuracy of a model’s output at any given time. New sales, last-minute business decisions, new partnerships, staff changes and more can all affect the accuracy of the AI, and their impact can only be accounted for if the machine is told about them.

Inputting knowledge into a model on an ongoing basis is a subtle but important process. Feeding in new information enables a model to account for new influences, surface unexpected risks and considerations, and help a business avoid pitfalls that were initially missed. Essentially, the feedback loop between human and machine is about tweaking what is already a good model and an accurate prediction to produce an even better model that offers richer, more specific insight, as the sketch below illustrates.
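As a simple illustration of what that ongoing input can look like, here is a minimal Python sketch. It is not Frontier’s implementation; the class, field names and figures are hypothetical. A planner supplies contextual signals (a new deal, a partnership) as structured records, and these adjust a baseline forecast for the dates they apply to.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch: human-supplied context is captured as structured
# records and used to adjust the model's baseline forecast. The same records
# could later be stored and considered at the next retraining cycle.

@dataclass
class ContextSignal:
    effective_from: date
    description: str        # e.g. "new distribution partnership signed"
    expected_uplift: float  # planner's rough estimate, e.g. 0.08 = +8%

def adjust_forecast(baseline: float, signals: list[ContextSignal], as_of: date) -> float:
    """Apply every contextual signal that is in effect on the forecast date."""
    adjusted = baseline
    for signal in signals:
        if signal.effective_from <= as_of:
            adjusted *= 1.0 + signal.expected_uplift
    return adjusted

signals = [ContextSignal(date(2021, 9, 1), "new distribution partnership", 0.08)]
print(adjust_forecast(10_000.0, signals, as_of=date(2021, 10, 1)))  # -> 10800.0
```

The point is not the arithmetic, which is deliberately trivial, but the shape of the loop: the model produces its best prediction, the human records what the model cannot yet know, and the combined output is better than either alone.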

Rather than a binary human-machine split, where humans hold contextual knowledge of the world and machines work with historical datasets and advanced predictive techniques, combining the two enables organisations to work with the most accurate forecasts and, ultimately, make the best decisions. Continuous feedback loops between human and machine give models much-needed resilience when operating in changing environments, and they mean that internal and external datasets and indicators, as well as human knowledge, all play their part in improving organisational performance.

Translating human insight into quantitative data

Correctly iterating a machine learning model in this feedback loop is a nuanced process. Teams must judge whether a machine’s prediction can stand on its own, or whether a person’s contextual insight needs to be fed in to improve its accuracy.

For the most part, it should be relatively easy to determine whether wider, qualitative context will benefit a machine. Take the modelling of the Alpha and Delta variants of Covid-19, for example. In the early days of the Delta variant, health organisations gained a reasonable understanding of how it would behave, such as its greater transmissibility. A model already fitted to the Alpha variant knew nothing of these changes, and so could not accurately predict the long-term impact of Delta without being given the relevant insight.

But how do we feed this qualitative, contextual data into a machine learning model? In Bayesian models, qualitative insights are built in by adjusting the relevant priors to reflect the current state of knowledge. For example, despite a lack of historical data on the Alpha variant of Covid-19 relative to the original strain, modelling Alpha was still possible: data scientists gave the Bayesian model rough prior estimates of Alpha’s transmissibility, then refined those estimates as further information became available. Translating evolving qualitative insights into the machine’s language in this way helped the model make a better-educated guess about the forecast.
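To make this concrete, here is a minimal sketch of the idea under illustrative assumptions. It is not the model described above, and every figure is made up: a rough qualitative estimate of a new variant’s secondary attack rate is encoded as a Beta prior, then sharpened with a standard conjugate update as observed data arrives.

```python
import numpy as np
from scipy import stats

# Qualitative insight: "roughly a 30% secondary attack rate, but we're unsure."
# A Beta(3, 7) prior has mean 0.30 with wide uncertainty. (Illustrative only.)
prior_alpha, prior_beta = 3.0, 7.0

# Later, surveillance data arrives: 42 infections among 120 exposed contacts.
infections, exposures = 42, 120

# Conjugate Bayesian update: posterior is Beta(alpha + successes, beta + failures).
post_alpha = prior_alpha + infections
post_beta = prior_beta + (exposures - infections)
posterior = stats.beta(post_alpha, post_beta)

print(f"Prior mean:     {prior_alpha / (prior_alpha + prior_beta):.2f}")
print(f"Posterior mean: {posterior.mean():.2f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")
```

The prior carries the human’s qualitative judgement; as evidence accumulates, the data gradually takes over. The same pattern scales to far more complex models than this two-parameter example.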

Co-creating better predictions 

Ensuring that machine learning models generate accurate output relies on a continuous feedback loop between human and machine. Many people argue for a ‘human-in-the-loop’ approach to AI, and that is important. But the best process is one in which humans and machines co-create and refine the loop together. Combining knowledge and insights from both sides, and translating qualitative insights into actionable data, gives organisations the opportunity to work with the best possible prediction models.


Combining human and machine intelligence is at the heart of our decision intelligence software, Frontier. Click here to find out how it’s helping organisations unlock deeper insights, more accurate forecasts and ultimately better decision making. 

