AI Safety: correcting 200,000 years of human error

Amid the dizzying possibilities for the future of artificial intelligence (AI) and data, in recent years somehow we have often ended up worrying about how AI is going to go wrong.

2019-08-26 · Safety

From an AI-triggered Armageddon to questions about who our self-driving car should crash into when it has to make the choice, the popular imagination is pretty concerned about the downside of these ‘Weapons of Math Destruction’.

Recently, we’ve stopped asking whether software will go wrong and instead just assume that it will. From cyber hacks to Windows crashes, the demonstrated fallibility of software is so obvious that it is hard to see how it can reliably go right.

But more reliable machines are possible, and we should strive to improve their safety and performance, as well as their treatment of data privacy. We don’t build bridges and then think about bolting on safety afterwards; in physical engineering, safety is built in from the start because the price of failure is high. The same should hold for software: the harm caused by privacy failures, biased or unexplainable results, and plain software glitches has been making headlines for years.

Human error

Here, though, I don’t want to focus on machines making errors. Instead, I’d like to look at the less remarked upon, but far more common, problem of people getting things wrong:

• According to the World Health Organisation, more than 1.25 million people die each year as a result of road traffic accidents

• In medicine, the average incidence of diagnostic errors is estimated at 10-15%

• In 1999, the Mars Climate Orbiter, a $330 million spacecraft, was destroyed because of a failure to properly convert between metric and imperial units of measurement (see the sketch after this list)

• In manufacturing, 23% of all unplanned downtime is the result of human error; in telecoms, the figure is over 50%
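
The Mars Climate Orbiter loss is a reminder of how easily implicit assumptions slip past human review: one system reported thruster impulse in pound-force seconds while another expected newton-seconds. As a minimal sketch (the numbers and function name here are illustrative, not mission data), making units explicit in code turns a silent factor-of-4.45 mistake into something a reviewer or a test can catch:

```python
# Naming the units in the code makes the conversion impossible to forget silently.

LBF_S_TO_N_S = 4.448222  # newton-seconds per pound-force second

def lbf_s_to_newton_s(impulse_lbf_s: float) -> float:
    """Convert impulse from pound-force seconds to newton-seconds."""
    return impulse_lbf_s * LBF_S_TO_N_S

reported_impulse_lbf_s = 10.0  # illustrative value, not mission data

# Treating the raw number as newton-seconds would understate the thruster
# effect by a factor of about 4.45; the explicit conversion removes the ambiguity.
print(lbf_s_to_newton_s(reported_impulse_lbf_s))  # ~44.48 N·s
```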

The rise of behavioural economics and cognitive psychology has taught us about more than 100 ways in which people can be irrational, or suffer from ‘cognitive biases’. For example, we tend to overestimate the importance of what we know and under-recognise the effects of what we do not know. We see patterns where there aren’t any. We give undue weight to more recent events, and confuse chronological order with causation.

Left untended, human error is only likely to get worse in the coming years. The flood of information now being produced means we can no longer read and digest all that has been written: police investigators, intelligence officers, solicitors and criminal barristers have their work cut out. We also increasingly live within social networks and cities too large for our brains to traverse, and the complexity of our supply chains and of our systems of policy and governance is steadily growing too.

Fortunately, we now live at the beginning of an age where help is at hand. AI has come out of the lab and into our offices, our homes and our pockets.

The possibility of safe, high-performing AI

The opportunity to reduce human error arises not because machines are infallible, but because they are more predictable and more transparent. In the popular imagination, algorithms are all ‘black boxes’; in reality, it is the human mind that is the black box. Next to our minds, the structure and operation of machine learning algorithms are considerably more transparent and open to inspection.

We can now see that in many areas machines, or a coalition of machines and people, are driving down rates of error. The AI in self-driving cars is currently about as safe as a human driver, and it won’t be licensed for general use on the roads until it is much safer than that.

Unfortunately, the source of AI error is generally human – either in the data or in the code. In the data, there is always the risk that AI simply learns human errors and repeats them rather than correcting them. In the code, we need to test for edge cases more seriously. For example, in the Uber autonomous vehicle crash, the car had been programmed to recognise a cyclist or a pedestrian, but not a pedestrian pushing a bike.

Test-Driven Development is a software practice in which tests are written to capture how things can go wrong, so that the software cannot go wrong in that way again. Data scientists are now getting better at applying these techniques to machine learning.
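
As a minimal sketch of what this might look like (the `classify_obstacle` function, its module and the image paths below are hypothetical stand-ins for whatever perception model is under test), an edge case like the one in the Uber crash can be pinned down as a test that runs on every build:

```python
# Hypothetical regression tests, pytest style, for an obstacle-classification model.
# Each known edge case becomes a permanent, automated check.

from perception_model import classify_obstacle  # hypothetical module under test

BRAKE_WORTHY = {"pedestrian", "cyclist", "pedestrian_with_bike"}

def test_recognises_pedestrian():
    assert classify_obstacle("images/pedestrian_crossing.png") in BRAKE_WORTHY

def test_recognises_cyclist():
    assert classify_obstacle("images/cyclist_at_dusk.png") in BRAKE_WORTHY

def test_recognises_pedestrian_pushing_bike():
    # The edge case above: a pedestrian pushing a bicycle should still be
    # treated as something to brake for, not fall between two categories.
    assert classify_obstacle("images/pedestrian_pushing_bike.png") in BRAKE_WORTHY
```

Once a failure is captured this way, any future change to the model or its training data that reintroduces it is caught automatically.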

Cognitive transparency

Not only can computers reduce the error rate we are subject to; they can also offer a level of transparency and articulacy about their reasoning that human rationality has never reached.
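
As an illustrative sketch (using scikit-learn and the classic iris dataset rather than any particular production system), even a simple model can print out every rule it uses to reach a decision, something no human expert can do for their own intuitions:

```python
# A small illustration of algorithmic transparency: a decision tree's full
# reasoning can be dumped as human-readable rules and audited line by line.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(iris.data, iris.target)

# Every split the model will ever apply, written out explicitly.
print(export_text(model, feature_names=list(iris.feature_names)))
```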

This predictability and auditability mean there is a real opportunity to eliminate most of the sources of human error that have dogged us throughout history.

On balance, I think we should worry less about the robotic future than about the human fallibility of the present. As we introduce AI into our offices and factories, we should strive for those algorithms to be safe, but we should also judge the human alternative more critically than we often do.

The evidence suggests an optimistic future, not one filled with indolence and unemployment. The robots are coming to work with us, and they won’t be perfect either, but they will help us to fail in new and more interesting ways. That, after all, is what progress is all about.