The Future of Intelligence: The potential and the risks of artificial intelligence

September 2, 2015

This blog post summarises the views of two speakers at our recent event, The Future of Intelligence, on the potential and the risks that greater use of artificial intelligence will bring.

Michael Osborne: Machine Learning and the Future of Work

Machine learning is the study of algorithms that can learn and act. Why use a machine when we already have over seven billion humans to choose from? One reason is that algorithms are significantly cheaper, and becoming ever more so. But just as important, algorithms can often do better than humans, avoiding the biases that taint human decision-making.
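To make the idea of "an algorithm that can learn and act" concrete, here is a minimal sketch: a program that estimates a rule from noisy examples and then uses that rule to act on input it has never seen. The data and numbers below are purely illustrative, not from the talk.

```python
# A minimal sketch of learning from examples (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, size=50)  # hidden "true" rule plus noise

# Learning step: estimate the slope and intercept from the examples alone.
slope, intercept = np.polyfit(x, y, deg=1)

# Acting step: predict for an unseen input.
print(f"learned rule: y ≈ {slope:.2f}x + {intercept:.2f}")
print(f"prediction at x = 20: {slope * 20 + intercept:.2f}")
```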

There are big benefits to be gained from the rise of the algorithms. Big data is already leading to programs that can handle increasingly sophisticated tasks, such as translation. Computational health informatics will transform health monitoring, allowing us to release patients from their hospital beds much earlier and freeing up resources in the NHS. Self-driving cars will allow us to cut down on the 90% of traffic accidents caused by human error, while the data generated by their constant monitoring of their surroundings will have big consequences for mapping, insurance, and the law.

Nevertheless, there will be big challenges from the disruption automation creates. New technologies derived from mobile machine learning and robotics threaten employment in logistics, sales and clerical occupations. Over the next few decades, 47% of jobs in America are at high risk of automation, as are 35% of jobs in the UK. Worse, it will be the already vulnerable who are most at risk, while high-skilled jobs are relatively resistant to computerisation. New jobs are emerging to replace the old, but only slowly: just 0.5% of the US workforce is employed in new industries created in the 21st century.

Policy makers are going to have to do more to ensure that we can all share in the great prosperity promised by technology.

Stuart Russell: Killer Robots, the End of Humanity, and All That: Policy Implications

Everything civilisation offers is the product of intelligence. If we can use AI to amplify our intelligence, the benefits to humanity are potentially immeasurable.

The good news is that progress is accelerating. Solid theoretical foundations, more data and computing power, and huge investments from private industry have created a virtuous cycle of iterative improvement. On the current trajectory, further real-world impact is inevitable.

Of course, not all impact is good. As technology companies unveil ever more impressive demos, newspapers have been full of headlines warning of killer robots, the loss of half of all jobs, or even the end of humanity. But exactly how credible are these nightmare scenarios? The short answer is that we should not panic, but there are real risks worth taking seriously.

In the short term, the clearest danger comes from lethal autonomous weapons: weapon systems that can select and fire upon targets on their own. According to defence experts, including the Ministry of Defence, these are probably feasible now, and they have already been the subject of three UN meetings in 2014–15. In the future, they are likely to be relatively cheap to mass-produce, potentially making them much harder to control or contain than nuclear weapons. A recent open letter from 3,000 AI researchers argued for a total ban on the technology to prevent the start of a new arms race.

Looking further ahead, however, what if we succeed in creating an AI system that can make decisions as well as, or even significantly better than, humans? The first thing to say is that we are several conceptual breakthroughs away from constructing such a general artificial intelligence, as compared to the more specific algorithms needed for an autonomous weapon or a self-driving car.

It is highly unlikely that we will be able to create such an AI within the next five to ten years, but conceptual breakthroughs are by their very nature hard to predict. The day before Leo Szilard conceived of the neutron-induced chain reaction, the key to nuclear power, Lord Rutherford claimed that “anyone who looks for a source of power in the transformation of the atoms is talking moonshine.”

The danger from such a “superintelligent” AI is that it would not by default share our goals. Even if we could agree amongst ourselves on what the best human values were, we do not understand how to reliably formalise them into a computer program. If we accidentally give a superintelligent AI the wrong goals, it could prove very difficult to stop.

For example, for almost any benign-sounding final goal we might give the computer, two plausible intermediate goals are to acquire as many physical resources as possible and to refuse to allow itself to be terminated. We might think that we are just asking the machine to calculate as many digits of pi as possible, but it could judge that the best way to do so is to turn the whole Earth into a supercomputer.
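A toy sketch (our own illustration, not from the talk; the objective, numbers and action names are invented) makes the logic visible: an agent that ranks actions purely by a single objective will score resource acquisition and self-preservation strictly higher than complying with a shutdown request, because both increase the objective.

```python
# Toy model of instrumentally convergent goals (illustrative only).

def digits_of_pi_computed(compute_units: int, still_running: bool) -> int:
    """The agent's only objective: how many digits of pi it computes."""
    return compute_units * 1_000 if still_running else 0

# Each action leads to an outcome: (compute available, still running?).
actions = {
    "comply_with_shutdown": (10, False),
    "keep_running_as_is":   (10, True),
    "acquire_more_compute": (1_000, True),
}

# A pure maximiser ranks actions by the objective alone, so resource
# acquisition dominates both staying put and complying with shutdown.
best = max(actions, key=lambda name: digits_of_pi_computed(*actions[name]))
print(best)  # -> acquire_more_compute
```

Nothing in the objective mentions resources or survival; the preference for both falls out of pure maximisation, which is the point of the argument.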

In short, we are moving with increasing speed towards what could be the biggest event in human history. As with global warming, there are significant uncertainties involved and the pain might not come for another few decades – but, also as with global warming, the sooner we start to look at potential solutions, the more likely we are to be successful. Given the complexity of the problem, we need much more technical research on what an answer might look like.

The rise of AI has the potential to change our economy and society dramatically within a relatively short period of time. Policy Exchange will therefore be returning to this subject regularly in the coming months.
