Nigel Shadbolt, professor of artificial intelligence at the University of Southampton and cofounder of the Open Data Institute, will be debating the future of AI on 24 May at HowTheLightGetsIn, a philosophy and music festival. WIRED is a festival media partner.
I once heard artificial intelligence described as making computers that behave like the ones in the movies. Hollywood’s computers and robots are invariably mad, bad and dangerous to know. So when we are warned about the emerging threat of AI to the future of humanity, people get alarmed.
There are plenty of things to be alarmed about in the modern world, from climate change to terrorism, Ebola outbreaks to nuclear reactors melting down. However, I am pretty relaxed about the threat from AI and actually very excited about what future developments hold.
AI isn’t poised to decide that we are superfluous to requirements and that the planet would be better off without us. To believe that is to misunderstand the progress we have made in AI and to forget a basic feature of human design. The rise and rise of computing power has brought huge advances in our ability to solve problems that previously seemed beyond machines. Examples include speech recognition, language translation, defeating human chess champions, systems taking part in quiz shows, vehicle navigation and control. The list goes on. However, the characteristic of this success is what I call the combination of brute force and a little insight.
When Garry Kasparov was beaten by an AI chess-playing programme it happened because of the machine's ability to search hundreds of thousands of positions, many moves deep into the game — brute force. The program had a large library of stored moves, from openings through to endgames — brute force. And the program was able to weight the relative goodness of candidate moves — a little insight. The result was a defeat for Kasparov.
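That recipe — exhaustive search steered by a weighted evaluation — can be sketched in a few lines. This is a minimal, hypothetical illustration of minimax search, not the program that beat Kasparov; the dictionary-based game tree and the `evaluate` function are assumptions made for the example.

```python
def evaluate(state):
    # "A little insight": a weighting of how good a position is.
    # Here it is just a stored score; a real engine computes it
    # from material, mobility and so on.
    return state["score"]

def minimax(state, depth, maximising=True):
    # "Brute force": exhaustively explore moves down to a depth limit.
    moves = state.get("moves", [])
    if depth == 0 or not moves:
        return evaluate(state)
    scores = [minimax(child, depth - 1, not maximising) for child in moves]
    # The machine picks its best move; it assumes the opponent
    # will reply with the move that is worst for it.
    return max(scores) if maximising else min(scores)
```

The interplay is visible in the last line: the search supplies breadth, the evaluation supplies judgement, and neither alone looks smart.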
A couple of decades earlier we had been told that if ever a computer beat a human world chess champion, the age of intelligent machines would have arrived. One did, and it hadn't. And it hasn't arrived today, when DeepMind's latest learning algorithms allow computers to teach themselves to play arcade games at superhuman levels.
What we do have are programs and machines that are exquisitely designed for particular tasks — achieving a level of performance through brute force and insight that makes them look smart. And that brings us to that basic feature of human design. We have evolved to see intelligence, other minds and agency in the world around us. A three-year-old child tows a piece of paper around and treats it as their pet. MIT students sat entranced by a crude humanoid robot that could raise its eyebrows, frown or look sad — a simple set of rules producing responses triggered by the students' own behaviour. Or back to Kasparov, who was convinced that the programme that beat him at chess was reading his mind.
The reality is no less exciting. What we are witnessing is the emergence of Ambient Intelligence and Augmented Intelligence. These other senses of the abbreviation AI have been brought about by encapsulating and compiling narrow components of artificial intelligence — programs crafted for a particular task and niche — into devices, the Internet of Things and the Web.
Augmented Intelligence occurs when we use our global infrastructure of data and machine processing to connect thousands, even millions, of humans. Witness the recent response to the Nepal earthquake. This is a genuinely exciting prospect, where together we solve problems that are beyond any one individual or organisation.
In the meantime there is no doubt we should take the ethics of AI very seriously. There is danger from complex systems with AI algorithms at their heart. The danger lies in not having humans in the loop to monitor, and if necessary override, the system's behaviour. It lies in not building clear rules of engagement into these systems: principles that embody the highest respect for human life and safety.
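A hedged sketch of what "rules of engagement" plus a human in the loop might look like in code. The `risks_human_harm` flag and the approval callback are illustrative assumptions, not any real safety framework.

```python
def is_safe(action):
    # Rules of engagement built into the system: reject anything
    # flagged as risking harm to a person.
    return not action.get("risks_human_harm", False)

def execute(action, human_approve):
    # Clearly safe actions proceed automatically.
    if is_safe(action):
        return "executed"
    # Anything else is escalated: a human must explicitly approve,
    # and can override the system by declining.
    return "executed" if human_approve(action) else "blocked"
```

The point is architectural rather than algorithmic: the override path exists by construction, so no autonomous decision can bypass it.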
Isaac Asimov, who saw the future better than most, gave us a first law of robotics — a robot may not injure a human being or, through inaction, allow a human being to come to harm. This is a great principle to compile into any complex system.