Stephen Hawking has reiterated his concerns that the rise of powerful artificial intelligence (AI) systems could spell the end for humanity. Speaking at the launch of the University of Cambridge’s Centre for the Future of Intelligence on 19 October, he did, however, acknowledge that AI equally has the potential to be one of the best things that could happen to us.

By Bradley Love, UCL

So are we on the cusp of creating super-intelligent machines that could put humanity at existential risk? There are those who believe that AI will be a boon for humanity, improving health services and productivity as well as freeing us from mundane tasks. However, the most vocal leaders in academia and industry are convinced that the danger of our own creations turning on us is real. For example, Elon Musk, founder of Tesla Motors and SpaceX, has set up a billion-dollar non-profit company, with contributions from tech titans such as Amazon, to prevent an evil AI from bringing about the end of humanity. Universities such as Berkeley, Oxford and Cambridge have established institutes to address the issue. Luminaries like Bill Joy, Bill Gates and Ray Kurzweil have all raised the alarm. Listening to them, it seems the end may indeed be nigh unless we act before it’s too late.
The role of the tech industry
Or could it be that science fiction and industry-fuelled hype have simply overcome better judgement? The cynic might say that the AI doomsday vision has taken on the trappings of a religion. Of course, doomsday visions usually come with a path to salvation. Accordingly, Kurzweil claims we will soon be virtually immortal thanks to nanobots that will digitise our memories. And Musk recently proclaimed it a near certainty that we are simulations within a computer, akin to The Matrix, offering the possibility of a richer encompassing reality in which our “programs” can be preserved and reconfigured for centuries.
Underneath the hype
The basic technologies, such as those recently employed by Google’s DeepMind to defeat a human expert at the game Go, are simply refinements of technologies developed in the 1980s. There have been no qualitative breakthroughs in approach. Instead, performance gains are attributable to larger training sets (also known as big data) and increased processing power.

What is unchanged is that most machine systems work by maximising some kind of objective. In a game, the objective is simply to win, which is formally defined (for example, checkmate in chess). This is one reason why games (checkers, chess, Go) are AI mainstays – it’s easy to specify the objective function. In other cases, it may be harder to define the objective, and this is where AI could go wrong; the short sketch below makes this concrete. However, AI is more likely to go wrong for reasons of incompetence rather than malice.

For example, imagine that the US nuclear arsenal during the Cold War was under the control of an AI designed to thwart a sneak attack by the Soviet Union. Through no action of the Soviet Union, a nuclear reactor in the arsenal melts down and the power grid temporarily collapses. The AI’s sensors detect the disruption and fallout, leading the system to infer that an attack is underway. The president instructs the system in a shaky voice to stand down, but the AI takes the troubled voice as evidence that the president is being coerced. Missiles released. End of humanity.

The AI was simply following its programming, which led to a catastrophic error. This is exactly the kind of deadly mistake that humans almost made during the Cold War. Our destruction would be attributable to our own incompetence rather than to an evil AI turning on us – no different from an autopilot malfunctioning on a jumbo jet and sending its unfortunate passengers to their doom. In contrast, human pilots have purposefully killed their passengers, so perhaps we should welcome self-driving cars.
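To make the objective-maximisation point concrete, here is a minimal sketch in Python. It is an illustration written for this article, not code from DeepMind or any real system: the action names and scores are invented, and the “objective” functions are hand-written stand-ins for whatever a real system would optimise.

```python
# Minimal sketch (hypothetical illustration, not a real system) of the idea
# that most machine systems simply maximise whatever objective they are given.

def choose_action(actions, objective):
    """Greedily pick the action with the highest objective value."""
    return max(actions, key=objective)

# In a game, the objective is formally defined: winning scores 1, losing 0.
def game_objective(action):
    scores = {"blunder": 0.0, "solid_move": 0.6, "winning_move": 1.0}
    return scores[action]

print(choose_action(["blunder", "solid_move", "winning_move"], game_objective))
# -> winning_move: the objective is easy to specify, so games suit AI well.

# Outside games, the objective is often a proxy for what we actually want.
# A system that faithfully maximises a badly chosen proxy misfires through
# incompetence in the specification, not malice in the machine.
def proxy_objective(action):
    # Hypothetical proxy: "respond strongly to any detected disruption."
    response_strength = {"stand_down": 0.1, "raise_alert": 0.4, "retaliate": 0.9}
    return response_strength[action]

print(choose_action(["stand_down", "raise_alert", "retaliate"], proxy_objective))
# -> retaliate: the maximiser did its job; the error lives in the objective.
```

Note that nothing in choose_action changes between the two cases; swapping a well-specified objective for a crude proxy is all it takes to turn competent maximisation into a catastrophic decision.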

- Bradley Love, Professor of Cognitive and Decision Sciences, UCL
- This article was originally published on The Conversation. Read the original article.