Stephen Hawking: Humanity could be ‘infinitely helped’ by AI — or destroyed by it

Potential, perils and pitfalls of artificial intelligence

Stephen Hawking says that over his lifetime he has seen “very significant societal changes,” and that the rise of artificial intelligence ranks among the most significant of them.

“In short, I believe the rise of powerful AI will be either the best or the worst thing ever to happen to humanity,” the physicist said in a video address originally filmed for GMIC’s Beijing conference earlier this year and also screened today for the GMIC Sydney conference.

“I have to say now that we do not yet know which, but we should do all we can to ensure its future development benefits us and our environment,” Hawking said.

AI research and development is taking place at breakneck speed, and Hawking called for a focus not just on making AI “more capable” but on “maximising its societal benefit”.

“Everything that civilisation has to offer is a product of human intelligence and I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer,” Hawking said.

“It therefore follows that computers can, in theory, emulate human intelligence and exceed it. But we don’t know. So we cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed.”

“While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans,” he said.

AI could “take off on its own” and redesign itself at an ever-increasing pace, while humans remain limited to the slow pace of biological evolution and “would be superseded”.

In addition to causing significant economic disruption by replacing millions of jobs, advanced AI could in the future develop a “will of its own” that might bring it into conflict with humanity.

Although generally an optimist when it comes to humans, Hawking said he is less certain than some about AI’s potential to solve the world’s problems.

In 2015, Hawking, along with a range of tech luminaries such as Elon Musk, signed an open letter issued by the Future of Life Institute that outlined a range of research priorities to help ensure that AI development is a net positive for humanity. The institute is dedicated to mitigating existential risks to humanity.

The letter “called for concrete research on how we could prevent potential problems while also reaping the potential benefits AI offers us and is designed to get AI researchers and developers to pay more attention to AI safety,” Hawking said.

“In addition, for policy makers and the general public, the letter is meant to be informative but not alarmist.”

“We think it is very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues,” Hawking said.

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the letter states.

“The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

There are both short-term and long-term considerations relating to AI research, Hawking said.

Some short-term concerns relate to autonomous vehicles, including civilian drones and self-driving cars. “For example, a self-driving car may in an emergency have to decide between a small risk of a major accident and a large probability of a small accident,” he said.
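Hawking’s example is, at bottom, a comparison of expected costs. A minimal sketch of that arithmetic follows; the probabilities and severity scores are entirely hypothetical, invented here for illustration and not drawn from his talk.

```python
# Toy expected-cost comparison for the dilemma Hawking describes.
# All probabilities and severity scores below are hypothetical,
# chosen only to illustrate the arithmetic; a real planner would
# be far more elaborate.

def expected_cost(probability: float, severity: float) -> float:
    """Expected cost = probability of the accident * its severity."""
    return probability * severity

# Option A: small risk of a major accident.
major = expected_cost(probability=0.01, severity=1000.0)  # 10.0

# Option B: large probability of a small accident.
minor = expected_cost(probability=0.80, severity=20.0)    # 16.0

# Under these made-up numbers, the small risk of a major accident has
# the lower expected cost -- but the choice flips as soon as the
# estimates change, which is exactly why the decision is hard.
print(f"major-accident option: {major}, minor-accident option: {minor}")
print("lower expected cost:", "A" if major < minor else "B")
```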

“Other concerns relate to lethal intelligent autonomous weapons. Should they be banned? If so, how should autonomy be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned?”

In July 2015, Hawking and others signed another open letter backed by the Future of Life Institute that warned of the dangers of autonomous weapons systems.

“Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” the letter stated.

Hawking said that other short-term concerns include the ability of AI systems to interpret “large surveillance data sets”, and the challenge of managing the displacement of jobs by AI.

“Long-term concerns centre primarily on the potential loss of control of AI systems via the rise of super-intelligences that do not act in accordance with human wishes, and on the threat such powerful systems would pose to humanity,” Hawking said.

“Are such dystopic outcomes possible? If so, how might these situations arise?”

“What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous super-intelligence or the occurrence of an intelligence explosion?”

“Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this,” he said. “Therefore more research is necessary to find and validate a robust solution to the control problem.”
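To make the inadequacy concrete, here is a minimal sketch of action selection under a “simple utility function” of the kind Hawking mentions. The actions, utility scores and “unmodelled harm” values are invented for this illustration; the point is only that the agent maximises the number it is given and is blind to anything the function leaves out.

```python
# Illustrative sketch of action selection with a simple utility
# function. The actions, utilities and "unmodelled harm" values
# are invented for this example.

# action -> (utility as scored by the designer's objective,
#            harm the objective fails to capture)
actions = {
    "conservative_plan": (5.0, 0.0),
    "aggressive_plan": (9.0, 7.0),  # scores higher, but has side effects
}

# A simple utility maximiser sees only the first number...
chosen = max(actions, key=lambda a: actions[a][0])
print("agent picks:", chosen)  # -> aggressive_plan

# ...so any harm omitted from the utility function is invisible to it.
# The gap between the stated objective and what people actually want
# is one framing of the "control problem" in the quote above.
```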

AI as a field is seeing enormous levels of investment, Hawking said.

“The achievements we have seen so far will surely pale against what the coming decades will bring, and we cannot predict what we might achieve when our own minds are amplified by AI,” he said.

“Perhaps with the tools of this new technological revolution we will be able to undo some of the damage done to the natural world by the last one, industrialisation.

“Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation. But it could also be the last unless we learn how to avoid the risks.”

In October 2016, Hawking opened a new research centre based at the University of Cambridge that is attempting to tackle some of the questions raised by the rapid pace of development in AI.

The Leverhulme Centre for the Future of Intelligence describes its aims as building “a new interdisciplinary community of researchers, with strong links to technologists and the policy world” to “work together to ensure that we humans make the best of the opportunities of artificial intelligence as it develops over coming decades.”

“We spend a great deal of time studying history, which, let's face it, is mostly the history of stupidity,” Hawking said.

“We are aware of the potential dangers but I am at heart an optimist and believe that the potential benefits of creating intelligence are huge,” he said.