AI researchers say Elon Musk's fears 'not completely crazy'

High-tech entrepreneur Elon Musk made headlines when he said artificial intelligence research is a danger to humanity, but researchers from some of the top U.S. universities say he's not so far off the mark.

"At first I was surprised and then I thought, 'this is not completely crazy,' " said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. "I actually do think this is a valid concern and it's really an interesting one. It's a remote, far future danger but sometime we're going to have to think about it. If we're at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we're doing."

Musk, best known as the CEO of electric car maker Tesla Motors and as CEO and co-founder of SpaceX, caused a stir after he told an audience at an MIT symposium that artificial intelligence (AI), and research into it, poses a threat to humans.

"I think we should be very careful about artificial intelligence," Musk said when answering a question about the state of AI. "If I were to guess at what our biggest existential threat is, it's probably that... With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out."

He added that there should be regulatory oversight -- at the national and international level -- to "make sure we don't do something very foolish."

Musk's comments came after he tweeted in early August that AI is "potentially more dangerous than nukes."

His comments brought to mind images from science fiction like The Terminator and Battlestar Galactica, in which robots, stronger and more adaptable than humans, throw off their human-imposed shackles and turn on people.

The statements come from the man behind Tesla Motors, a company that has developed an Autopilot feature for its dual-motor Model S sedan. The Autopilot software is designed to keep the car within its lane and to manage its speed by reading road signs.

Analysts and scientists disagree on whether this is artificial intelligence. Some say it's not quite AI technology but is a step in that direction, while others say the autonomy aspect of it goes into the AI bucket.

Last month, Musk, along with Facebook co-founder Mark Zuckerberg and actor and entrepreneur Ashton Kutcher, teamed up to make a $40 million investment in Vicarious FPC, a company that claims to be building the next generation of AI algorithms.

Musk told a CNN.com reporter that he made the investment "to keep an eye" on AI researchers.

For Sonia Chernova, director of the Robot Autonomy and Interactive Learning lab in the Robotics Engineering Program at Worcester Polytechnic Institute, it's important to distinguish between different levels of artificial intelligence.

"There is a concern with certain systems, but it's important to understand that the average person doesn't understand how prevalent AI is," Chernova said.

She noted that AI is already used in email to filter out spam, that Google uses it for its Maps service, and that apps that make movie and restaurant recommendations rely on it as well.

"There's really no risk there," Chernova said. "I think [Musk's] comments were very broad and I really don't agree there. His definition of AI is a little more than what we really have working. AI has been around since the 1950s. We're now getting to the point where we can do image processing pretty well, but we're so far away from making anything that can reason."

She said researchers might be as much as 100 years from building an intelligent system.

Other researchers disagree on how far they might be from creating a self-aware, intelligent machine. At the earliest, it might be 20 years away, or it could be 50 or even 100 years away.

The one point they agree on is that it's not happening tomorrow.

However, that doesn't mean we shouldn't be thinking about how to handle the creation of sentient systems now, said Yaser Abu-Mostafa, professor of electrical engineering and computer science at the California Institute of Technology.

Scientists today need to focus on creating systems that humans will always be able to control.

"Having a machine that is evil and takes over... that cannot possibly happen without us allowing it," said Abu-Mostafa. "There are safeguards... If you go through the scenario of a machine that wants to take over or destroy the world, it's a nice science-fiction scenario, as long as we don't allow a system to control itself."

He added that some concern about AI is justified.

"Take nuclear research. Clearly it's very dangerous and can lead to great harm but the danger is in the use of the results not in the research itself," Abu-Mostafa said. "You can't say nuclear research is bad so you shouldn't do it. The idea is to do the research and understand the facts and then have controls in place so the research is not abused. If we don't do the research, others will do the research."

The nuclear research program offers another lesson, according to Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley.

Russell, whose research focuses on robotics and artificial intelligence, said that, as in other fields, AI researchers have to take risk into account, because there is risk involved, if not today then likely someday.

"The underlying point [Musk] is making is something that dozens of people have made since the 1960s," Russell said. "If you build machines that are more intelligent than people, you might not be able to control them. Sci-fi says they might develop some evil intent or they might develop a consciousness. I don't see that being an issue, but there are things we don't have a good handle on."

For instance, Russell noted that as machines become more intelligent and more capable, they also need to understand human values, so that when they act on humans' behalf they don't harm people.

The Berkeley scientist wants to make sure that AI researchers consider this as they move forward. He's communicating with students about it, organizing workshops and giving talks.

"We have to start thinking about the problem now," Russell said. "When you think nuclear fusion research, the first thing you think of is containment. You need to get energy out without creating a hydrogen bomb. The same would be true for AI. If we don't know how to control AI... it would be like making a hydrogen bomb. They would be much more dangerous than they are useful."

To create artificial intelligence safely, Russell said researchers need to begin having the necessary discussions now.

"If we can't do it safely, then we shouldn't do it," he said. "We can do it safely, yes. These are technical, mathematical problems and they can be solved but right now we don't have that solution."
