Google: We won’t develop deadly AI weapons, but will help the military

Outlines AI principles after fears company was becoming a bit evil

Google CEO Sundar Pichai has vowed that the company will not deploy artificial intelligence for use in deadly weapons and laid out a set of principles the company will follow when developing AI applications.

AI developed by Google will not be used in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”, Pichai wrote.

Pichai also ruled out the company’s AI being used in technologies “that cause or are likely to cause overall harm” except when the “benefits substantially outweigh the risks”. Nor will AI be applied in technologies that gather or use information for surveillance in violation of internationally accepted norms.

However, Pichai said the company would continue to work with the military.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” he wrote. 

The ‘Our principles’ post comes after thousands of Google employees signed a petition demanding the company withdraw from its work with the US Department of Defense to develop computer vision algorithms to analyse drone footage.

The Pentagon contract – Project Maven – was made public by Gizmodo in March; the outlet later reported that around a dozen Google staff had resigned in protest.

The petition demanded Google “draft, publicise and enforce a clear policy” that the company would never “build warfare technology”.

“Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable,” the open letter to Pichai states.

It is unclear whether the principles posted yesterday rule out Google taking on contracts similar to Project Maven. In June, Google Cloud CEO Diane Greene argued that “saving lives was the overarching intent” of the drone computer vision work, which would perhaps pass the ‘greater good’ qualifier of the principles. A Google employee reportedly told Gizmodo the principles were “a hollow PR statement”.

The announcement was nevertheless applauded by campaign groups hoping to curtail the use of autonomous and AI-driven weapons systems.

The Campaign to Stop Killer Robots, which said it had been in dialogue with Google about the issue, called it a “welcome commitment”.

“Governments should heed this latest expression of tech sector support and start negotiating new international law to ban fully autonomous weapons now,” group coordinator Mary Wareham tweeted.

UNSW Professor of AI Toby Walsh, who leads the anti-autonomous weapons movement in Australia, told Computerworld he was very pleased with the commitment.

“Google is living up to their motto of doing the right thing. We should applaud this very specific, clear and forthright stand on the use of AI in weapons. The pressure is now on Amazon and Facebook and others to follow suit,” he said.

“All these things are a work in progress but this is a very important first step. Google are among the biggest companies working in AI, and have made clear their future is in AI. So for them to respond in this way, it’s a very important precedent for the tech industry as a whole.”
