Google rules out using artificial intelligence for weapons
SAN FRANCISCO
Google said on June 7 that it would not use artificial intelligence for weapons or to “cause or directly facilitate injury to people,” as it unveiled a set of principles governing its use of the technology.
Chief executive Sundar Pichai, in a blog post outlining the company’s artificial intelligence policies, noted that even though Google won’t use AI for weapons, “we will continue our work with governments and the military in many other areas” such as cybersecurity, training, or search and rescue.
The announcement comes as Google faces an uproar from employees and others over a contract with the U.S. military, which the California tech giant said last week it would not renew.
Pichai set out seven principles for Google’s application of artificial intelligence, or advanced computing that can simulate intelligent human behavior.
He said Google is using AI “to help people tackle urgent problems” such as prediction of wildfires.
Google will avoid the use of any technologies “that cause or are likely to cause overall harm,” Pichai wrote.
That means steering clear of “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and systems “that gather or use information for surveillance violating internationally accepted norms.”
Google also will ban the use of any technologies “whose purpose contravenes widely accepted principles of international law and human rights,” Pichai said.