Google Bars Uses of Its Artificial Intelligence Tech in Weapons

Google will not allow its artificial intelligence software to be used in weapons or unreasonable surveillance efforts under new standards for its business decisions in the nascent field, the Alphabet unit said Thursday.

The restriction could help Google management defuse months of protest by thousands of employees against the company’s work with the U.S. military to identify objects in drone video.

Google instead will seek government contracts in areas such as cybersecurity, military recruitment and search and rescue, Chief Executive Sundar Pichai said in a blog post Thursday.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” he said.

Breakthroughs in the cost and performance of advanced computers have carried AI from research labs into industries such as defense and health in the last couple of years. Google and its big technology rivals have become leading sellers of AI tools, which enable computers to review large datasets to make predictions and identify patterns and anomalies faster than humans could.

But the potential of AI systems to pinpoint drone strikes better than military specialists or identify dissidents from mass collection of online communications has sparked concerns among academic ethicists and Google employees.

A Google official, requesting anonymity to discuss the sensitive issue, said the company would not have joined the drone project last year had the principles already been in place. The work comes too close to weaponry, even though the focus is on non-offensive tasks, the official said Thursday.

Google plans to honor its commitment to the project through next March, a person familiar with the matter said last week.

More than 4,600 employees petitioned Google to cancel the deal sooner, with at least 13 employees resigning in recent weeks in protest.

A nine-employee committee drafted the AI principles, according to an internal email seen by Reuters.

The Google official described the principles as a template that any software developer could put into immediate use. Though Microsoft and others released AI guidelines earlier, the AI community has followed Google’s efforts closely because of the internal pushback against the drone deal.

Google’s principles

Google’s principles say it will not pursue AI applications intended to cause physical injury, that tie into surveillance “violating internationally accepted norms of human rights,” or that present greater “material risk of harm” than countervailing benefits.

“The clear statement that they won’t facilitate violence or totalitarian surveillance is meaningful,” University of Washington technology law professor Ryan Calo tweeted Thursday.

Google also called on employees and customers developing AI “to avoid unjust impacts on people,” particularly around race, gender, sexual orientation, and political or religious belief.

The company recommended that developers avoid launching AI programs likely to cause significant damage if attacked by hackers because existing security mechanisms are unreliable.

Pichai said Google reserved the right to block applications that violated its principles. The Google official acknowledged that enforcement would be difficult because the company cannot track each use of its tools, some of which can be downloaded free of charge and used privately.

Google’s decision to restrict military work has inspired criticism from members of Congress. Representative Pete King, a New York Republican, tweeted Thursday that Google not seeking to extend the drone deal “is a defeat for U.S. national security.”
