Google Bars Its AI Tech in Weapons, Surveillance 

At its heart, AI (artificial intelligence) is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound.

Google uses AI to make products more useful—from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy.

“We recognize that such powerful technology raises equally powerful questions about its use,” Google CEO Sundar Pichai wrote on the company’s blog.

How AI is developed and used will have a significant impact on society for many years to come.

“As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward,” he explains.

“These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”

Google said it will not pursue, design, or deploy AI in the following application areas:

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” said Pichai.

“These collaborations are important, and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe. We believe these principles are the right foundation for our company and the future development of AI.”
