San Francisco: After facing backlash over its involvement in an Artificial Intelligence (AI)-powered Pentagon project “Maven”, Google CEO Sundar Pichai has emphasised that the company will not work on technologies that cause or are likely to cause overall harm.
About 4,000 Google employees had signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology”.
Following the backlash, Google decided not to renew the "Maven" AI contract with the US Defence Department after it expires in 2019.
“We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” Pichai said in a blog post late Thursday.
"We will not pursue AI in technologies that gather or use information for surveillance violating internationally accepted norms," the Indian-born CEO added.
“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas like cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” Pichai noted.
Google will incorporate its privacy principles in the development and use of its AI technologies, providing appropriate transparency and control over the use of data, Pichai emphasised.
In a blog post describing seven “AI principles”, he said these are not theoretical concepts but “concrete standards that will actively govern our research and product development and will impact our business decisions”.
“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right,” Pichai posted.
“We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief,” Pichai noted.
Pichai said Google will design AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research.
“We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control,” he added.
Source: Gadgets Now