Elon Musk And Google DeepMind Signing Pledge against AI Weapons is a Step Towards Ethical AI

Top tech companies and individuals pledge to ensure that their artificial intelligence technology will not be used for making autonomous weapons.

Vishal Mathur |

Updated:July 18, 2018, 3:00 PM IST
Elon Musk. Representative Image. (Image: REUTERS/Stringer/Files)
Top tech personalities and companies, including Elon Musk and Google’s AI subsidiary DeepMind, have signed a pledge not to make autonomous, artificially intelligent (AI) weapons. The pledge was signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, organized by the Future of Life Institute, a research institute that aims to “mitigate existential risk” to humanity. More than 2,400 individuals and 160 companies have signed the pledge, including Tesla’s Elon Musk and DeepMind co-founders Demis Hassabis, Shane Legg and Mustafa Suleyman.

“There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual,” says the pledge.

The pledge comes as a handful of companies face backlash over their technologies being used for weapons by government agencies and law enforcement groups. Google in particular came under fire for its Project Maven contract with the Pentagon, which provided AI technology to the US military to help flag drone images requiring additional human review and to improve targeting capabilities for drone strikes. In April, thousands of Google employees, including dozens of senior engineers, signed a letter protesting the company’s involvement in what they called the “business of war”.

"Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," the pledge says, adding, "Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security."

Even before the signing of this pledge, tech companies had been taking steps to stay within ethical boundaries with regard to the AI technology they develop and share.

Google has already released its own set of principles to guide the company's ethics on AI technology. Google CEO Sundar Pichai laid out the company’s stance, saying that from now on it won't design or deploy AI for weapons, surveillance purposes, or technology “whose purpose contravenes widely accepted principles of international law and human rights.”

Microsoft, after facing a similar backlash, has clarified that its work with Immigration and Customs Enforcement (ICE) in the US is limited to email, calendar, messaging and document management, and doesn't include any facial recognition technology. The company has also implemented a set of guiding principles for its facial recognition work.
