Elon Musk, DeepMind Co-Founders and other AI Leaders Pledge Against Developing Autonomous Weapons

Aishwarya Singh Last Updated : 07 May, 2019

Overview

  • Top AI leaders, including Elon Musk and three co-founders of DeepMind, have pledged not to develop any weapons using AI
  • Weapons that can select and engage targets without human supervision will be considered autonomous weapons
  • More than 2,400 AI researchers and around 160 companies have signed the pledge


Introduction

In the past few years, artificial intelligence has come a long way. From autonomous robots to AI bots that can talk like humans, we have, knowingly and unknowingly, created a whole new world that can work (mostly) without human assistance. It’s an exciting yet scary time to be alive!

Top AI researchers have taken a step forward to ensure this technology is not turned against us. Some of the top AI leaders, including Elon Musk, three co-founders of Google’s DeepMind, and Stuart Russell, have pledged not to develop autonomous AI weapons. This covers any AI system that has the ability to ‘select’ and ‘engage’ targets without human supervision.

The pledge was published on 18th July 2018 at the International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm. It was organized by the Future of Life Institute, which is concerned about the growth of lethal autonomous weapons. The signatories agreed that machines should not be allowed to make decisions pertaining to life and death. More than 2,400 researchers and around 160 companies have signed the pledge.

Until now, no hard limits have been imposed on the development of AI for military use. The technology for building AI weapons has matured, while various attempts to regulate autonomous weapons have proven largely ineffective. Recently, Google found itself in the middle of a controversy after several employees resigned when it was revealed that the company was assisting a military project that used AI. The pledge aims to discourage the development of AI-enhanced killer robots.


Our take on this

This is a much-needed step by researchers and AI practitioners. Emerging technologies and the development of autonomous robots amaze us, but they also raise questions about safety and ethics. Recently, we saw an AI system that can dream on its own and a bot that could talk exactly like a human, and the possibilities are tantalizing and endless.

This pledge helps draw a line, which should go a long way towards regulating AI and preventing its use for the wrong reasons.


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!


An avid reader and blogger who loves exploring the endless world of data science and artificial intelligence. Fascinated by the limitless applications of ML and AI; eager to learn and discover the depths of data science.
