While it may not have been the best week for Elon Musk, it has been a good week for the future of mankind, as the Tesla founder has signed a pledge not to participate in or support developing lethal autonomous weapons: i.e. killer robots.
The pledge's signatories also include the founder of Skype and the founders of DeepMind (Google’s AI research lab), as well as a range of experts and leaders in the industry. More than 170 organisations and 2,464 people signed the pledge, fronted by the Future of Life Institute.
“There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable,” the pledge reads. “There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilising for every country and individual.”
Musk has spoken openly about his reservations surrounding the future of AI and autonomous weaponry, calling it an “existential threat to humanity”. He founded OpenAI, a non-profit organisation that explores safer AI research – recently, bots created by OpenAI beat their human opponents in video games for the first time.
The group of signatories on this new pledge calls for governments worldwide to regulate and restrict the use of such machines, with the specific fear that they would provoke a deadly arms race and war unlike any we’ve known before. “The decision to take human life should never be delegated to a machine,” it continues. The pledge outlines the moral issues that come with autonomous weapons, and notes that other technologies, such as surveillance and data systems – which come with their own set of issues – would have to be developed to match.