For the second time in 12 months, technology industry leaders have signed a pledge not to participate in the manufacture, trade or use of lethal autonomous weapons.
The latest pledge was signed by 150 companies and more than 2,400 people from 90 countries at the 2018 International Joint Conference on Artificial Intelligence in Stockholm, Sweden.
A ‘who’s who’ of CEOs, engineers and scientists from the tech industry – including Google DeepMind, the XPRIZE Foundation and Elon Musk – signed the pledge, which was organised by the Future of Life Institute.
Almost a year ago to the day, AI and robotics experts signed an open letter to the United Nations to halt the use of autonomous weapons they say threaten a ‘third revolution in warfare.’
LAWS or ‘killer robots’ are weapons that can identify, target and kill a person without a human ‘in the loop.’ That is, no person makes the final decision to authorise lethal force: the decision and authorisation about whether or not someone will die is left to the autonomous weapons system. This does not include current drones, which are under human control; nor autonomous systems that merely defend against other weapons.
The University of New South Wales’ professor of artificial intelligence, Toby Walsh, highlighted the ethical issues surrounding lethal autonomous weapons systems (LAWS): “We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organisations to pledge to ensure that war does not become more terrible in this way.”
Walsh was also part of a group of Australian researchers in robotics and AI who last November called on prime minister Malcolm Turnbull to take a stand against weaponising AI. The researchers signed a letter asking the Turnbull government to become the 20th country to call for a ban on lethal autonomous weapons at a forthcoming United Nations Conference on the Convention on Certain Conventional Weapons.
The full text of the pledge signed today is below:
Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.
There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.
Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.
We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.