“Lethal autonomous weapons threaten to become the third revolution in warfare [after gunpowder and nuclear weapons]. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
This dire warning comes from a 2017 open letter signed by 115 tech experts, including SpaceX CEO Elon Musk and Mustafa Suleyman, co-founder of DeepMind, Alphabet’s artificial intelligence company.
Countries with high-tech militaries, particularly the United States, China, Israel, South Korea, Russia, and the United Kingdom, are continuing to develop autonomous systems for military applications.
At present, at least 381 partly autonomous weapon systems have been deployed or are under development in 12 countries.
What is a fully autonomous weapons system?
In a fully autonomous weapons system (AWS), there is no significant human input into critical decisions, such as the decision to target and kill people. Once programmed, the weapons system takes action on its own.
Even when the system is not yet fully autonomous, the push toward autonomy can render human control essentially meaningless.
What are the legal and ethical concerns?
Because humanitarian law was created to apply to human beings, it is not at all clear who would be held legally responsible for an attack by an autonomous weapons system: the manufacturer, the programmer, the commander, or the robot itself.
Additionally, moral and ethical questions arise with the use of an autonomous weapons system:
- What are appropriate levels of human involvement?
- How much human control is being exercised over critical decisions such as targeting and killing?