A pledge against the use of killer robots, drones, and other military weapons that, once activated, make targeting decisions on their own. The Lethal Autonomous Weapons Pledge was signed in 2018 by more than two thousand signatories, including Demis Hassabis of Google DeepMind and Elon Musk of SpaceX. Its basic premise is that a machine must never make the decision to take a human life.
Although the U.S. military was not asked to sign the pledge, it operates under a similar directive requiring that any decision to take human life be made by humans, or that humans be able to intervene and override autonomous decisions made by weapons when necessary.
What If the Other Side Does Not Agree?
The dilemma is this: if an enemy does not abide by the principles of the pledge, what is the point? If the enemy's machines can autonomously destroy human life, and do so faster than humans can respond, would it not be prudent to counter them with equally capable systems? Even as the pledge was made, signatories acknowledged that it might someday be rescinded.
Although there continue to be calls to negotiate and approve such a ban, as of 2024 there was no international agreement banning autonomous weapons. See
lethal autonomous weapons,
AI anxiety and
smart weapon.