The Pentagon’s alleged ‘Killer Robots’ are just a fantasy


SUMMARY
The U.S. Department of Defense just outlined its strategy to reap the benefits of artificial intelligence, and the result is less Terminator and more Alexa. Rather than deploying weapons that know how to target and kill America's enemies, the Pentagon's AI strategy focuses on more mundane work here at home instead of blowing up exotic locations abroad.
That still requires a human at the stick.
The autonomous weapons that the Pentagon would very much like to develop one day won't be all that autonomous anyway. Even the military's latest artificial intelligence directives state that commanders will exercise appropriate levels of human judgment over the use of force.
According to the DoD's report, there are five key points the Pentagon wants to address, each more boring than the last:
- Delivering AI-enabled capabilities that address key missions.
- Scaling AI's impact across DOD through a common foundation that enables decentralized development and experimentation.
- Cultivating a leading AI workforce.
- Engaging with commercial, academic, and international allies and partners.
- Leading in military ethics and AI safety.
It's only a matter of time before "Bagram Batman" addresses artificial intelligence.
Since China and Russia are making "significant investments" in artificial intelligence for military applications, the report begins, the U.S. must invest as well to maintain its global military supremacy. The only difference, the strategy insists, will be America's responsibility for protecting human rights and international norms as it develops robotic weapons for long-term security, peace, and stability.
The goal of American military AI is to streamline data-intensive processes and decision-making, cut labor and maintenance inefficiencies to lower operational costs, and make military planning more effective while reducing civilian casualties. Everything from intelligence gathering to aircraft maintenance could be handled by AI-driven prediction.
"Hold on guys, the computer sees the problem."
This latest move on artificial intelligence comes as Russia and the United States jointly blocked a United Nations ban on autonomous weapons: ones that can kill without a human's judgment behind a trigger somewhere. The UN's Convention on Certain Conventional Weapons was set to place autonomous weapons alongside blinding lasers and the napalming of civilian targets, according to the Campaign to Stop Killer Robots, an NGO that is mustering support for a ban on the weapons.
Russia is already testing autonomous tanks in Syria, China has developed an unmanned submarine, and the United Kingdom wants a squadron of drones to fight in the skies above Britain. Yet the UN hasn't even been able to bring autonomous weapons to a vote, and any ban would require unanimous agreement.
To get from drone strikes to "Killer Robots," however, American AI technology has a long way to go. Even the biggest U.S. tech companies aren't stepping up to help develop the required technology. Until then, AI diagnostics will have to do.