The U.S. military kindly asks you to trust its death robots

Blake Stilwell
Apr 2, 2018 9:37 AM PDT


In the opening days of the 2003 Iraq War, automated Patriot missile batteries downed a British Tornado fighter, killing both crew members. A few days later, another Patriot shot down an American F/A-18 over southern Iraq, killing that pilot. The people manning the automated missile launchers trusted that the system would work as advertised. Why didn't it?


Benjamin Lambeth wrote in The Unseen War, his exhaustive account of the Iraq air war, that "many allied pilots believed that the Patriot posed a greater threat to them than did any [surface-to-air missile] in Iraq's inventory."

"The Patriots scared the hell out of us," one F-16 pilot remarked. In one case an Air Force pilot actually fired a HARM anti-radar missile at a Patriot battery, destroying its radar dish. No one in the Patriot crew was hurt, and the airman said, "they're now down one radar. That's one radar they can't target us with any more."

When asked whether the error was human or mechanical, Lt. Gen. Ronald Kadish, then-director of the Pentagon's Missile Defense Agency, said, "I think it may be both."

A software malfunction in 2008 caused U.S. Army robots to aim at friendly targets. No one was killed because a human was at the trigger. Those robots were still in Iraq with troops as of 2009.

An analysis of the U.S. Navy's own data on the development of automated weapons says "with hundreds of programmers working on millions of lines of code for a single war robot, no one has a clear understanding of what's going on, at a small scale, across the entire code base."

An Air Force unit called the Human Trust and Interaction Branch (that interaction being between humans and automated equipment), based at Wright-Patterson Air Force Base, Ohio, is looking to study this relationship and develop new types of recon and intelligence tools via a proposed research contract called "Trust in Autonomy for Human Machine Teaming."

On FedBizOpps (the official government contracting website, with a domain name as trustworthy as any payday lender's), the Air Force listed a contract for research into understanding "the most significant components driving trust and performance within human-robotic interaction. We need research on how to harness the socio-emotional elements of interpersonal team/trust dynamics and inject them into human-robot teams."

The $7.5 million contract listing continues: "These human-machine teaming dynamics can involve research studying the interplay of individual differences, machine characteristics, robot design, robot interaction patterns, human-machine interaction, and contextual facets of human-machine teaming."

In plain language, the research is focused on how humans use machines, and on when they trust those machines and when they don't. In the cases where Patriot missiles shot down friendly aircraft, an automated system notified the human crews via a popup warning that the machine would fire if no one stopped it. When no one did, the Patriot intercepted the allied aircraft, just as programmed.
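That fire-unless-vetoed default is easy to illustrate. The Python below is a minimal, hypothetical sketch of an engagement loop that launches when a veto window expires; the names and timings (VETO_WINDOW_SECONDS, operator_vetoed, engage) are illustrative assumptions, not the Patriot's actual software.

```python
# Minimal, hypothetical sketch of a "fire unless vetoed" engagement loop.
# Names and timings are illustrative assumptions, not real Patriot code.
import time

VETO_WINDOW_SECONDS = 5.0  # assumed length of the operator's veto window


def operator_vetoed() -> bool:
    """Stand-in for polling the console; True means a human aborts the shot."""
    return False  # in the friendly-fire cases, no one intervened


def engage(track_id: str) -> str:
    """Warn the crew, wait out the veto window, then fire by default."""
    print(f"WARNING: track {track_id} classified hostile; "
          f"firing in {VETO_WINDOW_SECONDS:.0f}s unless aborted.")
    deadline = time.monotonic() + VETO_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if operator_vetoed():
            return "held"      # a human override stops the shot
        time.sleep(0.1)
    return "fired"             # silence is treated as consent


if __name__ == "__main__":
    print(engage("T-042"))     # prints "fired": the absence of a veto is fatal
```

The failure mode lives in the default: a misclassified friendly track plus a silent console equals a launch. Flipping the default so the system holds fire unless a human confirms trades reaction speed for safety, which is precisely the trust tradeoff this research is probing.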

The Air Force contract is another example of the military "not knowing what's going on." As the Air Force explores our trust issues with robots, the Navy is warning us that "early generations of such [automated] systems and robots will be making mistakes that may cost human lives."

Humans do come to trust their machines; previous studies have found that people bond with them. U.S. Army Explosive Ordnance Disposal (EOD) technicians have been found to form emotional attachments to their bomb-disposal robots, awarding them homemade medals and giving them names, ranks, and sometimes funerals. That level of trust could be misplaced as the bots are armed and the stakes of a malfunction grow higher.

Other current automated units in the U.S. military arsenal include Air Force Predator and Reaper drones, the Navy's Phalanx Close-In Weapon System, the Army's tracked, unmanned ground vehicle called TALON (or SWORDS) and the Marines' unmanned ground vehicle called the Gladiator.

USMC Gladiator: Are you not entertained?

Recent news about the increased ability of machines to deceive their human masters, and warnings from the scientific and computing communities about the overdevelopment of artificial intelligence (AI) as a weapon, don't seem to be a concern, even though the Army is developing an automated RoboSniperCopter and is trying to remove humans from the battlefield altogether. A 2013 Gizmodo piece listed the robots debuted for the Army at a Fort Benning "Robotics Rodeo," featuring an entry from the company that brought you the Roomba.

Major commercial and scientific computing experts believe AI weaponry would be the third revolution in warfare, after gunpowder and nuclear arms. An open letter from that community expressed the concern that an unchecked arms race in automated death robots would result in drones becoming the new AK-47 (presumably meaning cheap, deadly, and ubiquitous).

The UN is urging the world's nations to ban the development of automated weapons, citing the "legal and ethical questions" the use of such a weapon would raise. The aforementioned U.S. Navy report recommended building a "warrior's code" into the weapons to keep them from killing us all once Skynet becomes self-aware.

 
