Isobel Asher Hamilton

Former Google engineer warns against building robot weapons

(U.S. Navy photo by Mass Communication Specialist 2nd Class Timothy Walter)

A former Google engineer who worked on the company's infamous military drone project has warned against the building of killer robots.

Laura Nolan had been working at Google for four years when, in 2017, she was recruited to its collaboration with the US Department of Defense, known as Project Maven, according to the Guardian. Project Maven focused on using AI to enhance military drones, building systems that could single out enemy targets and distinguish between people and objects.

Google canned Project Maven after employee outrage: thousands of employees signed a petition against the project, and about a dozen quit in protest. Google allowed the contract to lapse in March 2019. Nolan herself resigned after she became "increasingly ethically concerned" about the project, she said.


Nolan worked as a site reliability engineer, which is why she was recruited to Maven. "Although I was not directly involved in speeding up the video footage recognition I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan," she said, according to The Guardian.

A MQ-9 Reaper US military unmanned aerial vehicle.

(U.S. Air Force photo by Staff Sgt. Brian Ferguson)

Nolan fears that the next step beyond AI-enabled weapons like drones could be fully autonomous AI weapons. "What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed," she said.

She said that any number of unpredictable factors could interfere with a weapon's systems in unforeseen ways: unexpected radar signals, unusual weather, or an encounter with people carrying weapons for reasons other than warfare, such as hunting. "The machine doesn't have the discernment or common sense that the human touch has," she said.

She added that testing would have to take place on the battlefield. "The other scary thing about these autonomous war systems is that you can only really test them by deploying them in a real combat zone. Maybe that's happening with the Russians at present in Syria, who knows? What we do know is that at the UN Russia has opposed any treaty let alone ban on these weapons by the way."

The autonomous ship "Sea Hunter", developed by DARPA: an experimental self-driving warship the US military christened to hunt for enemy submarines.

(U.S. Navy photo by John F. Williams)

Although no country has yet come forward to say it's working on fully autonomous robot weapons, many are building increasingly sophisticated AI into their militaries. The US Navy has a self-piloting warship capable of spending months at sea with no crew, and Israel boasts drones capable of identifying and attacking targets autonomously, although for now they require a human operator to give the go-ahead.

Nolan is urging countries to declare an outright ban on autonomous killer robots, similar to conventions around the use of chemical weapons.

"Very few people are talking about this but if we are not careful one or more of these weapons, these killer robots, could accidentally start a flash war, destroy a nuclear power station and cause mass atrocities," she said.

Business Insider has contacted Nolan for comment.

This article originally appeared on Business Insider.