New guidelines on Artificial Intelligence will not solve the ethics problems

Real Clear Defense
Updated on Oct 30, 2020

The European Commission has just published a white paper setting out guidelines for artificial intelligence. It hopes to 'address the risks associated with certain uses of this new technology,' including concerns around privacy and human dignity.

Meanwhile, the Pentagon is finalizing its own rules on the ethical use of artificial intelligence, following a draft of the rules published in October. Like the European Commission, the Department of Defense is determined to keep people in the loop: 'Human beings should… remain responsible for the development, deployment, use and outcomes of DoD AI systems,' they say.

These are only the latest steps to catch up with the new technology. Oxford University has already launched a special program to keep artificial intelligence on the straight and narrow, while the World Economic Forum and others have highlighted the profound moral implications of AI.

The root of the fear is this: whereas earlier technological advances affected only our actions, artificial intelligence is replacing our thoughts. Labor-saving devices are morphing into decision-making machines, and those machines operate in a moral vacuum.

Until now, the tough calls have fallen to programmers who write algorithms. Although the point of AI is that machines learn for themselves, the course they choose is set from the start by people who write the code.

The stakes become enormous with the sort of automated defense decisions the Pentagon is considering. And as long as the West's main adversaries, Russia and China, are speeding forward with artificial intelligence, NATO countries have to stay ahead. Jack Shanahan, the Pentagon's AI chief, has said that the US is locked 'in a contest for the character of the international order in the digital age.'

While both the Pentagon and the European Commission are right to be alarmed by the profound ethical dimension of artificial intelligence, they are wrong to presume AI raises new ethical problems. It doesn't; it just repaints old ones in technicolor. These problems have never been properly solved, and probably never will be.

Consider three of the most worrisome dilemmas artificial intelligence is said to create.

  1. If forced to choose, should a car driven by AI kill a pregnant woman or two children? The dilemma is no different when there's a human at the wheel.
  2. Should algorithms draw on people's race, gender or religion if doing so makes them more efficient? That question has haunted airline security since 9/11.
  3. When an enemy automates its battlefield machines so it can deploy them more quickly, should we do the same to keep up? This one predates the First World War.

In fact, all the problems that haunt artificial intelligence coders today can be traced back to conundrums that vexed the ancient Greek philosopher Aristotle. The 23 centuries since he died have seen attempts to solve them, and some progress, but we still lack a definitive answer to the fundamental question, 'What should we do?' Neither today's European Commission paper nor the emerging Pentagon proposals will take us closer to a solution.

Some in the tech world, especially those who have cracked previously uncrackable problems, are hoping money and brains will lead to a solution. After all, the determined genius of Newton and Einstein solved physics. Why can't a little investment solve ethics in the same way?

The answer is that ethics is, at heart, a different sort of problem.

When we humans make decisions, we instinctively locate right and wrong in several places at once: in the motives behind the choice, in the type of action we take, and in the consequences we bring about. Only by focusing on a single place in the decision-making process, on consequences, say, can you rank and compare every option; but inevitably you will leave something out.
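
To make that abstract point concrete, here is a toy sketch in Python (the options, criteria and scores are invented for illustration, not drawn from the article or either policy document): a single score, such as consequences alone, always yields a complete ranking, while scoring motive, action and consequence together leaves some options that simply cannot be ranked against each other.

    # Toy sketch: one score gives a complete ranking; several criteria
    # at once give only a partial order. All values are invented.
    options = {
        "A": {"motive": 0.9, "action": 0.2, "consequence": 0.7},
        "B": {"motive": 0.3, "action": 0.8, "consequence": 0.9},
        "C": {"motive": 0.6, "action": 0.6, "consequence": 0.5},
    }
    CRITERIA = ("motive", "action", "consequence")

    # Consequences-only view: every pair is comparable, so a full ranking exists.
    ranked = sorted(options, key=lambda o: options[o]["consequence"], reverse=True)
    print("Ranked by consequences alone:", ranked)  # ['B', 'A', 'C']

    def dominates(x, y):
        # x dominates y if it is at least as good on every criterion
        # and strictly better on at least one.
        return all(options[x][c] >= options[y][c] for c in CRITERIA) and \
               any(options[x][c] > options[y][c] for c in CRITERIA)

    # Three-criteria view: some pairs cannot be ranked either way.
    for x, y in [("A", "B"), ("A", "C"), ("B", "C")]:
        if not dominates(x, y) and not dominates(y, x):
            print(x, "and", y, "are incomparable on all three criteria")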

So, if an AI system were certain what to do when good deeds lead to a bad outcome, or when bad motives help people out, we should be very wary: it would be offering moral clarity when really there is none.

It is just conceivable that AI, rather than being a cause of moral problems, could help solve them. By using big data to anticipate the future and by helping us work out what would happen if everybody followed certain rules, artificial intelligence makes rule- and consequence-based ethics much easier. Applied thoughtfully, AI could help answer some tricky moral quandaries. In a few years, the best ethical advice may even come from an app on our phones.
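
As a loose sketch of what that could look like (everything here, from the rule names to the numbers, is invented for illustration, not taken from any real system): simulate a population in which everybody follows a candidate rule, then compare the outcomes the rules produce.

    import random

    random.seed(0)

    # Toy model: agents hold resources; a rule decides how much an agent
    # transfers when paired with another. All details are hypothetical.
    def simulate(rule, population=1000, rounds=20000):
        wealth = [random.uniform(0.0, 1.0) for _ in range(population)]
        for _ in range(rounds):
            i, j = random.sample(range(population), 2)
            transfer = rule(wealth[i], wealth[j])
            wealth[i] -= transfer
            wealth[j] += transfer
        # Consequence-based summary of "everybody follows this rule".
        return min(wealth), sum(wealth) / population

    def share_surplus(mine, theirs):
        # Candidate rule: give a tenth of the gap whenever you are richer.
        return 0.1 * (mine - theirs) if mine > theirs else 0.0

    def keep_everything(mine, theirs):
        return 0.0

    # "What would happen if everybody followed this rule?"
    for rule in (share_surplus, keep_everything):
        worst_off, average = simulate(rule)
        print(f"{rule.__name__}: worst-off {worst_off:.3f}, average {average:.3f}")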

Both the European Commission's and the Pentagon's approaches leave that possibility open.

It wouldn't mean the profound ethical problems raised by artificial intelligence had been solved, though. They will never be solved, because they do not have a single, certain solution.

This article originally appeared on Real Clear Defense. Follow @RCDefense on Twitter.
