The futurists over at DARPA are pursuing a vision that most of us knew was coming: creating artificial intelligence that can outperform human pilots in dogfights, can withstand more Gs in flight without expensive life-support systems, and can be mass-produced. But it turns out, DARPA doesn’t think dogfights are the real reason the technology is needed.
DARPA is working on “mosaic warfare,” a vision of warfighting that sees complex systems working together to overcome an adversary. Basically, a military force would be deployed across a wide front, but the sensors and command and control would be split across multiple platforms, many of them controlled by artificial intelligence.
So, even if the enemy manages to take out multiple armored vehicles, planes, or other platforms, the good guys would still have plenty of sensors and computing power.
And those remaining platforms would be lethal. The humans making the decisions would be in tanks or other vehicles, and they would have their own weapons as well as control of the dozens of weapons on the AI-controlled vehicles. Think multiple armored vehicles, a couple of artillery platforms, and maybe some drones in the sky.
In this vision of the future, it’s easy to see why dogfighting drones would be valuable. Human pilots could stay relatively safe to the rear while commanding the weapons of those robot dogfighters at the front. But the real reason DARPA wants the robots to be good at dogfighting is just so human pilots will accept them.
A DARPA graphic illustrates how manned and unmanned systems could work together in fighter engagements.
From a DARPA release titled Training AI to Win a Dogfight:
Turning aerial dogfighting over to AI is less about dogfighting, which should be rare in the future, and more about giving pilots the confidence that AI and automation can handle a high-end fight. As soon as new human fighter pilots learn to take-off, navigate, and land, they are taught aerial combat maneuvers. Contrary to popular belief, new fighter pilots learn to dogfight because it represents a crucible where pilot performance and trust can be refined. To accelerate the transformation of pilots from aircraft operators to mission battle commanders — who can entrust dynamic air combat tasks to unmanned, semi-autonomous airborne assets from the cockpit — the AI must first prove it can handle the basics.
Basically, DARPA doesn’t want robot dogfighters in order to win dogfights. After all, dogfighting is relatively rare now, and losing one or two robots in a dogfight wouldn’t matter much because they’re cheap to replace anyway. But DARPA knows that pilots trust good dogfighters, so an AI that hopes to be accepted by them must be good at dogfighting.
Once they’re in frontline units, the robots are more likely to act as missile carriers and sensor platforms than true dogfighters. Their mission will be to hunt down threats on the ground and in the sky and, at a command from the human, destroy them. It’s likely that the destruction will be conducted from beyond visual range and with little threat to the robot or the human pilot that it’s protecting.
This may sound like far-future stuff, but DARPA is likely to find solid proposals fast. The agency is soliciting proposals through May 17, and University of Cincinnati students already created an AI named ALPHA in 2016 that repeatedly defeated a retired Air Force colonel in simulated dogfights. If that AI and similar tech can be properly adapted to work in current or future fighter aircraft, then we’ll be off to the races.