The revolution that wasn’t: How AI drones have failed in Ukraine (so far)

WASHINGTON — “The war in Ukraine is spurring a revolution in drone warfare using AI,” blared a Washington Post headline last July. Then, in the fall, a flurry of reports said that both Russia and Ukraine had deployed small drones that used artificial intelligence to identify and home in on targets. Having on-board AI meant that the drones, versions of the Russian Lancet and the Ukrainian Saker Scout, wouldn’t need a human operator to guide them all the way to impact.

If this AI had proved itself in battle, it really would have been a revolution. Electronic warfare systems designed to disrupt the operator’s control link — or worse, trace the transmission to its source for a precision strike — would have been largely useless against self-guided drones. Skilled and scarce drone jockeys could have been replaced by thousands of conscripts quickly trained to point-and-click on potential targets. And instead of every drone requiring an operator staring at its video feed full-time, a single human could have overseen a swarm of lethal machines.

All told, military AI would have taken a technically impressive and slightly terrifying step towards independence from human control, like Marvel’s Ultron singing Pinocchio’s “I’ve got no strings on me.” Instead, after more than four months of frontline field-testing, neither side’s AI-augmented drones seem to have made a measurable impact.

In early February, a detailed report from the Center for a New American Security dismissed the AI drones in a few lines. “The Lancet-3 was advertised as having autonomous target identification and engagement, although these claims are unverified,” wrote CNAS’s defense program director, Stacie Pettyjohn. “Both parties claim to be using artificial intelligence to improve the drone’s ability to hit its target, but likely its use is limited.”

Then, on February 14, an independent analysis suggested that the Russians, at least, had turned their Lancet’s AI-guidance feature off. Videos of Lancet operators’ screens, posted online since the fall, often included a box around the target, one capable of moving as the target moved, and a notification saying “target locked,” freelance journalist David Hambling posted on Forbes. Those features would require some form of algorithmic object-recognition, although it’s impossible to tell from video alone whether it was merely highlighting the target for the human operator or actively guiding the drone to hit it. 

However, “none of the Lancet videos from the last two weeks or so seem to have the ‘Target Locked’ or the accompanying bounding box,” Hambling continued. “The obvious conclusion is that automated target recognition software was rolled out prematurely and there has been a product recall.”
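As a rough illustration of what such a feature implies, the sketch below shows, in Python, one way an onboard loop could draw a bounding box around the highest-confidence detection and declare a “target lock” once that detection persists across several frames. The detector itself is left as a placeholder, and nothing here reflects the Lancet’s or Saker Scout’s actual software; it is only a minimal example of the kind of logic the videos suggest.

```python
# Minimal illustrative sketch (not the Lancet's actual code) of an onboard
# object-recognition loop that boxes the best detection and declares a
# "target lock" once it persists for several consecutive frames.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Detection:
    box: Tuple[int, int, int, int]   # x, y, width, height in pixels
    score: float                     # detector confidence, 0..1

def detect_objects(frame) -> List[Detection]:
    """Placeholder for a real detector (e.g. a small onboard CNN);
    returns candidate targets found in the current video frame."""
    raise NotImplementedError

class TargetLock:
    """Declares a lock only after a high-confidence detection persists
    across enough frames, then keeps following its bounding box."""
    def __init__(self, min_score: float = 0.6, frames_to_lock: int = 5):
        self.min_score = min_score
        self.frames_to_lock = frames_to_lock
        self.streak = 0
        self.locked_box: Optional[Tuple[int, int, int, int]] = None

    def update(self, frame) -> Optional[Tuple[int, int, int, int]]:
        detections = [d for d in detect_objects(frame) if d.score >= self.min_score]
        if not detections:
            self.streak = 0
            self.locked_box = None      # lock is dropped if the target vanishes
            return None
        best = max(detections, key=lambda d: d.score)
        self.streak += 1
        if self.streak >= self.frames_to_lock:
            self.locked_box = best.box  # overlay would now read "target locked"
        return best.box                 # box follows the target every frame
```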

Don’t Believe The (AI) Hype

It’s impossible to confirm Hambling’s analysis without access to Russian military documents or the drone’s software code. But Pettyjohn and two other drone experts — both fluent Russian-speakers who are normally enthusiastic about the technology — agreed that Hambling’s interpretation was not only plausible but probable.

“This is a fairly detailed analysis, looks about right to me,” said Alexander Kott, former chief scientist at the Army Research Laboratory, in an email calling Breaking Defense’s attention to the Forbes piece. “It is difficult to know for sure…I have not seen an independent confirmation, and I don’t think one can even exist.”

“I think it’s accurate,” said Sam Bendett of CNA, a think tank with close ties to the Pentagon, in an email exchange with Breaking Defense. (Bendett also spoke to Hambling for his story).

“This technology needs a lot of testing and evaluation, this technology needs a lot of iteration, [and] many times the technology isn’t ready,” he had told Breaking Defense before the Forbes story was published. “I think it’s a slow roll because both sides want to get it right. Once they get it right, they’re going to scale it up.

“This is in fact technologically possible,” Bendett said. “Whoever gains a breakthrough in drone technology and quickly scales it up gains a huge advantage.”

But that breakthrough clearly hasn’t happened here, Pettyjohn told Breaking Defense. “Russian industry often makes pretty outlandish claims about its weapons’ capabilities, and in practice we find that their performance is much less than promised … This has been most prominent with autonomous systems, as Sam Bendett and Jeff Edmonds found in their CNA report on uncrewed systems in Ukraine.”

The Ukrainians don’t seem to have done better, despite similar media hype.

“There are lots of really exciting reports out there about the Saker Scout and the autonomous target recognition software that the Ukrainians have been developing,” Pettyjohn said. “If Saker Scout does what it’s supposed to …. it could go off, find a target, and decide to kill it all on its own without a human intervening.”

“Whether it can actually do this… it’s hard to sift through,” she continued. “I am definitely on the skeptical side.”

The Real AI Revolution – Date TBD

So what would it really take for Russia and Ukraine — or for that matter, the US or China — to replace a human operator with AI? After all, the brain is a biological neural network, honed over millions of years of evolution to take in a dazzling array of sensory data (visual, audio, smell, vibration), update an internal 3D model of the external world, then formulate and execute complex plans of action in near-real time.

For AI to match that capability, it needs what theorists of combat call “situational awareness,” Kott told Breaking Defense. “[Like] any soldier… they need to see what’s happening around them.” That requires not just object recognition — which AI finds hard enough — but the ability to observe an object in motion and deduce what action it is in the middle of performing, Kott argues.

That’s a task that humans do from infancy. Think of a baby saying “mmmm” when put in their high chair, even before any food is visible: That’s actually a complex process of observing, turning those sensory inputs into intelligible data about the world, matching that new data with old patterns in memory, and making inferences about the future. One of the most famous maxims in AI, Moravec’s Paradox, is that tasks humans take for granted can be confoundingly difficult for a machine.
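To make the machine’s side of that concrete, here is a hedged sketch of one crude approach: infer what an observed object is doing from its recent track rather than from a single frame. The speed threshold and activity labels are invented for illustration and do not describe any fielded system.

```python
# Illustrative only: going beyond "what is this object?" to a rough
# "what is it doing?" by looking at its recent track of positions.
import math
from typing import List, Tuple

def infer_activity(track: List[Tuple[float, float, float]],
                   own_pos: Tuple[float, float]) -> str:
    """track: (t_seconds, x_m, y_m) observations of one object, oldest first.
    Returns a crude activity label based on speed and heading."""
    if len(track) < 2:
        return "unknown"
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    dt = max(t1 - t0, 1e-6)
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    speed = math.hypot(vx, vy)                       # meters per second
    if speed < 0.5:
        return "stationary"
    # Dot product of velocity with the vector toward our own position
    # tells us whether the object is closing on us.
    to_us = (own_pos[0] - x1, own_pos[1] - y1)
    closing = (vx * to_us[0] + vy * to_us[1]) > 0
    return "approaching" if closing else "moving away"
```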

Even humans struggle to understand what’s going on when under stress, in danger, and facing deliberate deception. Ukrainian decoys — fake HIMARS rocket launchers, anti-aircraft radars, and so on — routinely trick Russian drone operators and artillery officers into wasting ordnance on fakes while leaving the well-camouflaged real weapons alone, and machine-vision algorithms have proven even easier to deceive. Combatants must also keep watch for dangers, from obviously visible ones the human brain evolved to recognize — someone charging at you, screaming — to high-tech threats unaided human senses can’t perceive, like electronic warfare or targeting lasers locking on. A properly equipped machine can detect radio waves and laser beams, but its AI still needs to make sense of that incoming data, assess which threats are most dangerous, and decide how to defend itself, in seconds.
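A toy sketch of that last step, ranking sensed threats and picking a canned response, might look like the following. The threat types, severity weights, and responses are all assumptions made up for illustration.

```python
# Deliberately simplified sketch of "rank the threats, decide in seconds."
# Every threat type, weight, and response below is invented for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Threat:
    kind: str                  # e.g. "laser_lock", "jamming", "radar_paint"
    seconds_to_effect: float   # estimated time before the threat matters

SEVERITY = {"laser_lock": 1.0, "radar_paint": 0.7, "jamming": 0.4}
RESPONSE = {"laser_lock": "evasive maneuver",
            "radar_paint": "change heading and altitude",
            "jamming": "switch to onboard guidance"}

def choose_response(threats: List[Threat]) -> str:
    """Rank threats by severity divided by time remaining, then
    return the canned response for the most urgent one."""
    if not threats:
        return "continue mission"
    def urgency(t: Threat) -> float:
        return SEVERITY.get(t.kind, 0.1) / max(t.seconds_to_effect, 0.1)
    worst = max(threats, key=urgency)
    return RESPONSE.get(worst.kind, "continue mission")

print(choose_response([Threat("jamming", 30.0), Threat("laser_lock", 3.0)]))
# -> "evasive maneuver": the imminent laser lock outranks the slow-acting jamming
```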

But the difficulty doesn’t stop there: Combatants must fight together as a team, the way humans have since the first Stone Age tribe ambushed another. Compared to rifle marksmanship and other individual skills, collective “battle drills,” team-building, and protocols for clear communication under fire consume a tremendous amount of training time. So great-power projects for military AI — both America’s Joint All-Domain Command & Control and China’s “informatized warfare” — focus not just on firepower but on coordination, using algorithms to share battle data directly from one robotic system to another without need for a human intermediary.

So the next step towards effective warfighting AIs, Pettyjohn said, “is really networking it together and thinking about how they’re sharing that information [and] who’s actually authorized to shoot. Is it the drone?”
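In code terms, the basic building block would be something like a shared target track that any node can publish and consume, with the engagement decision flagged separately. The sketch below is illustrative only; the message fields are invented and do not correspond to JADC2 or any real protocol.

```python
# Toy illustration of machine-to-machine battle data sharing: a target track
# serialized as JSON, with the shoot decision left to a human by default.
# Field names are invented, not any real system's message format.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TargetTrack:
    track_id: str
    lat: float
    lon: float
    target_class: str                     # e.g. "armored_vehicle"
    confidence: float                     # detector confidence, 0..1
    observed_at: float                    # unix timestamp of the observation
    engagement_authorized: bool = False   # stays False until a human approves

def publish(track: TargetTrack) -> str:
    """Serialize the track so another drone or ground station can ingest it."""
    return json.dumps(asdict(track))

msg = publish(TargetTrack("trk-042", 48.51, 35.87, "armored_vehicle",
                          0.83, time.time()))
shared = TargetTrack(**json.loads(msg))   # receiving node reconstructs the track
```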

Such complex digital decision-making requires sophisticated software, which needs to run on high-speed chips, which in turn need power, cooling, protection from vibration and electronic interference, and more. None of that is easy for engineers to cram into the kind of small drones being used widely by both sides in Ukraine. Even the upgraded Lancet-3 fits less than seven pounds (3 kg) of explosive warhead, leaving little room for a big computer brain.

The requisite engineering — and the cost — may prove too much for Russia or, especially, Ukraine, many of whose drones are hand-built from mail-order parts. “Given the very low cost of current FPV [First-Person View] drones, and the fact that many of them are assembled by volunteers literally on their kitchen table… the cost-benefit tradeoffs likely remain uncertain,” Kott told Breaking Defense.

“The reason you’re seeing so many drones [is] that they’re cheap,” Pettyjohn agreed. “On both sides…they’re not investing in increased defenses against jamming… because it would make them too expensive to afford in the numbers that they’re needed. They’d rather just buy lots of them and count on some of them making it through.”

So even if Russia or Ukraine can implement on-board AI, she said, “it’s not clear to me it will scale in this conflict, because it depends a lot on the cost.”
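Her point can be made with back-of-the-envelope arithmetic. The unit costs and hit probabilities below are invented purely to show the shape of the tradeoff, not actual figures from either side.

```python
# Back-of-the-envelope arithmetic with made-up numbers: a pricier AI drone
# must raise the hit rate enough to beat simply buying more cheap drones.
def hits_per_budget(budget: float, unit_cost: float, p_hit: float) -> float:
    """Expected targets hit for a fixed budget."""
    return (budget / unit_cost) * p_hit

BUDGET = 1_000_000  # dollars, arbitrary
cheap = hits_per_budget(BUDGET, unit_cost=500,   p_hit=0.20)   # jam-prone FPV drone
ai    = hits_per_budget(BUDGET, unit_cost=5_000, p_hit=0.60)   # hypothetical AI drone
print(f"cheap drones: {cheap:.0f} expected hits, AI drones: {ai:.0f}")
# With these invented figures the cheap swarm still wins (400 vs 120 hits),
# which is why "just buy lots of them" can beat adding onboard AI.
```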

However, that doesn’t mean AI won’t scale up in other conflicts with other combatants, especially high-tech nations with big defense budgets like the US and China. But even for those superpowers, miniaturizing AI to fit on drones is daunting: There’s good reason headline-grabbing AIs like ChatGPT run on massive server farms.

But that does not mean the problem is impossible to solve — or that it has to be solved 100 percent. AI still glitches and hallucinates, but humans make deadly errors all the time, in and out of combat. A civilian analogy is self-driving cars: They don’t need to avoid 100 percent of accidents to be an improvement over human drivers.

By definition, in any group of humans performing any given task, “fifty percent of people will be below average,” Kott noted. “If you can do better than ‘below average,’ you’ve already doubled the effectiveness of your operations.”

Even modest improvements can have major impacts when you’re waging war on a massive scale, as in Ukraine — or any future US-China conflict. “It doesn’t have to be 100 percent,” Kott said. “In many cases 20 percent is good enough, much better than nothing.”

Western demands for high performance don’t mesh with the realities of major war, he warned. “We demand complete reliability, we demand complete accuracy, [because] we are not in existential danger, like Ukraine,” Kott said. “Ukrainians don’t specify perfection. They can’t afford that.”

Source: https://breakingdefense.com