Opinions on autonomous weapons in the near future: killer robots vs. military advantages.

Will machines be able to discriminate between good and bad? Will they be able to incorporate values? The myriad of opinions on the prospect of autonomous weapons in the not-too-distant future seems to presage ethical dilemmas.

[Photo caption: The autonomous ship "Sea Hunter" is shown docked in Portland, Oregon, before its christening ceremony, April 7, 2016.]

The myriad of opinions on the prospect of autonomous weapons in the not-too-distant future seems to portend a hopeless situation: the U.S. is either doomed to defeat by an enemy who fully embraces war-fighting advantage through full autonomy, or Americans are destined to see the complete erosion of the humanity and human rights principles which have long guided U.S. forces in combat. To some extent, both sides are right. Senior U.S. leadership can neither dismiss ethical concerns nor cede technological advantage to potential adversaries. However, they should recognize that America's moral judgment on autonomous weapons is likely to change based on context. So rather than completely disavow full autonomy in favor of concepts like human-machine teaming, DoD leadership should look toward systems capable of both, just as it has done for decades, with adjustable degrees of autonomy. Such systems would enhance warfighting with a man-in-the-loop today, and provide even greater capability if needed in future major wars.
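As a concrete illustration of what "adjustable degrees of autonomy" might mean in software, here is a minimal sketch. The mode names, the WeaponController class, and the engagement logic are invented for this example and do not describe any fielded system; the point is only that one system can require a human decision under today's policy and operate with less oversight under a different setting:

```python
from enum import Enum, auto
from typing import Optional

class AutonomyMode(Enum):
    MANUAL = auto()      # a human selects and authorizes every engagement
    SUPERVISED = auto()  # the system proposes, a human must confirm (man-in-the-loop)
    FULL = auto()        # the system engages on its own authority

class WeaponController:
    """Hypothetical controller with an adjustable degree of autonomy."""

    def __init__(self, mode: AutonomyMode = AutonomyMode.SUPERVISED) -> None:
        self.mode = mode

    def request_engagement(self, track_id: str,
                           human_approval: Optional[bool] = None) -> bool:
        """Return True only if engagement is authorized under the current mode."""
        if self.mode is AutonomyMode.MANUAL:
            # The machine never initiates; a human must explicitly approve.
            return human_approval is True
        if self.mode is AutonomyMode.SUPERVISED:
            # The machine proposes the target, but human confirmation is mandatory.
            if human_approval is None:
                print(f"Holding fire: awaiting human confirmation for track {track_id}")
                return False
            return human_approval
        # FULL: the same hardware and software, under a different policy setting.
        return True

controller = WeaponController(AutonomyMode.SUPERVISED)
controller.request_engagement("T-042")                       # held, awaiting confirmation
controller.request_engagement("T-042", human_approval=True)  # authorized
```

The design choice the sketch highlights is that the degree of autonomy lives in a policy setting, not in the weapon itself, which is what lets one system serve both the man-in-the-loop posture of today and a more autonomous posture if ever required.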

Regardless of advances in computer processing, without human oversight and control, fully autonomous weapon systems will make mistakes. Perhaps technology will progress to the point where mistakes are few and far between, but they will happen. Proponents of this argument are right; when those mistakes occur in a system’s ability to discriminate between enemy combatant and innocent civilian, atrocities will result.

Of course, the sad reality is that unintended civilian casualties occur in every conflict, even when a human controls the weapon and makes the decisions. Two questions arise then. First, will lethal autonomous systems be more or less likely to make mistakes than those operated by humans? And second, are civilian casualties at the hands of a human inherently less reprehensible than at the hands of a computer?

On the first question, while artificial intelligence and machine learning may not yet be on par with human ability to discriminate targets from non-targets, it is trending ever closer. And a computer can process inputs, weighing them against decision-making criteria, exponentially faster than a human without worry of emotion or fatigue degrading judgment. So, technology may be on its way to a point where computers are able to make decisions faster and better than a human. This would make them less prone to cause inadvertent civilian casualties than human-controlled weapons.
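To make that trade-off concrete, here is a minimal sketch under invented assumptions; the classify stub, the labels, and the 0.95 threshold are hypothetical placeholders, not a real targeting model or any doctrinal value. The idea is that the machine acts at machine speed only on clear-cut cases and defers ambiguous ones to a human operator:

```python
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.95  # hypothetical policy setting, not a real doctrine value

def classify(sensor_data: dict) -> Tuple[str, float]:
    """Placeholder for a target-discrimination model: returns a label and a confidence."""
    # In reality this would be a trained model; here it is a stub for illustration.
    return sensor_data.get("label", "unknown"), sensor_data.get("confidence", 0.0)

def decide(sensor_data: dict) -> str:
    label, confidence = classify(sensor_data)
    if label == "combatant" and confidence >= CONFIDENCE_THRESHOLD:
        return "engage"          # machine-speed decision on a clear-cut case
    if label == "civilian":
        return "hold"            # never engage a non-target
    return "refer_to_human"      # ambiguity goes to the human operator

print(decide({"label": "combatant", "confidence": 0.99}))  # engage
print(decide({"label": "combatant", "confidence": 0.70}))  # refer_to_human
print(decide({"label": "civilian", "confidence": 0.99}))   # hold
```

A structure like this keeps the speed advantage on unambiguous inputs while routing exactly the cases where discrimination errors are likeliest back to human judgment.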

On the second question, the rational argument is that there is no difference between accidental civilian deaths by human or by computer. In fact, the likelihood that autonomous systems could result in fewer unintended deaths should outweigh any qualms about man vs. machine responsibility for those few that do still occur. However, humans are not purely rational. In American society, for reasons related to emotions, moral beliefs, and perhaps the need for human accountability, there is an innate difference. Some believe unintended civilian deaths at the hands of a cold, unfeeling machine are somehow worse than the same deaths at the hands of a living, breathing human, whose conscience will bear the burden. Senior defense leadership should acknowledge this. To try and arm-wave it away with a numbers-based argument, however rational, risks losing public trust in America's defense institutions.…

Source: http://www.rand.org