{"id":12940,"date":"2023-08-17T11:43:05","date_gmt":"2023-08-17T14:43:05","guid":{"rendered":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=12940"},"modified":"2023-08-17T11:43:05","modified_gmt":"2023-08-17T14:43:05","slug":"dentro-de-la-desordenada-etica-de-hacer-la-guerra-con-las-maquinas","status":"publish","type":"post","link":"https:\/\/www.fie.undef.edu.ar\/ceptm\/?p=12940","title":{"rendered":"Inside the messy ethics of making war with machines"},"content":{"rendered":"<p>AI is working its way into battlefield decision-making. Who is to blame when something goes wrong? It should come as no surprise, then, that states and civil society have taken up the question of intelligent autonomous weapons, weapons that can select and fire upon targets without human intervention, as a matter of serious concern.<\/p>\n<hr \/>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_0\">\n<p>In a near-future war\u2014one that might begin tomorrow, for all we know\u2014a soldier takes up a shooting position on an empty rooftop. His unit has been fighting through the city block by block. It feels as if enemies could be lying in silent wait behind every corner, ready to rain fire upon their marks the moment they have a shot.<\/p>\n<p>Through his gunsight, the soldier scans the windows of a nearby building. He notices fresh laundry hanging from the balconies. Word comes in over the radio that his team is about to move across an open patch of ground below. As they head out, a red bounding box appears in the top left corner of the gunsight. 
The device\u2019s computer vision system has flagged a potential target\u2014a silhouetted figure in a window is drawing up, it seems, to take a shot.<\/p>\n<\/div>\n<\/div>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_2\">\n<p>The soldier doesn\u2019t have a clear view, but in his experience the system has a superhuman capacity to pick up the faintest tell of an enemy. So he sets his crosshair upon the box and prepares to squeeze the trigger.<\/p>\n<p>In a different war, also possibly just over the horizon, a commander stands before a bank of monitors. An alert appears from a chatbot. It brings news that satellites have picked up a truck entering a certain city block that has been designated as a possible staging area for enemy rocket launches. The chatbot has already advised an artillery unit, which it calculates as having the highest estimated \u201ckill probability,\u201d to take aim at the truck and stand by.<\/p>\n<p>According to the chatbot, none of the nearby buildings is a civilian structure, though it notes that the determination has yet to be corroborated manually. A drone, which had been dispatched by the system for a closer look, arrives on scene. Its video shows the truck backing into a narrow passage between two compounds. The opportunity to take the shot is rapidly coming to a close.<\/p>\n<p>For the commander, everything now falls silent. The chaos, the uncertainty, the cacophony\u2014all reduced to the sound of a ticking clock and the sight of a single glowing button:<\/p>\n<p><strong>\u201cAPPROVE FIRE ORDER.\u201d<\/strong><\/p>\n<\/div>\n<div><\/div>\n<div class=\"gutenbergContent__content--1FgGp html_2\">To pull the trigger\u2014or, as the case may be, not to pull it. To hit the button, or to hold off. Legally\u2014and ethically\u2014the role of the soldier\u2019s decision in matters of life and death is preeminent and indispensable. 
Fundamentally, it is these decisions that define the human act of war.<\/div>\n<\/div>\n<div><\/div>\n<div>\n<p>It should be of little surprise, then, that states and civil society have taken up the question of intelligent autonomous weapons\u2014weapons that can select and fire upon targets without any human input\u2014as a matter of serious concern. In May, after close to a decade of discussions, parties to the UN\u2019s Convention on Certain Conventional Weapons agreed, among other recommendations, that militaries using them probably need to \u201climit the duration, geographical scope, and scale of the operation\u201d to comply with the laws of war. The line was nonbinding, but it was at least an acknowledgment that a human has to play a part\u2014somewhere, sometime\u2014in the immediate process leading up to a killing.<\/p>\n<p>But intelligent autonomous weapons that fully displace human decision-making have (likely) yet to see real-world use. Even the \u201cautonomous\u201d drones and ships fielded by the US and other powers are used under close human supervision. Meanwhile, intelligent systems that merely guide the hand that pulls the trigger have been gaining purchase in the warmaker\u2019s tool kit. And they\u2019ve quietly become sophisticated enough to raise novel questions\u2014ones that are trickier to answer than the well-covered wrangles over killer robots and, with each passing day, more urgent: What does it mean when a decision is only part human and part machine? And when, if ever, is it ethical for that decision to be a decision to kill?<\/p>\n<p>For a long time, the idea of supporting a human decision by computerized means wasn\u2019t such a controversial prospect. Retired Air Force lieutenant general Jack Shanahan says the radar on the F4 Phantom fighter jet he flew in the 1980s was a decision aid of sorts. It alerted him to the presence of other aircraft, he told me, so that he could figure out what to do about them. 
But to say that the crew and the radar were coequal accomplices would be a stretch.<\/p>\n<p>That has all begun to change. \u201cWhat we\u2019re seeing now, at least in the way that I see this, is a transition to a world [in] which you need to have humans and machines \u2026 operating in some sort of team,\u201d says Shanahan.<\/p>\n<p>The rise of machine learning, in particular, has set off a paradigm shift in how militaries use computers to help shape the crucial decisions of warfare\u2014up to, and including, the ultimate decision. Shanahan was the first director of Project Maven, a Pentagon program that developed target recognition algorithms for video footage from drones. The project, which kicked off a new era of American military AI, was launched in 2017 after a study concluded that \u201cdeep learning algorithms can perform at near-human levels.\u201d (It also sparked controversy\u2014in 2018, more than 3,000 Google employees signed a letter of protest against the company\u2019s involvement in the project.)<\/p>\n<p>With machine-learning-based decision tools, \u201cyou have more apparent competency, more breadth\u201d than earlier tools afforded, says Matt Turek, deputy director of the Information Innovation Office at the Defense Advanced Research Projects Agency. 
\u201cAnd perhaps a tendency, as a result, to turn over more decision-making to them.\u201d<\/p>\n<figure id=\"attachment_12942\" aria-describedby=\"caption-attachment-12942\" style=\"width: 771px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" class=\"size-large wp-image-12942\" src=\"https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-2-771x1024.webp\" alt=\"\" width=\"771\" height=\"1024\" srcset=\"https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-2-771x1024.webp 771w, https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-2-226x300.webp 226w, https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-2-768x1020.webp 768w, https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-2-1157x1536.webp 1157w, https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-2.webp 1506w\" sizes=\"(max-width: 771px) 100vw, 771px\" \/><figcaption id=\"caption-attachment-12942\" class=\"wp-caption-text\">YOSHI SODEOKA<\/figcaption><\/figure>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_6\">\n<p>Another gunsight, built by the company Smartshooter, is advertised as having similar capabilities. According to the company\u2019s website, it can also be packaged into a remote-controlled machine gun like the one that Israeli agents used to assassinate the Iranian nuclear scientist Mohsen Fakhrizadeh in 2020.<\/p>\n<p>Decision support tools that sit at a greater remove from the battlefield can be just as decisive. The Pentagon appears to have used AI in the sequence of intelligence analyses and decisions leading up to a potential strike, a process known as a kill chain\u2014though it has been cagey on the details. 
In response to questions from\u00a0MIT Technology Review,\u00a0Laura McAndrews, an Air Force spokesperson, wrote that\u00a0the service \u201cis utilizing a human-machine teaming approach.\u201d<\/p>\n<blockquote class=\"wp-block-quote\">\n<p class=\"has-medium-font-size\"><strong>The range of judgment calls that go into military decision-making is vast. And it doesn\u2019t always take artificial super-intelligence to dispense with them by automated means.<\/strong><\/p>\n<\/blockquote>\n<p>Other countries are more openly experimenting with such automation. Shortly after the Israel-Palestine conflict in 2021, the Israel Defense Forces said it had used what it described as AI tools to alert troops of imminent attacks and to propose targets for operations.<\/p>\n<\/div>\n<\/div>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_8\">\n<p>The Ukrainian army uses a program, GIS Arta, that pairs each known Russian target on the battlefield with the artillery unit that is, according to the algorithm, best placed to shoot at it. A report by The Times, a British newspaper, likened it to Uber\u2019s algorithm for pairing drivers and riders, noting that it significantly reduces the time between the detection of a target and the moment that target finds itself under a barrage of firepower. Before the Ukrainians had GIS Arta, that process took 20 minutes. Now it reportedly takes one.<\/p>\n<p>Russia claims to have its own command-and-control system with what it calls artificial intelligence, but it has shared few technical details. Gregory Allen, the director of the Wadhwani Center for AI and Advanced Technologies and one of the architects of the Pentagon\u2019s current AI policies, told me it\u2019s important to take some of these claims with a pinch of salt. 
He says some of Russia\u2019s supposed military AI is \u201cstuff that everyone has been doing for decades,\u201d and he calls GIS Arta \u201cjust traditional software.\u201d<\/p>\n<p>The range of judgment calls that go into military decision-making, however, is vast. And it doesn\u2019t always take artificial super-intelligence to dispense with them by automated means. There are tools for predicting enemy troop movements, tools for figuring out how to take out a given target, and tools to estimate how much collateral harm is likely to befall any nearby civilians.<\/p>\n<p>None of these contrivances could be called a killer robot. But the technology is not without its perils. Like any complex computer, an AI-based tool might glitch in unusual and unpredictable ways; it\u2019s not clear that the human involved will always be able to know when the answers on the screen are right or wrong. In their relentless efficiency, these tools may also not leave enough time and space for humans to determine if what they\u2019re doing is legal. In some areas, they could perform at such superhuman levels that something ineffable about the act of war could be lost entirely.<\/p>\n<p>Eventually militaries plan to use machine intelligence to stitch many of these individual instruments into a single automated network that links every weapon, commander, and soldier to every other. Not a kill chain, but\u2014as the Pentagon has begun to call it\u2014a kill web.<\/p>\n<div class=\"wp-block-group is-layout-constrained\">\n<div class=\"wp-block-group__inner-container\">\n<p>In these webs, it\u2019s not clear whether the human\u2019s decision is, in fact, very much of a decision at all. Rafael, an Israeli defense giant, has already sold one such product, Fire Weaver, to the IDF (it has also demonstrated it to the US Department of Defense and the German military). 
According to company materials, Fire Weaver finds enemy positions, notifies the unit that it calculates as being best placed to fire on them, and even sets a crosshair on the target directly in that unit\u2019s weapon sights. The human\u2019s role, according to one video of the software, is to choose between two buttons: \u201cApprove\u201d and \u201cAbort.\u201d<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Let\u2019s say that the silhouette in the window was not a soldier, but a child. Imagine that the truck was not delivering warheads to the enemy, but water pails to a home.<\/p>\n<\/div>\n<\/div>\n<p>Of the DoD\u2019s five \u201c<a href=\"https:\/\/www.defense.gov\/News\/Releases\/Release\/Article\/2091996\/dod-adopts-ethical-principles-for-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener\">ethical principles for artificial intelligence<\/a>,\u201d which are phrased as qualities, the one that\u2019s always listed first is \u201cResponsible.\u201d In practice, this means that when things go wrong, someone\u2014a human, not a machine\u2014has got to hold the bag.<\/p>\n<p>Of course, the principle of responsibility long predates the onset of artificially intelligent machines. 
All the laws and mores of war would be meaningless without the fundamental common understanding that every deliberate act in the fight is always on\u00a0<em>someone.<\/em>\u00a0But with the prospect of computers taking on all manner of sophisticated new roles, the age-old precept has newfound resonance.<\/p>\n<blockquote>\n<p id=\"4781-val-contentList\" class=\"contentList__header\">Of the Department of Defense\u2019s 5 \u201cethical principles for artificial intelligence,\u201d which are phrased as qualities, the one that\u2019s always listed first is \u201cResponsible.\u201d<\/p>\n<\/blockquote>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_10\">\n<p>\u201cNow for me, and for most people I ever knew in uniform, this was core to who we were as commanders: that somebody ultimately will be held responsible,\u201d says Shanahan, who after Maven became the inaugural director of the Pentagon\u2019s Joint Artificial Intelligence Center and oversaw the development of the AI ethical principles.<\/p>\n<\/div>\n<\/div>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_12\">\n<p>This is why a human hand must squeeze the trigger, why a human hand must click \u201cApprove.\u201d If a computer sets its sights upon the wrong target, and the soldier squeezes the trigger anyway, that\u2019s on the soldier. \u201cIf a human does something that leads to an accident with the machine\u2014say, dropping a weapon where it shouldn\u2019t have\u2014that\u2019s still a human\u2019s decision that was made,\u201d Shanahan says.<\/p>\n<p>But accidents happen. And this is where things get tricky. Modern militaries have spent hundreds of years figuring out how to differentiate the unavoidable, blameless tragedies of warfare from acts of malign intent, misdirected fury, or gross negligence. Even now, this remains a difficult task. 
Outsourcing a part of human agency and judgment to algorithms built, in many cases, around the mathematical principle of optimization will challenge all this law and doctrine in a fundamentally new way, says Courtney Bowman, global director of privacy and civil liberties engineering at Palantir, a US-headquartered firm that builds data management software for militaries, governments, and large companies.<\/p>\n<p>\u201cIt\u2019s a rupture. It\u2019s disruptive,\u201d Bowman says. \u201cIt requires a new ethical construct to be able to make sound decisions.\u201d<\/p>\n<p>This year, in a move that was inevitable in the age of ChatGPT, Palantir announced that it is developing software called the Artificial Intelligence Platform, which allows for the integration of large language models into the company\u2019s military products. In a demo of AIP posted to YouTube this spring, the platform alerts the user to a potentially threatening enemy movement. It then suggests that a drone be sent for a closer look, proposes three possible plans to intercept the offending force, and maps out an optimal route for the selected attack team to reach them.<\/p>\n<p>And yet even with a machine capable of such apparent cleverness, militaries won\u2019t want the user to blindly trust its every suggestion. If the human presses only one button in a kill chain, it probably should not be the \u201cI believe\u201d button, as a concerned but anonymous Army operative once put it in a DoD war game in 2019.<\/p>\n<p>In a program called Urban Reconnaissance through Supervised Autonomy (URSA), DARPA built a system that enabled robots and drones to act as forward observers for platoons in urban operations. 
After input from the project\u2019s advisory group on ethical and legal issues, it was decided that the software would only ever designate people as \u201cpersons of interest.\u201d Even though the purpose of the technology was to help root out ambushes, it would never go so far as to label anyone as a \u201cthreat.\u201d<\/p>\n<p>This, it was hoped, would stop a soldier from jumping to the wrong conclusion. It also had a legal rationale, according to Brian Williams, an adjunct research staff member at the Institute for Defense Analyses who led the advisory group. No court had positively asserted that a machine could legally designate a person a threat, he says. (Then again, he adds, no court had specifically found that it would be illegal, either, and he acknowledges that not all military operators would necessarily share his group\u2019s cautious reading of the law.) According to Williams, DARPA initially wanted URSA to be able to autonomously discern a person\u2019s intent; this feature too was scrapped at the group\u2019s urging.<\/p>\n<p>Bowman says Palantir\u2019s approach is to work \u201cengineered inefficiencies\u201d into \u201cpoints in the decision-making process where you actually do want to slow things down.\u201d For example, a computer\u2019s output that points to an enemy troop movement, he says, might require a user to seek out a second corroborating source of intelligence before proceeding with an action (in the video, the Artificial Intelligence Platform does not appear to do this).<\/p>\n<blockquote>\n<p class=\"has-medium-font-size\"><strong>\u201cIf people of interest are identified on a screen as red dots, that\u2019s going to have a different subconscious implication than if people of interest are identified on a screen as little happy faces.\u201d <\/strong><cite>Rebecca Crootof, law professor at the University of Richmond<\/cite><\/p>\n<\/blockquote>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div>\n<div>\n<div 
class=\"gutenbergContent__content--1FgGp html_14\">\n<p>In the case of AIP, Bowman says the idea is to present the information in such a way \u201cthat the viewer understands, the analyst understands, this is only a suggestion.\u201d In practice, protecting human judgment from the sway of a beguilingly smart machine could come down to small details of graphic design. \u201cIf people of interest are identified on a screen as red dots, that\u2019s going to have a different subconscious implication than if people of interest are identified on a screen as little happy faces,\u201d says Rebecca Crootof, a law professor at the University of Richmond, who has written extensively about the challenges of accountability in human-in-the-loop autonomous weapons.<\/p>\n<p>In some settings, however, soldiers might only want an \u201cI believe\u201d button. Originally, DARPA envisioned URSA as a wrist-worn device for soldiers on the front lines. \u201cIn the very first working group meeting, we said that\u2019s not advisable,\u201d Williams told me. The kind of engineered inefficiency necessary for responsible use just wouldn\u2019t be practicable for users who have bullets whizzing by their ears. Instead, they built a computer system that sits with a dedicated operator, far behind the action.<\/p>\n<p>But some decision support systems are definitely designed for the kind of split-second decision-making that happens right in the thick of it. The US Army has said that it has managed, in live tests, to shorten its own 20-minute targeting cycle to 20 seconds. Nor does the market seem to have embraced the spirit of restraint. 
In demo videos posted online, the bounding boxes for the computerized gunsights of both Elbit and Smartshooter are blood red.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Other times, the computer will be right and the human will be wrong.<\/p>\n<p>If the soldier on the rooftop had second-guessed the gunsight, and it turned out that the silhouette was in fact an enemy sniper, his teammates could have paid a heavy price for his split second of hesitation.<\/p>\n<p>This is a different source of trouble, much less discussed but no less likely in real-world combat. And it puts the human in something of a pickle. Soldiers will be told to treat their digital assistants with enough mistrust to safeguard the sanctity of their judgment. But with machines that are often right, this same reluctance to defer to the computer can itself become a point of avertable failure.<\/p>\n<p>Aviation history has no shortage of cases where a human pilot\u2019s refusal to heed the machine led to catastrophe. These (usually perished) souls have not been looked upon kindly by investigators seeking to explain the tragedy. Carol J. Smith, a senior research scientist at Carnegie Mellon University\u2019s Software Engineering Institute who helped craft responsible AI guidelines for the DoD\u2019s Defense Innovation Unit, doesn\u2019t see an issue: \u201cIf the person in that moment feels that the decision is wrong, they\u2019re making it their call, and they\u2019re going to have to face the consequences.\u201d<\/p>\n<p>For others, this is a wicked ethical conundrum. The scholar M.C. 
Elish has suggested that a human who is placed in this kind of impossible loop could end up serving as what she calls a \u201cmoral crumple zone.\u201d In the event of an accident\u2014regardless of whether the human was wrong, the computer was wrong, or they were wrong together\u2014the person who made the \u201cdecision\u201d will absorb the blame and protect everyone else along the chain of command from the full impact of accountability.<\/p>\n<\/div>\n<\/div>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_16\">\n<p>In an essay, Smith wrote that the \u201clowest-paid person\u201d should not be \u201csaddled with this responsibility,\u201d and neither should \u201cthe highest-paid person.\u201d Instead, she told me, the responsibility should be spread among everyone involved, and the introduction of AI should not change anything about that responsibility.<\/p>\n<p>In practice, this is harder than it sounds. Crootof points out that even today, \u201cthere\u2019s not a whole lot of responsibility for accidents in war.\u201d As AI tools become larger and more complex, and as kill chains become shorter and more web-like, finding the right people to blame is going to become an even more labyrinthine task.<\/p>\n<p>Those who write these tools, and the companies they work for, aren\u2019t likely to take the fall. Building AI software is a lengthy, iterative process, often drawing from open-source code, which stands at a distant remove from the actual material facts of metal piercing flesh. And barring any significant changes to US law, defense contractors are generally protected from liability anyway, says Crootof.<\/p>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_18\">\n<p>Any bid for accountability at the upper rungs of command, meanwhile, would likely find itself stymied by the heavy veil of government classification that tends to cloak most AI decision support tools and the manner in which they are used. 
The US Air Force has not been forthcoming about whether its AI has even seen real-world use.\u00a0Shanahan says Maven\u2019s AI models were deployed for intelligence analysis soon after the project launched, and in 2021 the secretary of the Air Force said that \u201cAI algorithms\u201d had recently been applied \u201cfor the first time to a live operational kill chain,\u201d with an Air Force spokesperson at the time adding that these tools were available in intelligence centers across the globe \u201cwhenever needed.\u201d But Laura McAndrews, the Air Force spokesperson, said that in fact these algorithms \u201cwere not applied in a live, operational kill chain\u201d and declined to detail any other algorithms that may, or may not, have been used since.<\/p>\n<p>The real story might remain shrouded for years. In 2018, the Pentagon issued a determination that exempts Project Maven from Freedom of Information requests. Last year, it handed the entire program to the National Geospatial-Intelligence Agency, which is responsible for processing America\u2019s vast intake of secret aerial surveillance. Responding to questions about whether the algorithms are used in kill chains, Robbin Brooks, an NGA spokesperson, told MIT Technology Review, \u201cWe can\u2019t speak to specifics of how and where Maven is used.\u201d<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>In one sense, what\u2019s new here is also old. We routinely place our safety\u2014indeed, our entire existence as a species\u2014in the hands of other people. 
Those decision-makers defer, in turn, to machines that they do not entirely comprehend.<\/p>\n<p>In an exquisite essay on automation published in 2018, at a time when operational AI-enabled decision support was still a rarity, former Navy secretary Richard Danzig pointed out that if a president \u201cdecides\u201d to order a nuclear strike, it will not be because anyone has looked out the window of the Oval Office and seen enemy missiles raining down on DC but, rather, because those missiles have been detected, tracked, and identified\u2014one hopes correctly\u2014by algorithms in the air defense network.<\/p>\n<p>As in the case of a commander who calls in an artillery strike on the advice of a chatbot, or a rifleman who pulls the trigger at the mere sight of a red bounding box, \u201cthe most that can be said is that \u2018a human being is involved,\u2019\u201d Danzig wrote.<\/p>\n<\/div>\n<\/div>\n<div>\n<div class=\"gutenbergContent__content--1FgGp html_20\">\n<p>\u201cThis is a common situation in the modern age,\u201d he wrote. \u201cHuman decisionmakers are riders traveling across obscured terrain with little or no ability to assess the powerful beasts that carry and guide them.\u201d<\/p>\n<p>There can be an alarming streak of defeatism among the people responsible for making sure that these beasts don\u2019t end up eating us. During a number of conversations I had while reporting this story, my interlocutor would land on a sobering note of acquiescence to the perpetual inevitability of death and destruction that, while tragic, cannot be pinned on any single human. 
War is messy, technologies fail in unpredictable ways, and that\u2019s just that.<\/p>\n<figure id=\"attachment_12943\" aria-describedby=\"caption-attachment-12943\" style=\"width: 771px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" class=\"size-large wp-image-12943\" src=\"https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-1-771x1024.webp\" alt=\"\" width=\"771\" height=\"1024\" srcset=\"https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-1-771x1024.webp 771w, https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-1-226x300.webp 226w, https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-1-768x1020.webp 768w, https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-1-1157x1536.webp 1157w, https:\/\/www.fie.undef.edu.ar\/ceptm\/wp-content\/uploads\/2023\/08\/Comp-1.webp 1506w\" sizes=\"(max-width: 771px) 100vw, 771px\" \/><figcaption id=\"caption-attachment-12943\" class=\"wp-caption-text\">YOSHI SODEOKA<\/figcaption><\/figure>\n<p>\u201cIn warfighting,\u201d says Bowman of Palantir, \u201c[in] the application of any technology, let alone AI, there is some degree of harm that you\u2019re trying to\u2014that you have to accept, and the game is risk reduction.\u201d<\/p>\n<p>It is possible, though not yet demonstrated, that bringing artificial intelligence to battle may mean fewer civilian casualties, as advocates often claim. But there could be a hidden cost to irrevocably conjoining human judgment and mathematical reasoning in those ultimate moments of war\u2014a cost that extends beyond a simple, utilitarian bottom line. Maybe something just cannot be right, should not be right, about choosing the time and manner in which a person dies the way you hail a ride from Uber.<\/p>\n<p>To a machine, this might be suboptimal logic. But for certain humans, that\u2019s the point. 
\u201cOne of the aspects of judgment, as a human capacity, is that it\u2019s done in an open world,\u201d says Lucy Suchman, a professor emerita of anthropology at Lancaster University, who has been writing about the quandaries of human-machine interaction for four decades.<\/p>\n<p>The parameters of life-and-death decisions\u2014knowing the meaning of the fresh laundry hanging from a window while also wanting your teammates not to die\u2014are \u201cirreducibly qualitative,\u201d she says. The chaos and the noise and the uncertainty, the weight of what is right and what is wrong in the midst of all that fury\u2014not a whit of this can be defined in algorithmic terms. In matters of life and death, there is no computationally perfect outcome. \u201cAnd that\u2019s where the moral responsibility comes from,\u201d she says. \u201cYou\u2019re making a judgment.\u201d<\/p>\n<p>The gunsight never pulls the trigger. The chatbot never pushes the button. But each time a machine takes on a new role that reduces the irreducible, we may be stepping a little closer to the moment when the act of killing is altogether more machine than human, when ethics becomes a formula and responsibility becomes little more than an abstraction. If we agree that we don\u2019t want to let the machines take us all the way there, sooner or later we will have to ask ourselves: Where is the line?<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div><strong>Source:<\/strong> <a href=\"https:\/\/www.technologyreview.com\/2023\/08\/16\/1077386\/war-machines\/\" target=\"_blank\" rel=\"noopener\"><em>https:\/\/www.technologyreview.com<\/em><\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>AI is working its way into battlefield decision-making. Who is to blame when something goes wrong? 
It should come as no&hellip; <\/p>\n","protected":false},"author":1,"featured_media":12941,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[18,2,23],"tags":[],"_links":{"self":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/12940"}],"collection":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=12940"}],"version-history":[{"count":1,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/12940\/revisions"}],"predecessor-version":[{"id":12944,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/posts\/12940\/revisions\/12944"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=\/wp\/v2\/media\/12941"}],"wp:attachment":[{"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=12940"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=12940"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.fie.undef.edu.ar\/ceptm\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=12940"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}