
Autonomous Military Robots: A Step Towards Reducing Casualties or Mass Destruction

Angelita

Please note: this article was written more than a year ago, so its content may be outdated.

In this modern era, technology is being used in many fields, including the military. This can be seen in the continuous development of military robots: autonomous or remote-controlled mobile robots designed for military applications ranging from transport and search & rescue to attack.

Robo-dog “Vision 60” at the AUSA convention

One of the latest developments in military robotics was presented at the annual convention of the Association of the U.S. Army (AUSA), held from October 11 to 13, 2021, where a robotics company showed off its sniper-rifle-equipped robo-dog, known as the “Vision 60”. The robot carries a built-in sniper rifle capable of engaging targets from three-quarters of a mile away. It can be operated remotely or autonomously, but it can only fire with permission from a human, because current U.S. military policy on autonomous systems prohibits automatic target engagement. This is great news for people who oppose the use of autonomous military robots in war zones.

Pros and Cons

One of the biggest controversies about autonomous military robots is whether we should use them in war zones. Using autonomous robots does have benefits, such as reducing the financial cost of a war, lowering the number of casualties, and increasing efficiency. However, some people oppose their use because autonomous military robots can violate the Rules of War (jus in bello). First, autonomous robots may not be able to properly distinguish soldiers from civilians, which can cost innocent lives and violates the principles of discrimination and noncombatant immunity, which state that soldiers may attack only other soldiers, not civilians. In addition, autonomous robots might use excessive force to achieve their goal, causing more damage than necessary and thus violating the principle of proportionality.

Who is responsible for a robot’s crimes?

Aside from violating the Rules of War, there is another major problem with the use of autonomous robots: who should be held responsible for the mistakes or crimes they commit? Here I agree with Robert Sparrow, who argues that no one can take responsibility for an autonomous robot’s crimes. In my opinion, someone should be responsible for a crime and should be punished for it in order to deter others from committing the same crime. In the case of a crime committed by an autonomous robot, there are several candidates who might be held responsible: the programmers, the commander, or the robot itself. However, I believe that none of them should be.

First, when an autonomous robot commits a crime, it might not be the result of a programming mistake; it might be caused by limitations of the system or by unpredicted behavior of the robot, both of which are outside the programmers’ control. Second, because the robot’s behavior is autonomous and unpredictable, the commander should not be responsible either, since the robot’s actions might not have been ordered by the commander. Lastly, Sparrow argues that an autonomous robot cannot be held responsible for its own crime because a robot cannot suffer from punishment. He believes that the ability to suffer is necessary for someone to regret what they did and thus decide not to repeat the same mistake. Because robots cannot suffer or feel regret, there is no guarantee that they will not commit the same crime again, so punishing them is meaningless.

Lokhorst and Van den Hoven object to this, arguing that the ability to suffer is not necessary for responsibility. They believe that the main purpose of punishment is the prevention of further crimes, and that this purpose can be achieved without the robot having to suffer. However, holding the robot responsible or punishing it in any way is not enough to prevent further crimes. For example, even if a robot that committed a crime were destroyed or reprogrammed to prevent it from making the same mistake again, this would not affect the humans who used it, so some people could exploit this fact and build another robot with the same or a similar program, enabling it to commit crimes on purpose. Even if the robot’s owner or creator knows the robot might commit a crime, for the reasons above they may not care, because they will not be held responsible for it. Given this loophole, there is no guarantee that punishing robots would prevent further crimes.

As the use of technology grows and debates about these issues arise, governments have also taken steps to address them, such as the U.S. military policy on autonomous systems. I believe this shows that even as technology takes over a major part of human life, it will never change the fact that our humanity cannot be replaced by technology, and that there are some aspects of our lives that we, as humans, must take responsibility for.

