StatGuy2000 said:
The very points you raise about the potential benefits of LAWS are addressed by Stuart Russell in the World Economic Forum article linked below.
From the article.
Imagine turning on the TV and seeing footage from a distant urban battlefield, where robots are hunting down and killing unarmed children. It might sound like science fiction – a scene from Terminator, perhaps. But unless we take action soon, lethal autonomous weapons systems – robotic weapons that can locate, select and attack human targets without human intervention – could be a reality. [emphasis added]
[sigh] I can imagine lots of things. I can imagine bubblegum trees and lollipop fairies. But just because you can imagine something doesn't make it reasonable. Movies are becoming so visually realistic that people are losing track of the line between imagination and reality.
Nowhere in the article does he discuss how robots/computers/AI are actually used in war today and how - realistically - they might be intended to be used in the future. Basing policy on fantasy instead of reality is a sure recipe for bad policy.
I can think of two relatively recent examples where LAWS could have or should have been employed to improve the outcome of an engagement:
The USS Stark was heavily damaged by an Iraqi Exocet missile. The ship was equipped with a LAWS-type system, the Phalanx CIWS, which was not enabled during the attack due to human error (it is generally off and is only turned on when needed).
The USS Vincennes shot down an Iranian airliner because of a series of "tunnel vision"-type errors that led the crew to believe it was a fighter flying an attack profile. I believe our AEGIS warships are capable of autonomous warfighting operation, and a computer certainly would not have made that series of mistakes.
Computers can be programmed to do evil, but they are largely immune to poor decision-making. I think that, in general, the expanded use of computers in decision-making would improve it further, not degrade it.
Again: people fear ceding control to computers in all sorts of situations, but these fears are not rational and are not borne out by case studies. Problems due to poor computer decision-making are relatively rare, and the closest thing is usually a poor human-computer interface causing the humans to make the bad decisions (several recent plane crashes were caused in part by this).
Wait, my imagination is tingling: maybe we should fear airplane autopilots going rogue and purposely crashing airplanes into buildings too?
The article gets somewhat better as it goes along, but not by enough (while it is common for clickbait, it is not good journalistic practice to open an article with something ridiculous and then improve: you set your first impression, and it is hard to break):
article said:
But some robotics experts, such as Ron Arkin, think that lethal autonomous weapons systems could actually reduce the number of civilian wartime casualties. The argument is based on an implicit ceteris paribus assumption that, after the advent of autonomous weapons, the specific killing opportunities – numbers, times, locations, places, circumstances, victims – will be exactly the same as would have occurred with human soldiers.
This is rather like assuming cruise missiles will only be used in exactly those settings where spears would have been used in the past.
It is indeed the logic, and it is imperfect but real logic with real historical precedent, as I pointed out above: smart weapons such as cruise missiles and laser/GPS-guided bombs replaced saturation bombing and, as a result, reduced military and civilian casualties in those situations. That's a fact. We don't have to imagine it. It actually happened.
article said:
Moreover, it seems unlikely that military robots will always have their “humanitarian setting” at 100%. One cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better...
This is more of the bias that causes him to miss the point. He sees war as simply evil and as a result is incapable of getting inside the minds of actual soldiers. Proponents of LAWS are not claiming that good nations do evil. What is claimed - what actually happens - is that robots (computers) make better and more accurate decisions than humans, and as a result the good guys will fight cleaner/safer wars, as is already happening.
article said:
...while at the same time claiming that rogue nations, dictators and terrorist groups are so good at following the rules of war that they will never use robots in ways that violate these rules.
And nobody claims that either. But this is just the "all weapons are bad so we should ban all weapons" logic that doesn't work anywhere. It doesn't work any better for nukes than it does for knives. You cannot stop technology. You cannot stop weapons from being developed and used except by making better weapons to win the wars against the bad guys, in order to convince them they shouldn't use their weapons. As @Monsterboy correctly pointed out, the technological/capabilities disparity is likely part of the reason the number and scope of wars is decreasing. In that sense, robot warfare would simply increase that disparity, further reducing the number and severity of wars.
Major world powers stopped using chemical weapons because we recognize they are bad. Syria used them recently because it correctly predicted it wouldn't be punished for using them. And chemical weapons are low-tech and relatively cheap. Worrying about rogue nations using advanced killer robots when they are still using 1910s technology makes little sense.
So again: unlike chemical weapons or land mines, which are incapable of identifying and sparing civilians, LAWS-type systems have no inherent feature that makes them dangerous to civilians. Their danger is the same as any weapon's danger: that a rogue will use it to harm civilians. They are therefore already covered adequately under existing law, and thus need no special laws to cover them.