StatGuy2000 said:
1. As to the difference between LAWS and other weapon technologies (e.g. land mines) -- from what I've read (particularly from Stuart Russell, who has written extensively on this topic), the issue is whether we as humans want life-and-death decisions to be made automatically by machines, as opposed to having humans make the ultimate decisions. If we can articulate explicitly what outcomes we want, and thus explicitly have machines learn our desires, then there would be no problem.
Perhaps I should have omitted the first few words of the question: I'm aware that a LAWS is different from a land mine in that a LAWS can weigh facts and make decisions, whereas a land mine is stimulus-response only. My question is:
How is this worse?
It may *feel* unsettling to think about a weapon that makes decisions, and I suppose people can pass laws/make rules based on whatever they want, but I would hope we would pass laws based on real risk and logic, not feelings alone. Otherwise, we may pass laws that make things worse instead of better.
Plainly:
A robot that makes poor decisions about what to kill is still better than a land mine that kills anything that comes in contact with it.
2. You state that technology in warfare over the past 50 years has made war safer. But can the relative lack of conflict really be attributed primarily to improvements in technology?
You responded to something other than what I said, even after correctly paraphrasing it:
1. Safer war.
2. Less war.
Both are true, but I was referring to the first:
*Actual* wars are themselves less deadly for both sides when more advanced targeting technology is used.
The fact that wars are safer due to technology can be seen in the casualty rates of recent wars. Drilling down into specific examples, probably the biggest is the use of precision-guided bombs and cruise missiles instead of "dumb" bombs. These result in fewer bombing missions, which makes war safer for the pilots, and in more accurate strikes, which makes war safer for civilians in the war zone. No longer do you have to level an entire city block to destroy one building.
Now it may also be true that there are fewer wars in part because of technology, but that isn't what I was after because it is a secondary effect.
3. Finally, circling back to point #1, you point out a future where robots fight other robots and humans aren't put at risk. Can we really be certain that robots can understand not to put humans at risk? After all, if a robot is programmed to neutralize any threat to the nation, why not, say, kill every single human on the opposing side?
If there are no humans in the warzone, the robots cannot kill any humans. We're already doing our half: our drone pilots are often halfway around the world from the battles they are fighting, with no possibility of being killed in them. We are not far from drone vs. drone combat, and the next step would be robot vs. robot combat.
We do already have that in some sense, with LAWS-type weapons on ships attacking other autonomous weapons such as cruise missiles. There is no reason why humans couldn't be taken off the ships (unmanned ships are at least on the drawing board), and then you have robots attacking robots, with humans directing them, at least in part, from air-conditioned offices thousands of miles away.
In terms of the laws of war, indiscriminate killing of civilians is already illegal, so it seems to me that banning LAWS-type weapons is cutting off your nose to spite your face: eliminating something better because it may be misused and turned into something worse...that is already illegal.
E.g.: land mines are restricted because the potential harm is an inherent feature of the technology. LAWS would be banned because of misuse or malfunction, even though the intended use of the technology is a societal benefit.