Anyone else a bit concerned about autonomous weapons?

  • Thread starter dipole
In summary, the video discusses how a machine gun like this could be used in terrorism, and how it would be much easier to build and use than drones or land mines. It also points out that the technology is not far from being available to the average person, and that eventually this kind of machine might roam the streets freely, shooting people.
  • #36
gleem said:
The Future of Life Institute and Stuart Russell, professor of Computer Science at Berkeley, produced a video on the possibilities of autonomous weapons, in this case microbot swarms.


Well that's just terrifying.
 
  • #37
I posted a fictitious video in post #26 depicting the use of microbot swarms for useful (?) and nefarious purposes to eliminate undesirable elements (my characterization for those who, for whatever reason, are identified for elimination). I call your attention to this DARPA website discussing its research project OFFSET (OFFensive Swarm-Enabled Tactics). As you can see, we are on our way to developing what was depicted in the video.
 
  • #38
256bits said:
Presently, though, a merger to produce a killer robot has insufficiencies, as there is no specification for a particular target, just a random selection of targets within range, which may or may not have the desired effect. Give it time.

The military can put a robot like that in the middle of an area that you know is swarming with enemy troops and contains no hostages who could be killed.
 
  • #39
dipole said:
I wonder how soon it will be before one of these is fitted with a machine gun.
If the R&D on such a thing is complete and operating it costs less than soldiers doing the same job, then it is definitely in the future, unless countries sign some kind of treaty, like the ones dealing with chemical and biological weapons, agreeing not to combine autonomous robots and guns.
 
  • #40
russ_watters said:
Please be more reasonable. The *most* expensive military drones cost a few tens of millions, not hundreds of millions (Google tells me a Reaper costs $16 million). Serious attacks by amateurs are already happening and probably cost only a few thousand dollars:
https://globalriskinsights.com/2018/01/swarm-drone-attack-syria-uav/

To me, this is a much more serious threat.

Yes, there is a huge difference and you're on the wrong side of it! To be frank, I think you're losing sight of just how difficult a time we're having making terrestrial robots. We are *not* close to Terminator style robots. We *already have* low-cost, autonomous drone attacks.

Ahem. A working robot that took 15 seconds, with some struggle, to open a door is far from ready to have a gun attached to it and become a killing machine. If you're running from it, you'd be out the next door before it finished opening that one!

And people are already shooting guns from drones too:



I never said this style of robotics is somehow a worse threat than drones. Yes, current drone technology already poses a big threat. Pretty much anybody could buy a drone for a few hundred dollars, rig it with some kind of explosives, and fly it kamikaze-style into a target and probably get away with it. The main limitation with small drones is, as far as I know, they can't support large payloads. They also can't easily penetrate buildings or navigate dense environments (one bump into a wall or tree and they essentially crash).

This ground-based quadruped locomotion is very robust. We've all seen the videos of Boston Dynamics trying to kick these things over, and how easily they recover. I worry about a robot with animal-level agility that can support large payloads (ammo, armour, weaponry) and can autonomously navigate environments that normally only people can navigate, but unlike people would require military-grade weapons to stop.

Yes, it has a ways to go before it can actually do things like run around a building, but given the progress over the past five years or so (just look at the progression on Boston Dynamics' YouTube page), I don't see any reason it won't be achieved in the near future.
 
  • #41
I'm more worried about the people who have this technology.
 
  • #42
dipole said:
The main limitation with small drones is, as far as I know, they can't support large payloads. They also can't easily penetrate buildings or navigate dense environments (one bump into a wall or tree and they essentially crash).

Small (less than 1 m in diameter), 12 kg payloads (how much does one need?), 30-minute flight times, speeds up to 44 mph, and collision avoidance, all right now, plus self-mapping of their environment with no need of GPS. How about tracking cell phones? Most buildings have windows, and drones can work in packs if necessary for complex operations. With possible drone threats, we may not see drone delivery like Amazon or UPS wants.

Object avoidance drone video


Maybe the technology is not yet perfect, but it is getting close, and it's only a matter of time before all these elements are put together in various mission-specific drones. They can wait in ambush until their target appears, or until the target uses their cell phone. If they know where you will be, what can stop them?

Lethal Autonomous Weapon Systems (LAWS) are considered the third weapons revolution, preceded by gunpowder and nuclear weapons.

Monsterboy said:
Unless countries sign some kind of treaty like the ones dealing with chemical and biological weapons to not to combine autonomous robots and guns.

Apparently there is a UN Convention on Certain Conventional Weapons that is looking into the possibility of drafting a treaty to outlaw LAWS. For a review of a meeting in the spring of 2016, and of some then-current capabilities and programs, both government and civilian, working on autonomous drones, see the somewhat lengthy report https://www.buzzfeed.com/sarahatopo...-killer-robots?utm_term=.qrK829mxr#.queWXvKzo, and remember this was two years ago.

Grands said:
I'm more worried about the people who have this technology.

Basically everybody, from the US to China, Russia, Israel and the UK, and in 2016 five more.
 
  • #43
Greg Bernhardt said:
I suppose only once AI surpasses human intelligence. Essentially a Terminator.
I don't think so. You need a robot which is much more agile and quicker than a human in the terrain, and very fast and accurate in its shooting. You don't need human-level general intelligence.
 
  • #44
gleem said:
Basically everybody from the US to China, Russia, to Israel and the UK and in 2016 five more.
I was speaking about the people who run the countries that have those weapons.
 
  • #45
I'd say most technology concerned people when it was invented, because of its potential to be used as a weapon. So this is nothing new.
 
  • #46
Grands said:
I was speaking about the people who run the countries that have those weapons.

I worded my response poorly; I was thinking of the leadership and advisers of those countries.
 
  • #47
HAYAO said:
I'd say most technology concerned people when it was invented, because of its potential to be used as a weapon. So this is nothing new.
I would say that most technology was invented for military use and then adapted for civilian use.
 
  • #48
gleem said:
Lethal Autonomous Weapon Systems (LAWS) are considered the third weapons revolution, preceded by gunpowder and nuclear weapons.
...
Apparently there is a UN Convention on Certain Conventional Weapons that is looking into the possibility of drafting a treaty to outlaw LAWS. For a review of a meeting in the spring of 2016, and of some then-current capabilities and programs, both government and civilian, working on autonomous drones, see the somewhat lengthy report https://www.buzzfeed.com/sarahatopo...-killer-robots?utm_term=.qrK829mxr#.queWXvKzo, and remember this was two years ago.
Admittedly I only read part of the article since it is long, but I'm not seeing the logic here. How is LAWS logically different - and worse - than a land mine, such that it represents either a revolution or a threat worthy of being banned?

Weapons have been banned in the past largely due to excessive cruelty (chemical weapons, phosphorus), not for being good at their jobs. Or, more commonly, their use has been restricted in obvious ways: you can't use them where they put civilians at risk.

Technology in warfare over the past 50 years has, primarily, brought us one thing: war is safer. Fewer people are dying, on both sides of conflicts. Heck, I can foresee a future where we send robots to fight other robots and humans aren't even put at risk. Isn't that a good thing, not a bad thing?

Simply: why is this bad and can anyone explain why it should be banned?

Note: this question applies to war only, not terrorism.
 
  • #49
russ_watters said:
Admittedly I only read part of the article since it is long, but I'm not seeing the logic here. How is LAWS logically different - and worse - than a land mine, such that it represents either a revolution or a threat worthy of being banned?

Weapons have been banned in the past largely due to excessive cruelty (chemical weapons, phosphorus). Or, more commonly, their use has been restricted in obvious ways: you can't use them where they put civilians at risk.

Technology in warfare over the past 50 years has, primarily, brought us one thing: war is safer. Fewer people are dying, on both sides of conflicts. Heck, I can foresee a future where we send robots to fight other robots and humans aren't even put at risk. Isn't that a good thing, not a bad thing?

Simply: why is this bad and can anyone explain why it should be banned?

Note: this question applies to war only, not terrorism.

In response to your questions above, let me address the following:

1. As to the difference between LAWS and other weapon technologies (e.g. land mines) -- from what I've read (particularly from Stuart Russell, who has written extensively on this topic), the issue is whether we as humans want questions involving life and death to be decided automatically by machines, as opposed to having humans make the ultimate decisions. If we could articulate explicitly what outcomes we want, and thus explicitly have machines learn our desires, then there would be no problem.

The crux is that we as humans don't always know what we want, nor have we always been good at explicitly specifying what we want in a way that doesn't lead to misunderstandings or unforeseen consequences. If we transfer decisions about life and death over to machines, can we be certain that the decisions made will be in the best interest of humanity? Russell, in a number of YouTube videos (search under "Stuart Russell AI"), makes the analogy of the genie in the bottle, where the genie grants three wishes and the human wants the last wish to be the ability to undo the previous two.
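To make this misspecification worry concrete, here is a purely illustrative toy sketch (hypothetical names and scoring, not any real targeting system): an objective that only says "engage anything with a hostile signature" quietly permits outcomes the designer never intended, until the unstated constraint is written down explicitly.

```python
# Purely illustrative toy example of objective misspecification;
# hypothetical data and scoring, not any real system.
from dataclasses import dataclass

@dataclass
class Contact:
    hostile_signature: bool   # what the sensor model reports
    is_civilian: bool         # ground truth the stated objective never mentions

def stated_objective(c: Contact) -> float:
    # The objective as literally specified: reward engaging "threats".
    return 1.0 if c.hostile_signature else 0.0

def intended_objective(c: Contact) -> float:
    # What we actually meant, once the unstated constraint is made explicit.
    if c.is_civilian:
        return float("-inf")   # never acceptable
    return stated_objective(c)

contacts = [
    Contact(hostile_signature=True, is_civilian=False),
    Contact(hostile_signature=True, is_civilian=True),   # a misread civilian
]

# A system optimizing the stated objective engages both contacts;
# the intended objective engages only the first.
print([stated_objective(c) > 0 for c in contacts])     # [True, True]
print([intended_objective(c) > 0 for c in contacts])   # [True, False]
```

The point is not the code, but that the "stated" and "intended" objectives differ only on the cases nobody thought to write down.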

2. You state that technology in warfare over the past 50 years has made war safer. But can the relative lack of conflict really be attributed primarily to improvements in technology? Yes, the fear of nuclear annihilation has made some nations more reluctant to use, say, nuclear weapons, but that is hardly what one would call a positive example of technology leading to lower mortality in conflict. Also, one can argue that the international institutions built after World War II (the United Nations, including the Security Council, and the conflict-resolution mechanisms built upon them), as well as the prominence of the United States as the pre-eminent superpower and "world's police", might have played a much more important role in why fewer people are dying.

3. Finally, circling back to point #1, you point to a future where robots fight other robots and humans aren't put at risk. Can we really be certain that robots will understand not to put humans at risk? After all, if a robot is programmed to neutralize any threat to the nation, why not, say, kill every single human on the opposing side? This is pertinent because, in essence, robots could make life-and-death decisions without direct human input -- can or should we trust robots to make such decisions for us?
 
  • #50
StatGuy2000 said:
1. As to the difference between LAWS and other weapon technologies (e.g. land mines) -- from what I've read (particularly from Stuart Russell, who has written extensively on this topic), the issue is whether we as humans want questions involving life and death to be decided automatically by machines, as opposed to having humans make the ultimate decisions. If we could articulate explicitly what outcomes we want, and thus explicitly have machines learn our desires, then there would be no problem.
Perhaps I should have omitted the first few words of the question: I'm aware that LAWS is different from a land mine in that LAWS can weigh facts and make decisions whereas a land mine is stimulus-response only. My question is: How is this worse?

It may *feel* unsettling to think about a weapon that makes decisions, and I suppose people can pass laws/make rules based on whatever they want, but I would hope we would pass laws based on real risk and logic, not feelings alone. Otherwise, we may pass laws that make things worse instead of better.

Plainly:
A robot that makes poor decisions about what to kill is still better than a land mine that kills everything it comes in contact with.
2. You state that technology in warfare over the past 50 years has made war safer. But can the relative lack of conflict really be attributed primarily to improvements in technology?
You responded to something other than what I said, even after correctly paraphrasing it:

1. Safer war.
2. Less war.

Both are true, but I was referring to the first:
*Actual* wars are themselves less deadly for both sides when more advanced targeting technology is used.

The fact that wars are safer due to technology can be seen in the casualty rates of recent wars. Drilling down into specific examples, probably the biggest is the use of precision-guided bombs and cruise missiles instead of "dumb" bombs. These result in fewer bombing missions, which makes war safer for the pilots, and more accurate bomb strikes, which make war safer for civilians in the war zone. No longer do you have to level an entire city block to destroy one building.

Now it may also be true that there are fewer wars in part because of technology, but that isn't what I was after because it is a secondary effect.
3. Finally, circling back to point #1, you point to a future where robots fight other robots and humans aren't put at risk. Can we really be certain that robots will understand not to put humans at risk? After all, if a robot is programmed to neutralize any threat to the nation, why not, say, kill every single human on the opposing side?
If there are no humans in the warzone, the robots cannot kill any humans. We're already doing our half: our drone pilots are often halfway around the world from the battles they are fighting in. There is no possibility of them being killed in the battles they are fighting. We are not far from drone vs drone combat and the next step would be robot vs robot combat.

We do already have that in some sense, with LAWS-type weapons on ships attacking other autonomous weapons such as cruise missiles. There is no reason why humans couldn't be taken off the ships (unmanned ships are at least on the drawing board), and then you have robots attacking robots, and humans directing them, at least in part, from air-conditioned offices thousands of miles away.

In terms of the laws of war, indiscriminate killing of civilians is already illegal, so it seems to me that banning LAWS-type weapons is cutting off your nose to spite your face: eliminating something better because it may be misused and turned into something worse...that is already illegal.

E.g.: landmines are restricted because the potential harm is an inherent feature of the technology. LAWS would be banned because of misuse or malfunction, even though the intended use of the technology is a societal benefit.
 
  • #51
While we sit here debating the dangers of LAWS, the military, at least in the US, may be divided on its near-term implementation. The Navy has begun testing and deployment of the Sea Hunter class of anti-submarine autonomous unmanned surface vessels, but at the same time has reduced the mission goals of the X-47B autonomous aircraft from surveillance and strike to a refueling role, even though it has proved its capabilities. The Air Force does not appear to have much interest in autonomous aircraft programs, instead investing heavily in the manned F-35 and trying to resurrect the F-22, while DARPA is playing with drone swarms. There is some concern that the best and brightest AI experts and roboticists are working for Google, Amazon, Facebook and Apple. Commercial players may make the application breakthroughs before, or for, the military.
 
  • #52
gleem said:
While we sit here debating the dangers of LAWS, the military, at least in the US, may be divided on its near-term implementation. The Navy has begun testing and deployment of the Sea Hunter class of anti-submarine autonomous unmanned surface vessels, but at the same time has reduced the mission goals of the X-47B autonomous aircraft from surveillance and strike to a refueling role, even though it has proved its capabilities. The Air Force does not appear to have much interest in autonomous aircraft programs, instead investing heavily in the manned F-35 and trying to resurrect the F-22, while DARPA is playing with drone swarms. There is some concern that the best and brightest AI experts and roboticists are working for Google, Amazon, Facebook and Apple. Commercial players may make the application breakthroughs before, or for, the military.
Unfortunately, military brass are still humans and subject to normal human failings; namely, putting themselves above the mission. A 24-year-old quote from "[Lockheed] Skunk Works" about the Sea Shadow stealth ship:
"Our ship had a 4-man crew...By contrast, a frigate doing a similar job had more than 300 crewmen...A future commander resented having only a 4-man crew to boss around...in terms of an officer's future status and promotion prospects, it was about as glamorous as a tugboat."

...which isn't to say that all of these decisions are made by the military: some are made by Congress for reasons also having little to do with warfighting capability.
 
  • #53
Grands said:
I would say that most technology was invented for military use and then adapted for civilian use.
This is simply not true.
 
  • #54
russ_watters said:
Technology in warfare over the past 50 years has, primarily, brought us one thing: war is safer.

Really? I thought technology has made wars between nation states almost impossible because, you know, all the nations involved risk ending up as piles of radioactive rubble.

russ_watters said:
Fewer people are dying, on both sides of conflicts.
That's because direct conflicts between large, roughly equally matched nation states have become very uncommon.

russ_watters said:
Heck, I can foresee a future where we send robots to fight other robots and humans aren't even put at risk. Isn't that a good thing, not a bad thing?
Well, as long as nuclear weapons are here, you can't attack a country with an army of robots even if that country's robots are less advanced and can easily be beaten by yours, because the attacked country can resort to the use of nuclear weapons in desperation.

If a war breaks out between your country and a country without nuclear weapons, then that country will definitely put humans on the front lines after exhausting all its robot soldiers, so you end up killing people anyway.

Lastly, if a war breaks out between countries that are equally matched in their military robot technology and nuclear weapons, a robot-vs-robot war will be good entertainment for all the people to watch. Unfortunately, it will be a huge waste of resources; the respective countries could avoid this by building the best chess engines they possibly can and competing against each other, which will also be good entertainment for chess players and other people in general.

russ_watters said:
Admittedly I only read part of the article since it is long, but I'm not seeing the logic here. How is LAWS logically different - and worse - than a land mine, such that it represents either a revolution or a threat worthy of being banned?

Weapons have been banned in the past largely due to excessive cruelty (chemical weapons, phosphorus), not for being good at their jobs. Or, more commonly, their use has been restricted in obvious ways: you can't use them where they put civilians at risk.

Simply: why is this bad and can anyone explain why it should be banned?

Landmines are mostly good at stopping an invading force, i.e. they are defensive in nature. LAWS can be used offensively to capture territory.

russ_watters said:
Note: this question applies to war only, not terrorism.
In the future, the chances of a direct war between nation states are only going to decrease, so whether or not you like it, conflicts between nations and terrorist groups are the only conflicts that are likely to continue. If the technology proliferates, then terrorists will find it easier to attack nation states by replacing suicide bombers and shooters with agile, fast-moving robots.
 
  • #55
russ_watters said:
Perhaps I should have omitted the first few words of the question: I'm aware that LAWS is different from a land mine in that LAWS can weigh facts and make decisions whereas a land mine is stimulus-response only. My question is: How is this worse?

It may *feel* unsettling to think about a weapon that makes decisions, and I suppose people can pass laws/make rules based on whatever they want, but I would hope we would pass laws based on real risk and logic, not feelings alone. Otherwise, we may pass laws that make things worse instead of better.

If there are no humans in the warzone, the robots cannot kill any humans. We're already doing our half: our drone pilots are often halfway around the world from the battles they are fighting in. There is no possibility of them being killed in the battles they are fighting. We are not far from drone vs drone combat and the next step would be robot vs robot combat.

We do already have that in some sense, with LAWS-type weapons on ships attacking other autonomous weapons such as cruise missiles. There is no reason why humans couldn't be taken off the ships (unmanned ships are at least on the drawing board), and then you have robots attacking robots, and humans directing them, at least in part, from air-conditioned offices thousands of miles away.

In terms of the laws of war, indiscriminate killing of civilians is already illegal, so it seems to me that banning LAWS-type weapons is cutting off your nose to spite your face: eliminating something better because it may be misused and turned into something worse...that is already illegal.

E.g.: landmines are restricted because the potential harm is an inherent feature of the technology. LAWS would be banned because of misuse or malfunction, even though the intended use of the technology is a societal benefit.

The very points you raise about the potential benefits of LAWS are addressed by Stuart Russell in his World Economic Forum article below.

https://www.weforum.org/agenda/2016/01/robots-in-war-the-next-weapons-of-mass-destruction

I will quote his responses in the above article which address your points, as he explains this far better than I can:

"We might think of war as a complete breakdown of the rule of law, but it does in fact have its own legally recognized codes of conduct. Many experts in this field, including Human Rights Watch, the International Committee of the Red Cross and the UN Special Rapporteur (http://wilpf.org/killer-robots-and-the-human-rights-council/), have questioned whether autonomous weapons could comply with these laws. Compliance requires subjective and situational judgements that are considerably more difficult than the relatively simple tasks of locating and killing – and which, with the current state of artificial intelligence, would probably be beyond a robot.

Even those countries developing fully autonomous weapons recognize these limitations. In 2012, for example, the US Department of Defense issued a directive stating that such weapons must be designed in a way that allows operators to “exercise appropriate levels of human judgement over the use of force”. The directive specifically prohibits the autonomous selection of human targets, even in defensive settings."

And for your specific claim that armies of robots will keep humans out of the loop entirely in warfare:

"But some robotics experts (http://www.unog.ch/80256EDD006B8954/(httpAssets)/54B1B7A616EA1D10C1257CCC00478A59/%24file/Article_Arkin_LAWS.pdf) think that lethal autonomous weapons systems could actually reduce the number of civilian wartime casualties. The argument is based on an implicit ceteris paribus assumption that, after the advent of autonomous weapons, the specific killing opportunities – numbers, times, locations, places, circumstances, victims – will be exactly the same as would have occurred with human soldiers.

This is rather like assuming cruise missiles will only be used in exactly those settings where spears would have been used in the past. Obviously, autonomous weapons are completely different from human soldiers and would be used in completely different ways – for example, as weapons of mass destruction.

Moreover, it seems unlikely that military robots will always have their “humanitarian setting” at 100%. One cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators and terrorist groups are so good at following the rules of war that they will never use robots in ways that violate these rules."

The article also goes to some length to describe how the superiority of autonomous weapons relates to their scalability, with destructive properties similar to biological weapons, tipping the balance away from legitimate states and toward terrorists, criminals and other non-state actors. There is also the issue of strategic instability (basically, LAWS must be highly adaptable to be useful, but this very adaptability makes them highly unpredictable and difficult to control).
 
  • #56
Monsterboy said:
Really? I thought technology has made wars between nation states almost impossible because, you know, all the nations involved risk ending up as piles of radioactive rubble.
Like Statguy, you're responding to point 2, and I was making point 1.
That's because direct conflicts between large, roughly equally matched nation states have become very uncommon.
Point #1 is true even for wars between a superpower and a non-superpower, which is what most wars are. This point is closer but still more focused on #2 than really addressing #1 directly.

The way we fought in the Gulf and Afghanistan was different from the way we fought in Vietnam -- and the transition began in Vietnam, roughly in the middle of it. This is not controversial:
Although the Norden bombsight in World War II increased the accuracy of dropped munitions, bombing had more-or-less remained the same art for the previous 50 years: a “shotgun” approach to accuracy that relied on mass and coverage. The measure of the accuracy of a bomb, known as its CEP or “circular area of probability”, went from hundreds of feet to tens of feet or less. According to the Department of Defense, the Air Force’s use of laser-guided systems would explode, because of the benefits of “fewer bombs, fewer airplanes, lower collateral damage, and increased pinpoint destruction accuracy.”
[emphasis added]
https://understandingempire.wordpress.com/2014/01/15/the-laser-age-in-the-vietnam-war/

You can count civilian deaths or military deaths vs size of forces employed and durations. It is safer to participate in a war on either side where smart weapons are used than it was 50 years ago to participate in a war on either side where smart weapons were not used.
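To put rough numbers on that, here is a back-of-the-envelope sketch (purely illustrative figures and the standard circular-normal CEP model, not data from any specific campaign) showing how shrinking the CEP collapses the number of weapons needed against a single point target:

```python
import math

def single_shot_pk(lethal_radius_ft: float, cep_ft: float) -> float:
    # Probability one weapon lands within its lethal radius of the aim point,
    # assuming a circular normal impact pattern whose median miss distance is the CEP.
    return 1.0 - 0.5 ** ((lethal_radius_ft / cep_ft) ** 2)

def weapons_needed(lethal_radius_ft: float, cep_ft: float, desired_pk: float = 0.9) -> int:
    # Independent shots required to reach the desired overall kill probability.
    p = single_shot_pk(lethal_radius_ft, cep_ft)
    return math.ceil(math.log(1.0 - desired_pk) / math.log(1.0 - p))

# Illustrative numbers only: a 60 ft lethal radius against a point target.
for cep_ft in (300, 30):   # "dumb" bombing vs. guided munitions
    print(f"CEP {cep_ft:>3} ft: single-shot Pk = {single_shot_pk(60, cep_ft):.2f}, "
          f"weapons for 90% Pk = {weapons_needed(60, cep_ft)}")
```

With a CEP of a few hundred feet, this toy model says you need dozens of bombs (and sorties) per target; with a CEP of tens of feet, one. That is the "fewer bombs, fewer airplanes, lower collateral damage" effect described in the quote above.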
Landmines are mostly good at stopping an invading force, i.e. they are defensive in nature. LAWS can be used offensively to capture territory.
Agreed. So what is your point? Are you saying this makes LAWS worse somehow? Why? Do you understand why landmines are considered problematic?
In the future, the chances of a direct war between nation states are only going to decrease, so whether or not you like it, conflicts between nations and terrorist groups are the only conflicts that are likely to continue. If the technology proliferates, then terrorists will find it easier to attack nation states by replacing suicide bombers and shooters with agile, fast-moving robots.
I agree with the first part and disagree with the second, at least in the context of the OP. Terrestrial robots are complicated, technologically advanced and expensive, all things problematic for terrorists. The most successful terrorist attack in history used box cutters. Most terrorist attacks use crude improvised explosives. If sophisticated conventional weapons are outside their capabilities, then advanced weapons are just further outside their capabilities.

The Las Vegas hotel/concert shooting was one guy with a cache of small arms. It would have been much more effective if he had used a small, mounted machine gun. Much more effective still if he had used an automated LAWS system. So why didn't he?

People fear what "might" happen, and I'm asking you guys to put some real, logical thought into why, if it is so easy, it hasn't happened yet. The answer really is simple: imagining something to fear is easy. Actually doing it isn't necessarily easy.
 
  • #57
StatGuy2000 said:
The very points you raise about the potential benefits of LAWS are addressed by Stuart Russell in his World Economic Forum article below.
From the article:
Imagine turning on the TV and seeing footage from a distant urban battlefield, where robots are hunting down and killing unarmed children. It might sound like science fiction – a scene from Terminator, perhaps. But unless we take action soon, lethal autonomous weapons systems – robotic weapons that can locate, select and attack human targets without human intervention – could be a reality. [emphasis added]
[sigh] I can imagine lots of things. I can imagine bubblegum trees and lollypop fairies. But just because you can imagine something doesn't make it reasonable. Movies are becoming so visually realistic that people are losing track of the line between imagination and reality.

Nowhere in the article does he discuss how robots/computers/ai are actually used today in war and how - realistically - they might be intended to be used in the future. Basing policy on fantasy instead of reality is a sure recipe for bad policy.

I can think of two relatively recent examples where LAWS could have or should have been employed to improve the outcome of an engagement:

The USS Stark was heavily damaged by an Iraqi Exocet missile. The ship was equipped with a LAWS-type system, the Phalanx CIWS, which was not enabled during the attack due to human error (it is generally off and is only turned on when needed).

The USS Vincennes shot down an Iranian airliner because of a series of "tunnel vision" type errors that led the crew to believe it was a fighter flying an attack profile. I believe our AEGIS warships are capable of autonomous warfighting operation and a computer certainly would not have made that series of mistakes.

Computers can be programmed to do evil, but they are largely immune to poor decision-making. I think in general the expanded use of computers in decision-making would improve decision making further, not reduce it.

Again: people fear ceding control to computers in all sorts of situations, but these fears are not rational and not borne out by case studies. Problems due to poor computer decision-making are relatively rare and the closest thing is usually poor human-computer interface causing the humans to make the bad decisions (several recent plane crashes were caused in part by this).

Wait, my imagination is tingling: maybe we should fear airplane autopilots going rogue and purposely crashing airplanes into buildings too?

The article gets somewhat better as it goes along, but not enough (while it is common for clickbait, it is not good journalistic practice to start an article ridiculously and then improve: you set your first impression and it is hard to break):
article said:
But some robotics experts, such as Ron Arkin, think that lethal autonomous weapons systems could actually reduce the number of civilian wartime casualties. The argument is based on an implicit ceteris paribus assumption that, after the advent of autonomous weapons, the specific killing opportunities – numbers, times, locations, places, circumstances, victims – will be exactly the same as would have occurred with human soldiers.

This is rather like assuming cruise missiles will only be used in exactly those settings where spears would have been used in the past.
It is indeed the logic, and it is imperfect but real logic with real historical precedent, as I pointed out above: smart weapons such as cruise missiles and laser/GPS-guided bombs replaced saturation bombing and reduced military and civilian casualties in those situations as a result. That's a fact. We don't have to imagine it. It actually happened.
article said:
Moreover, it seems unlikely that military robots will always have their “humanitarian setting” at 100%. One cannot consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better...
This is more of the bias that causes him to miss the point. He sees war as simply evil and as a result is incapable of getting inside the minds of actual soldiers. Good nations doing evil isn't being claimed by proponents of LAWS. What is claimed - what actually happens - is that robots (computers) make better and more accurate decisions than humans, and as a result the good guys will fight cleaner/safer wars, as is already happening.
...while at the same time claiming that rogue nations, dictators and terrorist groups are so good at following the rules of war that they will never use robots in ways that violate these rules.
And nobody claims that either. But this is just the "all weapons are bad so we should ban all weapons" logic that doesn't work anywhere. It doesn't work any better for nukes than it works for knives. You cannot stop technology. You cannot stop weapons from being developed and used except by making better weapons to win the wars against the bad guys in order to convince them they shouldn't use their weapons; as @Monsterboy correctly pointed out, the technological/capabilities disparity is likely part of the reason the number and scope of wars is decreasing. In that sense, robot warfare would simply increase that disparity, further reducing the number and severity of wars.

Major world powers stopped using chemical weapons because we recognize they are bad. Syria used them recently because they correctly predicted they wouldn't be punished for using them. And chemical weapons are low-tech and relatively cheap. Worrying about rogue nations using advanced killer robots when they are still using 1910s technology makes little sense.

So again: unlike chemical weapons or land mines, which are incapable of identifying and sparing civilians, LAWS-type systems have no inherent feature that makes them dangerous to civilians. Their danger is the same as any weapon's danger: that a rogue will use it to harm civilians. Thus they are already covered adequately under existing law, and so need no special laws to cover them.
 
  • #58
Just a note in case it isn't obvious:
We are in large part talking past each other. My position is that this technology will continue to enable *us* to fight wars better. Without responding to my position, the responses I'm getting are "this will enable our adversaries to fight wars worse".

Even if the second position is true, it does not address, much less negate, the first position.
 
  • #59
Sorry for the multiple posts, but I feel like I need to point out that this argument from the article is logically void (empty):
But some robotics experts, such as Ron Arkin, think that lethal autonomous weapons systems could actually reduce the number of civilian wartime casualties. The argument is based on an implicit ceteris paribus assumption that, after the advent of autonomous weapons, the specific killing opportunities – numbers, times, locations, places, circumstances, victims – will be exactly the same as would have occurred with human soldiers.

This is rather like assuming cruise missiles will only be used in exactly those settings where spears would have been used in the past. Obviously, autonomous weapons are completely different from human soldiers and would be used in completely different ways – for example, as weapons of mass destruction.
The conclusion is just made-up and has no logical relationship with the premise. You can see it easily enough if you flip it over:

As stated (paraphrase):
1. Precision weapons could decrease collateral damage but will increase collateral damage by being used more or worse.

Flipping it:
2a. Indiscriminate weapons could increase collateral damage but will decrease collateral damage by being used less and better.

I wouldn't argue #2 and I doubt the author would either, even though it is the exact inverse of the argument he's making. Instead, I suspect he'd just argue:
2b. Indiscriminate weapons would increase collateral damage.

So in this "logic", all paths lead to more deaths. Thus the "logic" doesn't actually follow, and the purpose of the argument isn't to find a logical conclusion, but to stalemate/handcuff the argument. This is what you get when you have an "all weapons are bad" worldview: you can't look at any other possibility than that any change will make things worse. You can't have a rational discussion with someone who thinks all paths - even those in exact opposite directions - lead to the same place.

And this is not a trivial problem with his argument because you can apply the argument retrospectively, to history and where we are today: Does he really want to go back to the age of carpet-bombing? Does he really believe people will be safer? Does he really not see that wars are safer today?

You guys may think that the issue of whether the enemy gets our technology is all that matters, but the issue of how technology is *actually* used by *our* soldiers/sailors in war needs to be addressed too. I was in the US Navy, stationed on a ship that was equipped with a LAWS system. What is being suggested here would mean removing those systems from ships and increasing the risk of sailors such as myself dying from attacks that could have been defended against.
 
  • #60
russ_watters said:
Like Statguy, you're responding to point 2, and I was making point 1.

OK, let's talk about point 1: safer war.

You are saying that robots will attack robots and all the humans will be completely out of the battleground, right?

Well, this will only happen when all countries possess this technology to roughly the same degree. The reality is that different countries will be at different levels when it comes to developing and using LAWS, so in order to use robots in war (and not kill people) you will have to wait until all other countries catch up with you, or else you will have a lot of blood on your hands, or rather on your robots' hands.

I think I covered this topic here.
Monsterboy said:
If a war breaks out between your country and a (less advanced) country without nuclear weapons, then that country will definitely put humans on the front lines after exhausting all its robot soldiers, so you end up killing people anyway.

If we get to the point where the countries involved in a war are roughly equal in terms of developing LAWS, then wars are going to be much more prolonged and much more damaging to the economy and the environment, as there are no human lives involved and countries can endlessly manufacture these machines. But in a war consisting of human soldiers, there will be huge pressure not to start a war in the first place and to stop the war as soon as possible, because human lives will be at stake along with the damage to the economy and the environment.

russ_watters said:
Agreed. So what is your point? Are you saying this makes LAWS worse somehow? Why? Do you understand why landmines are considered problematic?
Well, a world full of countries with offensive capabilities is less stable than a world where most countries possess mostly defensive capabilities.

I think I understand why landmines are problematic.
1. Once they are deployed, they are difficult to get rid of, both for the host nation and the invading nation.
2. You don't usually get to decide whether they explode or not after they are deployed.

But even given the above points, landmines do not give any country the power to exert force on other countries.

russ_watters said:
Terrestrial robots are complicated, technologically advanced and expensive, all things problematic for terrorists. The most successful terrorist attack in history used box cutters. Most terrorist attacks use crude improvised explosives. If sophisticated conventional weapons are outside their capabilities, then advanced weapons are just further outside their capabilities.

The Las Vegas hotel/concert shooting was one guy with a cache of small arms. It would have been much more effective if he had used a small, mounted machine gun. Much more effective still if he had used an automated LAWS system. So why didn't he?

People fear what "might" happen, and I'm asking you guys to put some real, logical thought into why, if it is so easy, it hasn't happened yet. The answer really is simple: imagining something to fear is easy. Actually doing it isn't necessarily easy.

The threat of terrorists using such robots is not an immediate one, nor is it in the near future, I agree. Right now even major powers have yet to use fully autonomous robots. My point is that it is only after autonomous military robots become a regular thing that terrorists can get their hands on them. If you think about it, building such a robot will be easier than building a nuclear weapon, because it won't need any of the "hard to get" materials that are required to build nukes.
 
  • #61
If I were a terrorist, I wouldn't be stupid enough to carry out terror attacks on civilians using smuggled/stolen autonomous military robots. For one, they would be easily countered by law enforcement and counter-terrorist military units, because they basically have the same, or even better. And two, it is not worth the effort and money of smuggling such a thing, only for it to be destroyed for sure by counter-terrorists.

Sure, you can probably kill more with autonomous military robots than by spraying your assault rifle with your friends, but bomb attacks are more efficient, simpler, easier to hide, and cheaper (and can be homemade).
 
  • #62
HAYAO said:
I wouldn't be stupid enough to carry out terror attacks on civilians using smuggled/stolen autonomous military robots. For one, they would be easily countered by law enforcement and counter-terrorist military units, because they basically have the same, or even better.
Then you are not an ambitious terrorist. Look at the Taliban: the US and Afghan militaries are better equipped in every way, but it has still proven quite difficult to eliminate the Taliban. Why is that? Because such terrorists are not just thinking of blowing up buildings or cars; that is just their way of "sending a message" or "spreading fear". Their real goal is to occupy territory and enforce their rules over other people. Combat robots can be a good tool for a guerrilla army: they can attack where they are least expected, cause a lot of damage and run away (i.e. hit-and-run tactics) before being taken down by the "superior" counter-terrorist forces. Even if they are "beaten" by counter-terrorist forces, it wouldn't matter to them. Why would it? Robots can be sent on suicidal missions just like humans.
 
  • #63
Monsterboy said:
Then you are not an ambitious terrorist. Look at the Taliban: the US and Afghan militaries are better equipped in every way, but it has still proven quite difficult to eliminate the Taliban. Why is that? Because such terrorists are not just thinking of blowing up buildings or cars; that is just their way of "sending a message" or "spreading fear". Their real goal is to occupy territory and enforce their rules over other people. Combat robots can be a good tool for a guerrilla army: they can attack where they are least expected, cause a lot of damage and run away (i.e. hit-and-run tactics) before being taken down by the "superior" counter-terrorist forces. Even if they are "beaten" by counter-terrorist forces, it wouldn't matter to them. Why would it? Robots can be sent on suicidal missions just like humans.
This misses my point. I am talking about terrorist attacks on civilians. Insurgent attacks are different. But if you want to argue this, then fine.

I am by no means military personnel, but it's not that hard to imagine the difficulties of fighting the Taliban and insurgents from the military perspective, unless you are ready for genocide (as in some countries).

The reason it is difficult to wipe out the Taliban is the rules of engagement. One cannot be certain whether someone is an insurgent or not unless they either engage you first or you have good proof and intelligence that they are. This is especially true when firearms can be hidden rather easily in real combat, and people can camouflage themselves as normal civilians. But military robots? No, once you see military robots in a place they aren't supposed to be, it's safe to judge that they are enemies.
 
  • #64
In the near term, say the next 10 years, it is very unlikely that we will have autonomous anthropomorphic robots engaging humans or one another. AI is, and will continue to be, applied to weapon systems such as ships, drones and vehicles, or to perimeter defense systems for surveillance and reconnaissance, and will probably be used offensively and autonomously up until it engages the enemy, after which humans will probably pull the trigger. I can see in the near term the possibility of using truly autonomous weapons where, because of the situation, such as a forward attacking force in which only enemy forces should be present, there is a reduced need to verify the identity of a target: sending a drone into a cave containing possible combatants, or enforcing curfews with "non-lethal" force. In fact, a Texas company called Chaotic Moon has developed an autonomous surveillance drone (CUPID) that can taser a trespasser who refuses to leave. The company Taser is looking into developing similar devices for law enforcement.
 
  • #65
HAYAO said:
This misses my point. I am talking about terrorist attacks on civilians. Insurgent attacks are different. But if you want to argue this, then fine.

I was talking about attacks on civilians as well.
HAYAO said:
I am by no means military personnel, but it's not that hard to imagine the difficulties of fighting the Taliban and insurgents from the military perspective, unless you are ready for genocide (as in some countries).
That's exactly why terrorists + autonomous robots = even more trouble.

HAYAO said:
The reason it is difficult to wipe out the Taliban is the rules of engagement. One cannot be certain whether someone is an insurgent or not unless they either engage you first or you have good proof and intelligence that they are. This is especially true when firearms can be hidden rather easily in real combat, and people can camouflage themselves as normal civilians. But military robots? No, once you see military robots in a place they aren't supposed to be, it's safe to judge that they are enemies.
What makes you think camouflaging military robots is going to be so difficult?

https://www.theaustralian.com.au/news/world/scientists-build-robot-that-can-crawl-camouflage-itself-and-hide/news-story/4987d2f99fa111e5745f6aaba12513d9?sv=b493f1dd644fef0aec46f0527704f228
 
  • #66
Monsterboy said:
I was talking about attacks on civilians as well.

That's exactly why terrorists + autonomous robots = even more trouble. What makes you think camouflaging military robots is going to be so difficult?

https://www.theaustralian.com.au/news/world/scientists-build-robot-that-can-crawl-camouflage-itself-and-hide/news-story/4987d2f99fa111e5745f6aaba12513d9?sv=b493f1dd644fef0aec46f0527704f228
First, I think you are watching way too many movies and games. Second, you underestimate so many things.

These types of weapons are extremely pricey, especially if you are going to use camouflaging technologies. We are talking about millions of dollars per unit. Realistically speaking, how are they going to smuggle such a weapon? How many times have terrorists smuggled fighter/attack jets and used them effectively? How many times have terrorists smuggled Humvees and used them effectively? Tanks? Can they maintain them? Can they utilize their full potential?

Smuggling weapons can only be done with things that are small enough, easily and widely produced, and cheap enough for third-world nations to manufacture. Rifles, rocket-propelled grenade launchers, and light and heavy machine guns are basically the limit in size and price for smuggling in reasonable numbers. If you think one single good weapon can change the entire tide of a battle, you are terribly mistaken. You need tens and hundreds, sometimes even thousands, to make it work.

You said they can be used for suicide missions. Do you really think terrorists are going to send a machine on a suicide mission when they had such a hard time smuggling even one unit? That would be a very unwise decision and certainly not worth the cost.

Meanwhile, you have bombs that can be handmade. You have rifles that are available in many places around the world. You have people you can use for suicide missions. None of these cost much, and they are practically much more useful than smuggled mil-spec military weapons.
AIs are much better decision makers than humans. As russ_watters said in his posts, a lot of military errors during combat arise from poor decision-making on the human part. As such, AIs are better combatants in terms of identifying the enemy and deciding whether or not to engage them. Using autonomous robots for guerrilla warfare in a world where they have become widely available is going to be much less effective than it is now without them. Guerrilla warfare works because there is a huge gap in preparation between the one conducting guerrilla warfare and the one being engaged. It exploits the downsides of human combatants: the inability to detect enemy presence in complex terrain and environments, poor decision-making, and the lack of parallel thinking and analysis, especially under stress. However, autonomous robots/AIs can significantly close this gap, because the defender has more ways to detect and analyze an enemy before and after being engaged.
 
  • #67
HAYAO said:
First, I think you are watching way too many movies and games. Second, you underestimate so many things.
I wish I had enough time to do that. If you read my previous response to russ and contemplate it, you will find that I underestimate nothing. In fact, you and others underestimate and misunderstand what terrorists are, what they aim for, and their relationship with nation states.

HAYAO said:
These types of weapons are extremely pricey, especially if you are going to use camouflaging technologies. We are talking about millions of dollars per unit. Realistically speaking, how are they going to smuggle such a weapon? How many times have terrorists smuggled fighter/attack jets and used them effectively? How many times have terrorists smuggled Humvees and used them effectively? Tanks? Can they maintain them? Can they utilize their full potential?

I repeat: terrorists are not just about blowing up cars and buildings. There is a complicated relationship between nation states and terrorist groups; there are nations which are known to share technology and money with terrorists. For example, the rebels in Ukraine are financed and armed by Russia; that is how they managed to bring down a passenger plane, remember that?

Pakistan is believed to finance and arm terrorists who attack targets in Afghanistan (while being an ally of the US) and in the Indian-controlled disputed region called Kashmir. If you go to the Middle East, the situation is even worse: several countries and many terrorist groups are involved in the violence, and many of the countries in this region are known to support and arm certain terrorist groups in order to cause trouble in other countries.

HAYAO said:
Smuggling weapons can only be done with things that are small enough, easily and widely produced, and cheap enough for third-world nations to manufacture. Rifles, rocket-propelled grenade launchers, and light and heavy machine guns are basically the limit in size and price for smuggling in reasonable numbers. If you think one single good weapon can change the entire tide of a battle, you are terribly mistaken. You need tens and hundreds, sometimes even thousands, to make it work.

You said they can be used for suicide missions. Do you really think terrorists are going to send a machine on a suicide mission when they had such a hard time smuggling even one unit? That would be a very unwise decision and certainly not worth the cost.

Meanwhile, you have bombs that can be handmade. You have rifles that are available in many places around the world. You have people you can use for suicide missions. None of these cost much, and they are practically much more useful than smuggled mil-spec military weapons.

I am afraid all these points were already covered in my previous response to russ here.
Monsterboy said:
The threat of terrorists using such robots is not an immediate one, nor is it in the near future, I agree. Right now even major powers have yet to use fully autonomous robots. My point is that it is only after autonomous military robots become a regular thing that terrorists can get their hands on them. If you think about it, building such a robot will be easier than building a nuclear weapon, because it won't need any of the "hard to get" materials that are required to build nukes.

HAYAO said:
AIs are much better decision makers than humans. As russ_watters said in his posts, a lot of military errors during combat arise from poor decision-making on the human part. As such, AIs are better combatants in terms of identifying the enemy and deciding whether or not to engage them.
What makes you think these robots will be concerned about their own survival? They can choose to attack even if they won't make it out in one piece. Robots can be made suicidal if cornered or damaged beyond recovery. Why is this hard to understand?

Did the gunmen who attacked Paris and Mumbai plan to get out alive? No, they wanted to cause as much damage as possible. Robots are capable of doing the same.

HAYAO said:
Guerrilla warfare works because there is a huge gap in preparation between the one conducting guerrilla warfare and the one being engaged. It exploits the downsides of human combatants:
Yes, I know what guerrilla warfare means.
That's why they are good weapons for attacking civilian targets and relatively weak military ones.
 
  • #68
Monsterboy said:
My point is that it is only after autonomous military robots become a regular thing that terrorists can get their hands on them.
I think you have some misconceptions regarding the realistic issues.

The price tag on the supposed 'military grade autonomous robots' will be quite high, since such stuff is expected to meet the standards of the military, at least at the level of a common soldier: all terrain, all weather, durable, low maintenance, long operation time, and, what is most important these days, able to satisfy all the armchair generals, journalists and human rights activists of the world at the same time.

But practically none of that matters for terrorists and/or for the neglected category of 'idiots'. So, the debate here should actually be about two different questions:
  • What about autonomous weapons which can be accepted into the military?
  • What about cheap, crude, homemade replicas/experiments/attempts at half- or fully autonomized weapons and weapon platforms?
 
  • #69
Rive said:
The price tag on the supposed 'military grade autonomous robots' will be quite high, since such stuff is expected to meet the standards of the military, at least at the level of a common soldier: all terrain, all weather, durable, low maintenance, long operation time, and, what is most important these days, able to satisfy all the armchair generals, journalists and human rights activists of the world at the same time.

Yes, the idea in this thread of the people who support LAWS is that we can completely remove humans from the battleground and also avoid civilian casualties, i.e. it will be a safe war. This means mass manufacturing of these robots; even if a single robot can replace 2 or 3 soldiers, we are going to have a lot of robots. If the price tag doesn't allow this to happen, then the whole point of a safe war is eliminated and we are going to have humans on the battleground.

I assumed that a "safe war" is only going to happen when or if the world's military powers find it affordable to replace all their human soldiers with autonomous robots; am I wrong? That means the cost of maintaining an army, navy and air force of autonomous robots has fallen to, or below, the cost of maintaining a human military force. When this happens, why would it be impossible for terrorists, who don't even maintain a regular army, to acquire these weapon systems, even if they are not the best available, with or without state support?

You yourself say that
Rive said:
...practically none of that matters for terrorists and/or for the neglected category of 'idiots'.

This means that at some point in the future, when a "safe war" can happen, terrorists, who don't have to satisfy all the requirements you stated, are going to spend much less money on their robots than major military powers spend on theirs.

Rive said:
So, the debate here should actually be about two different questions:
  • What about autonomous weapons which can be accepted into the military?
  • What about cheap, crude, homemade replicas/experiments/attempts at half- or fully autonomized weapons and weapon platforms?
At first they might appear to be two different questions, but they are not entirely different, because of the points I have mentioned above. Yes, terrorists are mostly going to use weapon systems which are primitive compared to the ones used by governments (just like now), but that will be enough for them to attack civilian targets and weakly defended military ones, just like they are doing now.
 
  • #70
Monsterboy said:
That means the cost of maintaining an army, navy and air force of autonomous robots has fallen to, or below, the cost of maintaining a human military force
No. The motive behind the migration toward robots is not about unit price, but about the acceptability of unit loss.
It's a shrug to accept the loss of a robot: however, it takes plenty of tears, loss of political support and many inconvenient questions to lose a soldier.

And there is nothing in progress that would suggest the required endless drop in the unit manufacturing costs of advanced military hardware just due to mass production.

Monsterboy said:
At first they might appear to be two different questions
Until somebody manages to smuggle some armed T-72s (or anything comparable) from Iraq to the USA (or, in general, to the West), they ARE two different questions.
 
