This article from IEEE Spectrum presents one of the more rational counterpoints to the recent calls to ban autonomous weapons: We Should Not Ban ‘Killer Robots’ and Here’s Why http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/we-should-not-ban-killer-robots. It builds on and extends my own thoughts on the topic, which I first described here https://plus.google.com/u/0/+MarkBruce/posts/dvsWMFLV9Vi, agreeing that autonomous weapons are a bad thing but that there is no way of stopping their development and likely deployment. It asks whether autonomous weapons on the battlefield might in fact be more ethical than the alternatives, given they may significantly reduce casualties, both combat and, most importantly, civilian, particularly if autonomous weapons can follow far stricter rules of engagement than any human could.
A few quotes:
The barriers keeping people from developing this kind of system are just too low.
What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing.
If autonomous armed robots really do have at least the potential to reduce casualties, aren’t we then ethically obligated to develop them?
Blaming technology for the decisions that we make involving it is at best counterproductive and at worst nonsensical. Any technology can be used for evil, and many technologies that were developed to kill people are now responsible for some of our greatest achievements, from harnessing nuclear power to riding a ballistic missile into space.
Perhaps the biggest surprise for me regarding this issue, and the open letter that sparked the wider awareness and debate, is how polarising it has been, and how many people seem incapable of discussing the issues rationally, preferring instead to assume an air of moral superiority while shouting down anyone who dares to disagree.
Philosophy and Ethics in Autonomous Vehicles
In a closely related area, the behaviour of autonomous vehicles on our roads, I was recently involved in a discussion thread where I mentioned that philosophical “Trolley Problems” (https://en.wikipedia.org/wiki/Trolley_problem) will have to be tackled at some point in the operation of these vehicles. The most basic example: do you flick a switch that diverts a runaway trolley so that it kills one person instead of several?
And, of course, we see this week that a great many people are already working on this problem, with this summary article How to Help Self-Driving Cars Make Ethical Decisions http://www.technologyreview.com/news/539731/how-to-help-self-driving-cars-make-ethical-decisions/. Again, as a simplistic example: if a young child runs onto the road in front of an autonomous passenger vehicle before it can stop, should the vehicle swerve into oncoming traffic to avoid the child?
A few quotes:
Given the number of fatal traffic accidents that involve human error today, it could be considered unethical to introduce self-driving technology too slowly.
If you look at airbags, for example, inherent in that technology is the assumption that you’re going to save a lot of lives, and only kill a few.
As one of the commenters notes, the system becomes even better once all vehicles on the road are autonomous and able to communicate with each other: if a car swerves into oncoming traffic to miss a child, the oncoming traffic will know this and can react instantly, swerving to make room for the vehicle.
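That coordination idea can be sketched in a few lines of code. This is purely illustrative, a minimal sketch assuming a hypothetical vehicle-to-vehicle message format and lane numbering; the message type, field names, and yielding rule are my own inventions, not any real V2V standard.

```python
from dataclasses import dataclass


@dataclass
class SwerveIntent:
    """Hypothetical V2V broadcast: a vehicle announces it is leaving its lane."""
    vehicle_id: str
    target_lane: int  # the lane the swerving vehicle is moving into


def react_to_intent(own_lane: int, intent: SwerveIntent) -> int:
    """Oncoming vehicle yields its lane if the swerving car is moving into it."""
    if intent.target_lane == own_lane:
        return own_lane + 1  # shift one lane over to make room
    return own_lane  # not affected; hold course


# A car in lane 2 hears that another vehicle is swerving into lane 2,
# so it moves to lane 3 immediately, rather than waiting to observe the swerve.
new_lane = react_to_intent(own_lane=2, intent=SwerveIntent("car-17", target_lane=2))
print(new_lane)  # 3
```

The point of the sketch is the timing: the oncoming car reacts to the broadcast intent, not to the physical swerve it would otherwise have to detect with sensors, which is what makes the all-autonomous, all-communicating scenario safer than the mixed one.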
#autonomous #weapons #vehicles
Aye, it is indeed :-/
I'm inclined to agree, but ask: what recourse do we have? Do we hope that a benevolent superpower attains a strategic advantage with autonomous weapons and then uses that advantage to enforce their ethical use on those who would not otherwise comply?
I half-agree with you that it is a weak argument. In one sense I'm predominantly focused on their development and eventual existence, which this ban seeks to prevent and which won't work for that reason. I think you're referring more to their use (not their development), but please correct me if I'm wrong. In that case they might be approached like chemical weapons, or like existing autonomous weapons such as mines, which have bans on their use but have already been developed (and in some small cases continue to be), so any ban on their development clearly didn't work.
But here's the thing: both mines and chemical weapons still get used in small parts of the world, places like the Middle East for example. Hell, even Russia used chemical weapons on its own citizens (as collateral) during that theatre siege a couple of years ago. So in another sense they are still used anyway. You tend to see these things used by smaller states and in very niche areas. And it does make me wonder whether the reason they are not more widely used by modern military powers is that they simply aren't that effective in delivering strategic or tactical advantage in the sorts of conflicts that have been common for some time. E.g. what use are chemical weapons without large concentrations of troops, and what use are mines when you're sending in your own contractors to rebuild the place afterwards? Nuclear weapons are banned too, yet those stockpiles aren't really getting any smaller, remaining ready for dire scenarios, and new states (Iran) still desire them.